• The Virtual World

    Docker Competitor for Windows Server

    A Docker competitor? Sounds interesting. I have always said that this was the future!

    Docker Competitor Puts Windows Server Into Containers

    Virtualization Review – By: Jeffrey Schwartz – “Containers are a hot technology, and the one currently burning brightest is Docker; it’s the default starting point for admins considering using containers in their enterprise. But as with any new tech with a lot of buzz, contenders spring up almost immediately. That’s happening in the Windows world.

    A little-known startup that offers data protection and SQL Server migration tools today released what it calls the first native container management platform for Windows Server, and claims it can move workloads between virtual machines (VMs) and cloud architectures. DH2i’s DX Enterprise encapsulates Windows Server application instances into containers, removing the association between the apps, data and the host operating systems connected to physical servers.

    The Fort Collins, Colo.-based company’s software is a lightweight 8.5 MB server installation that offers a native alternative to Linux-based Docker containers. At the same time, Microsoft and Docker are working on porting their containers to Windows, as announced last fall. In addition to its relationship with Microsoft, Docker has forged ties with all major infrastructure and cloud providers including Google, VMware and IBM. Docker and Microsoft are jointly developing a container technology that will work on the next version of Windows Server.

    In his TechDays briefing last week, Microsoft Distinguished Engineer Jeffrey Snover confirmed that the company will include support for Docker containers in the next Windows Server release, known as Windows vNext.

    DH2i president and CEO Don Boxley explained why he believes DX Enterprise is a better alternative to Docker, pointing to the fact that it’s purely Windows Server-based.

    ‘When you look at a Docker container and what they’re talking about with Windows containerization, those are services that they’re looking at then putting some isolation kind of activities in the future,’ Boxley said. ‘It’s a really important point that Docker’s containers are two containerized applications. Yet there are still going to be a huge amount of traditional applications simultaneously. We’ll be able to put any of those application containers inside of our virtual host and have stop-start ordering or any coordination that needs to happen between the old type of applications and the new and/or just be able to manage them in the exact same way. It forces them to be highly available and extends now to a containerized application.’

    The company’s containers, called ‘Vhosts,’ each have their own logical host name, associated IP addresses and portable native NTFS volumes. The Vhost’s metadata assigns container workload management, while directing the managed app to launch and run locally, according to the company. Multiple Vhosts share a single Windows Server operating system instance and are stacked on either virtual or physical servers. This results in a more consolidated way of managing application workloads and enabling instance portability, Boxley explained.

    Unlike Docker, there are ‘no companion virtual machines running Linux, or anything like that at all,’ Boxley said. ‘It’s just a native Windows application; you load it onto your server and you can start containerizing things right away. And again, because of that universality of our container technology, we don’t care whether or not the server is physical, virtual or running in the cloud. As long as it’s running Windows Server OS, you’re good to go. You can containerize applications in Azure and in Rackspace and Amazon, and if the replication data pipe is right, you can move those workloads around transparently.’ At the same time, Boxley said it will work with Docker containers in the future.

    Boxley said a customer can also transparently move workloads between any VM platform, including VMware, Hyper-V and Xen. ‘It really doesn’t matter because we’re moving the applications, not the machine or the OS,’ he said. Through its management console, it automates resource issues, including contention among containers. The management component also provides alerts and ensures applications are meeting SLAs.

    Asked why it chose Windows Server to develop DX Enterprise, Boxley said he believes it will remain the dominant environment for virtual applications. ‘We don’t think — we know it’s going to grow,’ he said. IDC analyst Al Gillen said that’s partly true, though Linux servers will grow in physical environments. Though he hasn’t tested DX Enterprise, Gillen said the demo looked promising. ‘For customers that have an application that they have to move and they don’t have the ability to port it, this is actually a viable solution for them,’ Gillen said.

    Boxley said the solution is also a viable option for organizations looking to migrate applications from Windows Server 2003, which Microsoft will no longer support as of July 14, 2015, to a newer environment. The software is priced at $1,500 per server core (if running on a VM, it can be licensed via the underlying core), regardless of the number of CPUs. Support, including patches, costs $360 per core per year.
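    The list-price arithmetic above is easy to sketch. This is purely illustrative: the function name and the 8-core server example are made up, and only the per-core figures come from the article.

```python
# Quick sketch of DX Enterprise list pricing as quoted: $1,500 per server
# core for the license, plus $360 per core per year for support/patches.
LICENSE_PER_CORE = 1500       # one-time license, per server core
SUPPORT_PER_CORE_YEAR = 360   # annual support, including patches

def dx_enterprise_cost(cores, support_years=1):
    """License plus support for a single server with the given core count."""
    return cores * (LICENSE_PER_CORE + SUPPORT_PER_CORE_YEAR * support_years)

print(dx_enterprise_cost(8))  # 8-core server, one year of support -> 14880
```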

    Boxley said the company is self-funded, and started out as a Microsoft BizSpark partner.”

    Published by:
  • The Virtual World

    The Future of Virtualization

    This is an interesting article about the future of virtualization. Check it out!

    Interesting Times Ahead in the Server Virtualization World

    ServerWatch – By: Paul Rubens – “VMware is the 800-pound gorilla in the world of server virtualization, dominating the market with more than 50% market share, according to Gartner’s Server Virtualization Tracker from last year.

    And it’s not been averse to flexing its muscles and splashing the cash over the last few years: VMware shelled out a whopping $1.54 billion for mobile device manager AirWatch, $1.26 billion for software-defined networking company Nicira, and an undisclosed but no doubt substantial sum for desktop-as-a-service company Desktone.

    VMware itself was bought by EMC in 2003 for what looks today like a steal: a mere trifling $624 million. But at today’s share price VMware has a market capitalization of about $33 billion. (EMC still owns about 80% of VMware’s stock, which probably depresses the true worth of the company slightly.)

    So VMware is a company that has grown hugely over the last dozen years. In the quarter just ended VMware announced it had made sales of $1.7 billion, bringing annual revenue for 2014 above $6 billion for the first time in the company’s history.

    Size Is All Relative

    Six billion dollars in revenues in one year certainly seems like a lot of moolah. Until you start looking at the really big gorillas in other tech sectors, that is. Then you see that VMware looks more like a 98-pound weakling by comparison.

    Take Microsoft, for example. Microsoft is a big player in the server virtualization space with around a 30% market share (maybe a bit more), but it also has dominant interests in Windows operating systems, productivity software, cloud operations (as does VMware) and much more.

    Microsoft’s second quarter sales to the end of 2014 were $26.47 billion. That’s just for the quarter, not the year, remember. These figures dwarf VMware’s puny $1.7 billion in sales.

    And what about Apple? It’s more of a consumer hardware company than a business software company, but Microsoft and (to an extent) VMware sell hardware too.

    Apple’s quarterly revenues to December 27, 2014 came to $74.6 billion, and net profits were $18 billion — figures that make VMware’s pale into insignificance.

    To put that into some sort of perspective, VMware’s sales for the whole of 2014 came to less than Apple’s sales by January 9th of that year or Microsoft’s by January 23rd. In fact, Apple makes enough profit in six months to buy VMware outright (assuming, of course, that EMC would sell its holding) at the current share price. Heck, it could buy EMC with less than a year’s worth of profits.
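    The "six months of profit" claim checks out on the article's own round numbers (all figures in billions of US dollars):

```python
# Sanity-check: two quarters of Apple net profit versus VMware's market
# capitalization, using the article's figures (billions of US dollars).
apple_quarterly_profit = 18.0   # net profit, quarter ending Dec. 27, 2014
vmware_market_cap = 33.0        # at the then-current share price

six_months_of_profit = 2 * apple_quarterly_profit
print(six_months_of_profit > vmware_market_cap)  # True: 36.0 > 33.0
```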

    So just as AirWatch was a big fish in the MDM pool but just a guppy for a company the size of VMware, let’s not forget that VMware may be a big fish in the server virtualization pool, but Microsoft is a much bigger fish in the same pool, and Apple is a much bigger fish in a different pool altogether.

    Are we suggesting that Apple wants to own VMware? No, there’s nothing to suggest that it does. Or Microsoft? Again, no. That would be highly unlikely: it already has a healthy growing virtualization business based around Hyper-V.

    But might Apple ever want to own VMware? It’s possible. Virtualization technologies and particularly related cloud technologies are likely to become more important for companies such as Apple over the coming years.

    Looking Beyond Microsoft and Apple

    What about one of the other big technology leviathans like Oracle, IBM and so on? Stranger things have certainly happened.

    Would EMC ever sell its stake in VMware? It is something that’s being discussed, in particular by activist investor Elliott Management. And could EMC be bought outright, VMware and all? It’s been considered in the past, by the likes of Hewlett Packard, reportedly.

    So for now VMware may be the 800-pound gorilla in the virtualization market (or the biggest fish in the virtualization pool, if you prefer). But let’s not forget that there are much bigger gorillas out there, and VMware has something they may well decide that they want.

    Interesting times lie ahead in the virtualization world.”

    Published by:
  • The Virtual World

    VMware Acquires Immidio Flex

    I have used the Open Source version of Flex on Citrix XenApp. It is pretty cool.

    What the Immidio Purchase Means for VMware Horizon Customers

    Virtualization Review – By: Sean Massey – “On Feb. 3, VMware Inc. announced it had acquired Immidio, a user profile management company based out of the Netherlands. Immidio’s two main products are Flex+, an agent-based user profile management solution, and AppScriber, an enterprise app store.

    Immidio describes Flex+ as a workspace virtualization solution that provides customization of end-user devices and applications. The settings can be dynamically applied to a user session based on a number of criteria such as Windows version and IP address. It supports both the desktop and server versions of Windows, including Remote Desktop sessions.

    VMware has previously attempted to address user profile management with View Persona Management. This feature provides more options than the Microsoft Roaming Profiles feature, but it’s also very limited. It doesn’t support profile management for applications published from Windows Server, and although VMware supported desktops running Windows Server 2008 R2 or Windows 8.1 in version 5.3, View Persona Management didn’t support them until Horizon View 6.0 was released.

    Effective user profile management is one of the major challenges of any virtual end-user computing infrastructure. Profiles contain the user’s personalization and application settings, and these need to follow a user wherever they log in, especially in non-persistent VDI and server-based computing environments, where users may not end up on the same machine after every login.

    The Immidio acquisition will allow VMware to address the shortcomings of View Persona Management while providing cross-platform support for user and application settings. It also gives VMware an enterprise app store that it can tie in with App Volumes to allow users to provision their own access to applications in VDI and RDSH environments.”

    Published by:
  • The Virtual World

    Citrix Acquires Sanbolic

    Citrix adds storage to their portfolio.

    Citrix snaps up Sanbolic to ease desktop virtualization

    GigaOm – By: Barb Darrow – “Citrix, which has led the charge for desktop virtualization, just acquired Sanbolic to help bleed out the complexity and cost that have kept virtual desktop infrastructure (VDI) from broader adoption. Terms were not announced.

    Desktop virtualization separates what runs on a computer desktop from the physical computer itself so management, updates and patches can be more easily accomplished by a central administrator.

    Sanbolic, of Waltham, Mass., specializes in software-defined storage that works with a wide array of existing storage hardware. It was already a close Citrix partner; Citrix said 200 of its existing XenDesktop and XenApp customers already use Sanbolic in house to attain high availability and to manage infrastructure across regions.

    With Sanbolic in-house, Citrix can develop pre-packaged and pre-tested solutions to ‘help drive down the cost and complexity of VDI and application delivery deployments in a linear and predictable manner,’ Sanbolic CEO Momchil Michailov said via email. Sanbolic, he added, enables customers to keep using existing storage arrays and infrastructure whether on-premise or in the cloud, including appliances for Amazon Web Services (AWS), IBM Softlayer, Rackspace and Microsoft Azure.

    Michailov, who becomes Citrix’s VP of storage technologies, and Sanbolic’s other 30 employees will move to Citrix, according to a spokeswoman.

    Citrix has been the standard bearer for desktop virtualization but has seen increased competition from VMware, the leader in server virtualization, which has juiced its efforts in this area over the past few years. But, both virtualization vendors are seeing increased competition from other software companies, including platform providers which are doing more of their own virtualization work.”

    Published by:
  • Product News

    Fedora 21 Looks Cool for Cloud Users

    Cloud developers will love this new Fedora!

    6 new things Fedora 21 brings to the open source cloud

    OpenSource.com – By: Jason Baker – “When Fedora 21 finally hit release last month, I was excited and ready to go. By the end of the day, I had every desktop machine I own up and running on the new version, and I was enjoying playing with the latest versions of some of my favorite open source software packaged inside. But what next?

    The desktop edition of Fedora 21 was just one of three “flavors” of Fedora. What do the other two hold, and what do they mean for Fedora outside of the workstation?

    A flavor just for cloud

    So what is this flavor thing? For the newest release, the Fedora Project split the distribution into three different focuses: A version for desktop users, focused on workstation usage; a server edition, focused on traditional infrastructure needs; and a cloud image, for those who want to use Fedora in a virtualized environment. The cloud flavor packs only the essentials and is packaged to be deployed on your favorite cloud of choice.

    A smaller image size

    When you’re talking about paying for space for hundreds of virtual machines, or even just waiting for a configured image to upload from your local machine to your cloud environment, size matters. The Fedora maintainers made strong progress bringing down the base size of the Fedora cloud image for the new release. Cloud images now clock in at a 10% smaller size than the previous release, with a qcow2 formatted version under 200MB.

    An atomic image

    Atomic Cloud is now available in Fedora 21. Atomic makes Fedora work better with Linux container projects, like Docker, by applying upgrades as a single group that can be easily rolled back if something goes wrong. This, combined with tools for easier management and orchestration of container-based applications, makes Atomic a great host for containerized applications. For more on Atomic and what it means for both Fedora and the cloud, visit the Project Atomic homepage.

    New workstation tools for cloud developers

    Some of what was exciting for cloud developers wasn’t in the cloud flavor at all, but in the workstation. Fedora 21 comes with a new tool called DevAssistant, which makes getting started with creating new development projects easier than ever. DevAssistant pulls together all of the key parts needed to deploy an application, sets up your directories, downloads any needed dependencies, and puts everything together for you in one easy package. If you’re writing software for the cloud, you’ll want to check it out.

    You in the cockpit

    Another great tool shipping in Fedora is called Cockpit. Cockpit is a management console that makes it easy for you to manage multiple Linux servers via a web browser. It’s a great tool for beginner system administrators to perform simple tasks like administering storage and starting and stopping services. While not as versatile as some other solutions, Cockpit is easy to learn and easy to use.

    A new OpenStack

    Finally, Fedora 21 is set up for a new version of OpenStack. By default, Fedora 21 is designed to work with OpenStack Icehouse, released last year, but you can also try it out with the latest Juno release via RDO.

    You also might be interested in checking out the official release notes for additional information about some of the features found in Fedora 21, and in particular what they mean for those operating or doing development in a cloud environment.”

    Published by:
  • VirtZine - Video Netcasts

    VirtZine #48 – Video – “Alternative Virtualization Platforms!”


    Dr. Bill discusses alternative hypervisors, and specialized Linux distros for virtualization, including a demo of Proxmox VE 3.3. Also, CoreOS, a slimmed-down Linux distro for creation of many OS instances. oVirt manages VMs with an easy web interface.

    (Click on the buttons below to Stream the Netcast in your “format of choice”)
    Streaming M4V Video
     Download M4V
    Streaming WebM Video
     Download WebM
    Streaming MP3 Audio
     Download MP3
    (“Right-Click” on the “Download” link under the format button
    and choose “Save link as…” to save the file locally on your PC)

    Subscribe to Our Video Feed

    Subscribe to Our Audio Feed

    Also available on YouTube:

    Published by:
  • VirtZine - Audio Netcasts

    VirtZine #48 – Audio – “Alternative Virtualization Platforms!”

    Dr. Bill discusses alternative hypervisors, and specialized Linux distros for virtualization, including a demo of Proxmox VE 3.3. Also, CoreOS, a slimmed-down Linux distro for creation of many OS instances. oVirt manages VMs with an easy web interface.

    (Click on the buttons below to Stream the Netcast in your “format of choice”)
    Streaming M4V Video
     Download M4V
    Streaming WebM Video
     Download WebM
    Streaming MP3 Audio
     Download MP3
    (“Right-Click” on the “Download” link under the format button
    and choose “Save link as…” to save the file locally on your PC)

    Subscribe to Our Video Feed

    Subscribe to Our Audio Feed

    Also available on YouTube:

    Published by:
  • The Virtual World

    oVirt – Open Your Virtual Datacenter!

    oVirt is a Linux distro that you can install on bare-metal hardware to provide a platform for virtualizing your systems!

    oVirt Distro

    “oVirt manages virtual machines, storage and virtualized networks. oVirt is a virtualization platform with an easy-to-use web interface. oVirt is powered by the Open Source you know – KVM on Linux.

    Choice of stand-alone Hypervisor or install-on-top of your existing Linux installation

    • High availability
    • Live migration
    • Load balancing
    • Web-based management interface
    • Self-hosted engine
    • iSCSI, FC, NFS, and local storage
    • Enhanced security: SELinux and Mandatory Access Control for VMs and hypervisor
    • Scalability: up to 64 vCPU and 2TB vRAM per guest
    • Memory overcommit support (Kernel Samepage Merging)
    • Developer SDK for ovirt-engine, written in Python”
    Published by:
  • The Virtual World

    CoreOS: A Distro for Virtualization!

    CoreOS is a thin virtualization specific Linux Distro that you should look into!

    CoreOS: A lean, mean virtualization machine

    NetworkWorld – By: Tom Henderson – “CoreOS is a slimmed-down Linux distribution designed for easy creation of lots of OS instances. We like the concept.

    CoreOS uses Docker to deploy applications in virtual containers; it also features a management communications bus, and group instance management.

    Rackspace, Amazon Web Services (AWS), Google Compute Engine (GCE), and Brightbox are early cloud compute providers compatible with CoreOS, with specific deployment support for it. We tried Rackspace and AWS, and also some local ‘fleet’ deployments.

    CoreOS is skinny. We questioned its claims of less overall memory used, and wondered if it was stripped to the point of uselessness. We found that, yes, it saves a critical amount of memory (for some), and no, it’s not useless: it’s tremendously Spartan, but pretty useful in certain situations.

    CoreOS has many similarities with Ubuntu. They’re both free and GPLv3 licensed. Ubuntu 14.04 and CoreOS share the same kernel. Both are easily customizable, and no doubt you can make your own version. But CoreOS shuns about half of the processes that Ubuntu attaches by default.

    If you’re a critic of the bloatware inside many operating systems instances, CoreOS might be for you. In testing, we found it highly efficient. It’s all Linux kernel-all-the-time, and if your organization is OS-savvy, you might like what you see in terms of performance and scale.

    Security could be an issue

    CoreOS uses curl for communications and SSL, and we recommend adding a standard, best-practices external SSL certificate authority for instance orchestration. Otherwise, you’ll be madly generating and managing SSL relationships among a dynamic number of instances. CoreOS sends updates using signed certificates, too.

    With this added SSL security control, your ability to scale efficiently is but a few scripts away. Here’s the place where your investment in SSL certs and chains of authority back to a root cert is a good idea. It adds to the overhead, of course, to use SSL for what might otherwise be considered “trivial” instances. All the bits needed for rapid secure communications with SSL are there, and documented, and wagged in your face. Do it.
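    The bootstrap side of this is typically handled with a cloud-config file passed to each instance at launch. The sketch below is hypothetical: the SSH key and discovery token are placeholders you would replace with your own, and the HTTPS discovery URL is one common way to let new instances find and authenticate to the cluster.

```yaml
#cloud-config
# Hypothetical minimal CoreOS cloud-config: inject an admin SSH key and
# point etcd at an HTTPS discovery endpoint. Key and token are placeholders.
ssh_authorized_keys:
  - "ssh-rsa AAAA... admin@example.com"
coreos:
  etcd:
    discovery: "https://discovery.etcd.io/<your-cluster-token>"
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```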

    What You Get

    CoreOS is a stripped-down Linux distro designed for rapidly deployed Spartan instance use. The concept is to have a distro that’s bereft of the usual system memory and daemon leeches endemic to popular distributions and ‘ecosystems.’ This is especially true as popular distros get older and more ‘feature-packed.’

    More available memory usually means more apps that can be run, and CoreOS is built to run them in containers. Along with its own communications bus—primitive as it is— you get to run as many instances (and apps) as possible with the least amount of overhead and management drama.

    For those leaning towards containerized instances, it launches them in a controlled procedure, then monitors them for health. It’s not tough to manage the life cycle of a CoreOS instance. RESTful commands do much of the heavy lifting.

    Inside CoreOS is a Linux kernel, LXC capacity, and the etcd service discovery/control daemon, along with Docker, the application containerization system, and systemd — the start/stop process controller that has replaced the various init daemons in many distros.

    Fleet provides multiple-instance management — a key benefit for those regularly starting pools, even oceans, of app/OS instances.

    Like Ubuntu and Red Hat, it uses the systemd daemon as an interface control mechanism, and it’s up to date with the same kernel used by Ubuntu 14.04 and Red Hat Enterprise Linux 7. Many of your updated systemd-based scripts will work without changes.

    The fleetd daemon is controlled by the user-space command fleetctl, which instantiates processes; the etcd daemon provides service discovery (like a communications bus), with etcdctl for monitoring — all low-level and CLI-style.
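    As a concrete sketch of how fleetctl is fed work: a fleet unit file is just a systemd unit with an optional [X-Fleet] section of scheduling hints. The service name and container below are hypothetical.

```ini
# hello.service -- a hypothetical fleet unit; schedule it on the cluster with:
#   fleetctl start hello.service
[Unit]
Description=Hello World container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container, then run a trivial loop in busybox
ExecStartPre=-/usr/bin/docker rm -f hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello; sleep 1; done"
ExecStop=/usr/bin/docker stop hello

[X-Fleet]
# Scheduling hint: never place two matching units on the same machine
Conflicts=hello*.service
```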

    etcd accepts REST commands using simple verbs. It’s not Puppet, Chef, or another service-bus controller, but a lean, tight communications methodology. It works, and it’s understandable by Unix/Linux coders and admins.
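    Those simple verbs are easy to illustrate. The sketch below only builds the request triples for etcd's v2 key-space API (the same calls etcdctl wraps); the key names are made up, and in the CoreOS era the classic client endpoint was http://127.0.0.1:4001.

```python
# Build (method, path, body) for etcd's v2 key-space REST API:
# a PUT with a value sets a key, a bare GET reads it back.

def etcd_call(key, value=None):
    """Return the HTTP verb, path, and form body for a simple etcd v2 call."""
    path = "/v2/keys/" + key.lstrip("/")
    if value is None:
        return ("GET", path, None)
    return ("PUT", path, "value=" + value)

print(etcd_call("services/web", "10.0.0.5:8080"))
# ('PUT', '/v2/keys/services/web', 'value=10.0.0.5:8080')
print(etcd_call("services/web"))
# ('GET', '/v2/keys/services/web', None)
```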

    A downside is that container and instance sprawl become amazingly easy. You can fire up instances, huge numbers of them, at will. There aren’t any clever system-wide monitoring mechanisms that will warn you that your accounting department will simply explode when they see your sprawl bill on AWS or GCE. Teardown isn’t enforced — but it’s not tough to do.

    We did a test to determine the memory differences between Ubuntu 14.04 and CoreOS, configuring each OS as a 1GB memory machine on the same platform. They reported the same kernel (Linux 3.12) and were used with default settings.

    We found roughly 28% to 44% more memory available for apps with CoreOS — before “swap” started churning the CPU/memory balances within the state machine.

    This means an uptake in speed of execution for apps until they need I/O or other services, less memory churn and perhaps greater cache hits. Actual state machine performance improvements are dependent on how the app uses the host but we feel that the efficiencies of memory use and overall reduction in bloat (and security attack surface potential) are worth the drill.

    These results were typical across AWS, GCE, and our own hosted platform that ran on a 60-core HP DL-580 Gen8. The HP server used could probably handle several hundred instances if we expanded the server’s memory to its 6TB max—not counting Docker instances.

    We could easily bring up a fleet of CoreOS instances, control it, feed it containers with unique IDs and IPs, make the containers do work (we did not exercise the containers), then shut them down, mostly with shell scripts rather than direct commands.

    The suggested scripts serve as templates — and more are appearing — that allowed us to easily replicate functionality and so manage sprawl. If you’re looking for instrumentation, get some glitzy UI elsewhere, and the same goes for high-vocabulary communications infrastructure.

    Once you start adding daemons and widgetry, you’re back to Ubuntu or RedHat.

    Be warned: we could also make unrecoverable mistakes with equally high speed, and there aren’t any real safeguards except syntax checking and the broad use of SSL keys.

    You can make hundreds of OS instances, each with perhaps 100 Docker container apps, all hopefully moving in a harmonious way. Crypt is used, which means you need your keys ready to submit to become su/root. Otherwise, you’re on your own.


    This is a skinny instance, bereft of frills and daemons-with-no-use. We found more memory and less potential for speed-slowing memory churn. Fewer widgets and daemonry also means a smaller attack surface. The lack of bloat satisfies our engineer’s instinct to match resources with needs — no more and no less — with glee.

    CoreOS largely means self-support, your own instrumentation, plentiful script building, and liberation from the pomposity and fatuousness of highly featured, general-purpose compute engines.

    Like to rough it? Want more real memory for actual instances? Don’t mind a bit of heavy lifting? CoreOS might be for you.”

    Published by:
  • Product News

    Proxmox VE 3.3

    Cross-posted from Dr. Bill.TV

    Proxmox VE 3.3

    “Proxmox VE is a complete open source virtualization management solution for servers. It is based on KVM virtualization and container-based virtualization and manages virtual machines, storage, virtualized networks, and HA Clustering.

    The enterprise-class features and the intuitive web interface are designed to help you increase the use of your existing resources and reduce hardware cost and administrating time – in business as well as home use. You can easily virtualize even the most demanding Linux and Windows application workloads.

    Powerful and Lightweight

    Proxmox VE is open source software, optimized for performance and usability. For maximum flexibility, we implemented two virtualization technologies – Kernel-based Virtual Machine (KVM) and container-virtualization.

    Open Source

    VE uses a Linux kernel and is based on the Debian GNU/Linux Distribution. The source code of Proxmox VE is released under the GNU Affero General Public License, version 3 (GNU AGPL, v3). This means that you are free to inspect the source code at any time or contribute to the project yourself.

    Using open source software guarantees full access to all functionalities – as well as high security and reliability. Everybody is encouraged to contribute while Proxmox ensures the product always meets professional quality criteria.

    Kernel-based Virtual Machine (KVM)

    Open source hypervisor KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It is a kernel module added to mainline Linux.

    With KVM you can run multiple virtual machines from unmodified Linux or Windows images. It enables users to be agile by providing robust flexibility and scalability that fit their specific demands. Proxmox Virtual Environment has used KVM virtualization since the beginning of the project in 2008, starting with version 0.9beta2.

    Container-based virtualization

    OpenVZ is container-based virtualization for Linux. OpenVZ creates multiple secure, isolated Linux containers (otherwise known as VEs or VPSs) on a single physical server, enabling better server utilization and ensuring that applications do not conflict. Proxmox VE has used OpenVZ virtualization since the beginning of the project in 2008.”

    Published by: