Nov 14, 2014
 

Dr. Bill discusses alternative hypervisors, and specialized Linux distros for virtualization, including a demo of Proxmox VE 3.3. Also, CoreOS, a slimmed-down Linux distro for creation of many OS instances. oVirt manages VMs with an easy web interface.


(Click on the buttons below to Stream the Netcast in your “format of choice”)
Streaming M4V Video
 Download M4V
Streaming WebM Video
 Download WebM
Streaming MP3 Audio
 Download MP3
(“Right-Click” on the “Download” link under the format button
and choose “Save link as…” to save the file locally on your PC)

Subscribe to Our Video Feed
http://www.virtzine.com/category/netcast/feed

Subscribe to Our Audio Feed
http://www.virtzine.com/category/audio/feed


Also available on YouTube:
http://youtu.be/-S2x1rULX38



Nov 06, 2014
 

oVirt is a Linux distro that you can install on bare-metal hardware to provide a platform for virtualizing your systems!

oVirt Distro

“oVirt manages virtual machines, storage and virtualized networks. oVirt is a virtualization platform with an easy-to-use web interface. oVirt is powered by the Open Source you know – KVM on Linux.

Choice of stand-alone Hypervisor or install-on-top of your existing Linux installation

  • High availability
  • Live migration
  • Load balancing
  • Web-based management interface
  • Self-hosted engine
  • iSCSI, FC, NFS, and local storage
  • Enhanced security: SELinux and Mandatory Access Control for VMs and hypervisor
  • Scalability: up to 64 vCPU and 2TB vRAM per guest
  • Memory overcommit support (Kernel Samepage Merging)
  • Developer SDK for ovirt-engine, written in Python”
Nov 05, 2014
 

CoreOS is a thin, virtualization-specific Linux distro that you should look into!

CoreOS: A lean, mean virtualization machine

NetworkWorld – By: Tom Henderson – “CoreOS is a slimmed-down Linux distribution designed for easy creation of lots of OS instances. We like the concept.

CoreOS uses Docker to deploy applications in virtual containers; it also features a management communications bus, and group instance management.

Rackspace, Amazon Web Services (AWS), Google Compute Engine (GCE), and Brightbox are early cloud compute providers compatible with CoreOS, each with specific deployment support for it. We tried Rackspace and AWS, and also some local ‘fleet’ deployments.

CoreOS is skinny. We questioned its claims of lower overall memory use, and wondered if it was stripped to the point of uselessness. We found that, yes, it saves a critical amount of memory (for some), and no, it isn’t useless; it’s tremendously Spartan, but quite useful in certain situations.

CoreOS has many similarities with Ubuntu. They’re both free and GPLv3 licensed. Ubuntu 14.04 and CoreOS share the same kernel. Both are easily customizable, and no doubt you can make your own version. But CoreOS shuns about half of the processes that Ubuntu attaches by default.

If you’re a critic of the bloatware inside many operating systems instances, CoreOS might be for you. In testing, we found it highly efficient. It’s all Linux kernel-all-the-time, and if your organization is OS-savvy, you might like what you see in terms of performance and scale.

Security could be an issue

CoreOS uses curl for communications and SSL, and we recommend adding a standard, best-practices external SSL certificate authority for instance orchestration. Otherwise, you’ll be madly generating and managing SSL relationships among a dynamic number of instances. CoreOS sends updates using signed certificates, too.

With this added SSL security control, your ability to scale efficiently is but a few scripts away. Here’s the place where your investment in SSL certs and chains of authority back to a root cert is a good idea. It adds to the overhead, of course, to use SSL for what might otherwise be considered “trivial” instances. All the bits needed for rapid secure communications with SSL are there, and documented, and wagged in your face. Do it.
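Bootstrapping the kind of best-practices certificate authority recommended above takes only stock openssl. This is a minimal sketch; every file name and CN below is illustrative rather than from the article:

```shell
# Create a self-signed CA for instance-to-instance SSL (names are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca-key.pem -out ca-cert.pem -subj "/CN=coreos-fleet-ca"

# Generate a key and signing request for one CoreOS node...
openssl req -newkey rsa:2048 -nodes \
    -keyout node-key.pem -out node.csr -subj "/CN=coreos-node-1"

# ...and sign it with the CA, yielding the node's certificate.
openssl x509 -req -in node.csr -CA ca-cert.pem -CAkey ca-key.pem \
    -CAcreateserial -out node-cert.pem -days 365
```

Each new instance then needs only its own CA-signed key pair plus the CA certificate, instead of a full mesh of pairwise SSL relationships.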

What You Get

CoreOS is a stripped-down Linux distro designed for rapidly deployed, Spartan instance use. The concept is to have a distro that’s bereft of the usual system-memory and daemon leeches endemic to popular distributions and ‘ecosystems.’ This is especially true as popular distros get older and more “feature packed”.

More available memory usually means more apps that can be run, and CoreOS is built to run them in containers. Along with its own communications bus—primitive as it is— you get to run as many instances (and apps) as possible with the least amount of overhead and management drama.

For those leaning toward containerized instances, CoreOS launches them in a controlled procedure, then monitors them for health. It’s not tough to manage the life cycle of a CoreOS instance. RESTful commands do much of the heavy lifting.

Inside CoreOS is a Linux kernel, LXC capacity, and the etcd service-discovery/control daemon, along with Docker, the application containerization system, and systemd, the start/stop process controller that has replaced various init daemons in many distros.

Multiple-instance management comes via fleet—a key benefit for those regularly starting pools, even oceans, of app/OS instances.

Like Ubuntu and Red Hat, it uses the systemd daemon as an interface control mechanism, and it’s up to date with the same kernel used by Ubuntu 14.04 and Red Hat EL7. Many of your updated systemd-based scripts will work without changes.

The fleetd daemon is controlled by the user-space command fleetctl and instantiates processes; the etcd daemon provides service discovery (like a communications bus), with etcdctl for monitoring—all at a low level and CLI-style.

etcd accepts REST commands using simple verbs. Its RESTful API is not Puppet, Chef, or another service-bus controller, but a lean, tight communications method. It works and is understandable by Unix/Linux coders and admins.
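The fleetctl/etcdctl workflow described above is driven by systemd-style unit files. Here is a sketch of a Docker-backed fleet unit template; the unit name, nginx image, and port are illustrative, not from the article:

```ini
# web@.service: a fleet unit template; %i is the instance number.
[Unit]
Description=Web container %i
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container of the same name, then run a fresh one.
ExecStartPre=-/usr/bin/docker kill web-%i
ExecStartPre=-/usr/bin/docker rm web-%i
ExecStart=/usr/bin/docker run --name web-%i -p 80:80 nginx
ExecStop=/usr/bin/docker stop web-%i

[X-Fleet]
# Never schedule two instances of this template on the same machine.
Conflicts=web@*.service
```

A template like this would be loaded with fleetctl submit web@.service, instantiated with fleetctl start web@1.service, and inspected with fleetctl list-units; etcdctl get/set gives the matching low-level view into etcd’s key space.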

A downside is that container and instance sprawl becomes amazingly easy. You can fire up instances, huge numbers of them, at will. There aren’t any clever system-wide monitoring mechanisms to warn you that your accounting department will simply explode when they see your sprawl bill on AWS or GCE. Teardown isn’t enforced—but it’s not tough to do.

We did a test to determine the memory differences between Ubuntu 14.04 and CoreOS, configuring each OS as 1GB memory machines on the same platform. They reported the same kernel (Linux 3.12), and were used with default settings.

We found roughly 28% to 44% more memory available for apps with CoreOS — before “swap” started churning the CPU/memory balances within the state machine.
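The comparison above boils down to simple arithmetic; the free-memory figures in this sketch are illustrative stand-ins, not the article's raw measurements:

```python
def extra_memory_pct(baseline_free_mb: float, slim_free_mb: float) -> float:
    """Percent more free memory the slimmer distro leaves for apps."""
    return (slim_free_mb - baseline_free_mb) / baseline_free_mb * 100

# Hypothetical post-boot free memory on identical 1GB instances:
ubuntu_free_mb = 520   # illustrative, not measured
coreos_free_mb = 700   # illustrative, not measured

print(f"{extra_memory_pct(ubuntu_free_mb, coreos_free_mb):.0f}% more memory for apps")
```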

This means an uptick in execution speed for apps until they need I/O or other services, less memory churn, and perhaps more cache hits. Actual state-machine performance improvements depend on how the app uses the host, but we feel that the efficiencies of memory use and the overall reduction in bloat (and potential attack surface) are worth the drill.

These results were typical across AWS, GCE, and our own hosted platform that ran on a 60-core HP DL-580 Gen8. The HP server used could probably handle several hundred instances if we expanded the server’s memory to its 6TB max—not counting Docker instances.

We could easily bring up a fleet of CoreOS instances, control it, feed it containers with unique IDs and IPs, make the containers do work (we did not exercise the containers), then shut them down, mostly with shell scripts rather than direct commands.

The suggested scripts serve as templates—and more are appearing—that allowed us to easily replicate functionality and so manage sprawl. If you’re looking for instrumentation, get some glitzy UI elsewhere, and the same goes for high-vocabulary communications infrastructure.

Once you start adding daemons and widgetry, you’re back to Ubuntu or RedHat.

And we warn that we could also make unrecoverable mistakes with equally high speed, and remind you that there aren’t any real safeguards except syntax checking and the broad use of SSL keys.

You can make hundreds of OS instances, each with perhaps 100 Docker container apps, all hopefully moving in a harmonious way. Crypt is used, which means you need your keys ready to submit to become su/root. Otherwise, you’re on your own.

Summary

This is a skinny instance, bereft of frills and daemons-with-no-use. We found more memory and less potential for speed-slowing memory churn. Fewer widgets and daemonry also means a smaller attack surface. No bloat gives our engineer’s instinctual desire to match resources with needs—no more and no less—more glee.

CoreOS largely means self-support, your own instrumentation, plentiful script building, and liberation from the pomposity and fatuousness of highly featured, general purpose, compute-engines.

Like to rough it? Want more real memory for actual instances? Don’t mind a bit of heavy lifting? CoreOS might be for you.”

Oct 20, 2014
 

Cross-posted from Dr. Bill.TV

Proxmox VE 3.3

“Proxmox VE is a complete open source virtualization management solution for servers. It is based on KVM virtualization and container-based virtualization and manages virtual machines, storage, virtualized networks, and HA Clustering.

The enterprise-class features and the intuitive web interface are designed to help you increase the use of your existing resources and reduce hardware cost and administrating time – in business as well as home use. You can easily virtualize even the most demanding Linux and Windows application workloads.

Powerful and Lightweight

Proxmox VE is open source software, optimized for performance and usability. For maximum flexibility, we implemented two virtualization technologies – Kernel-based Virtual Machine (KVM) and container-virtualization.

Open Source

Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux distribution. The source code of Proxmox VE is released under the GNU Affero General Public License, version 3 (GNU AGPL, v3). This means that you are free to inspect the source code at any time or contribute to the project yourself.

Using open source software guarantees full access to all functionalities – as well as high security and reliability. Everybody is encouraged to contribute while Proxmox ensures the product always meets professional quality criteria.

Kernel-based Virtual Machine (KVM)

Open source hypervisor KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It is a kernel module added to mainline Linux.

With KVM you can run multiple virtual machines using unmodified Linux or Windows images. It enables users to be agile by providing robust flexibility and scalability that fit their specific demands. Proxmox Virtual Environment has used KVM virtualization since the project began in 2008, with version 0.9beta2.
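The Intel VT/AMD-V requirement mentioned above can be checked from any Linux shell; this read-only probe is a generic sketch, not a Proxmox-specific tool:

```shell
# KVM needs hardware virtualization flags: "vmx" (Intel VT) or "svm" (AMD-V).
# Reading /proc/cpuinfo changes nothing on the host.
if grep -E -q 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    virt=yes
else
    virt=no
fi
echo "hardware virtualization extensions present: ${virt}"
```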

Container-based virtualization

OpenVZ is container-based virtualization for Linux. OpenVZ creates multiple secure, isolated Linux containers (otherwise known as VEs or VPSs) on a single physical server, enabling better server utilization and ensuring that applications do not conflict. Proxmox VE has used OpenVZ virtualization since the beginning of the project in 2008.”

Oct 09, 2014
 

VMware announces its own OpenStack distribution, VMware embraces Docker container virtualization, Red Hat’s CEO announces a shift from client-server to cloud computing!


(Click on the buttons below to Stream the Netcast in your “format of choice”)
Streaming M4V Video
 Download M4V
Streaming WebM Video
 Download WebM
Streaming MP3 Audio
 Download MP3
(“Right-Click” on the “Download” link under the format button
and choose “Save link as…” to save the file locally on your PC)

Subscribe to Our Video Feed
http://www.virtzine.com/category/netcast/feed

Subscribe to Our Audio Feed
http://www.virtzine.com/category/audio/feed


Also available on YouTube:
http://youtu.be/WcRJdNnv9VE



Oct 04, 2014
 

Re-posted from Dr. Bill.TV: This is VERY interesting; they are betting the farm on Cloud Computing!

Red Hat CEO announces a shift from client-server to cloud computing

ZDNet – By: Steven J. Vaughan-Nichols – “Red Hat is in the midst of changing its image from a top Linux company to the future king of cloud computing. CEO Jim Whitehurst told me in 2011 that the Platform-as-a-Service (PaaS) cloud would be Red Hat’s future. Today in a blog posting, Whitehurst underlined this shift from Linux to OpenStack.

Whitehurst wrote:

Right now, we’re in the midst of a major shift from client-server to cloud-mobile. It’s a once-every-twenty-years kind of change. As history has shown us, in the early days of those changes, winners emerge that set the standards for that era – think Wintel in the client-server arena. We’re staring at a huge opportunity – the chance to become the leader in enterprise cloud, much like we are the leader in enterprise open source. The competition is fierce, and companies will have several choices for their cloud needs. But the prize is the chance to establish open source as the default choice of this next era, and to position Red Hat as the provider of choice for enterprises’ entire cloud infrastructure.

In case you haven’t gotten the point yet, Whitehurst states, ‘We want to be the undisputed leader in enterprise cloud.’ In Red Hat’s future, Linux will be the means to a cloud, not an end unto itself.

He’s not the only Linux leader who sees it that way. Mark Shuttleworth, Canonical and Ubuntu’s founder, agrees. If you read Shuttleworth’s blog, you’ll see he focuses far more on Ubuntu’s inroads into the cloud than, say, Ubuntu on the smartphone or tablet.

They both have excellent reasons for seeing it this way. With the exception of Microsoft Azure, all other cloud platforms rely on Linux and open source software. Amazon’s cloud services, for example, run on top of Red Hat Enterprise Linux.

So neither Linux leader is walking too far away from Linux. Shuttleworth, for example, is quite proud that Ubuntu is the leading Linux OS on OpenStack. Whitehurst was quick to note that ‘Red Hat Enterprise Linux is easily the best operating platform in the world, counting more than 90 percent of the Fortune 500 as customers.’

Linux leaders see a future where IT is based on Linux and the open source cloud. And if Whitehurst has his way, it will be a Red Hat-dominated future.”

Sep 16, 2014
 

Well, we were wondering what Docker meant for VMware; now, it seems, VMware is OK with Docker!

VMware Embraces Docker Container Virtualization

eWeek – By: Sean Michael Kerner – “The open-source Docker container virtualization technology has a new ally today. VMware announced a new partnership with Docker, Google and Pivotal to enable container technology in VMware environments. The news was formally announced at the VMworld conference in a keynote by VMware CEO Pat Gelsinger and discussed in a follow-up press conference.

Among the key value propositions of Docker is that, unlike a virtual machine (VM) hypervisor like VMware’s ESX, each application does not require its own underlying operating system. VMware and Docker, however, need not necessarily be competitive technologies, and each can be used to complement the other. ‘The best way to deliver containers is through a virtual machine,’ Gelsinger said during his keynote.

The basic idea is that running Docker containers on VMware VMs offers enterprises the best of both worlds. Developers can embrace the rapidly moving Docker world with its benefits, while still being able to leverage existing VMware workflows.

One of the biggest backers of Docker containers is Google, which is also part of the new VMware/Docker partnership effort. Craig McLuckie, product manager at Google, said during a press conference that the new partnership will enable a way to bring the container style of application management to the world. Google sees VMs and Docker as being very complementary, he added.

‘The virtual machine offering provides a very strong way to provision and manage basic infrastructure,’ McLuckie said, ‘while containers exist in the application space and provide a very nice way to package and deploy applications.’

McLuckie said that Google is already using VMs and containers together for a number of reasons, noting that VMs provide a very strong isolation boundary for security.

‘I’m not saying that containers don’t offer some benefits from a security perspective, but we really like that strong VM hypervisor boundary,’ he said. ‘VMware spent 15 years making it extremely robust.’

Ben Golub, CEO of Docker Inc., the lead commercial sponsor behind the open-source Docker project, said during the press conference that the partnership with VMware is fundamentally about choice.

‘No matter who the customer is, we want them to be able to deploy Docker in the environment that makes sense for them,’ he said.

Golub stressed that Docker has been enterprise-ready since at least its 1.0 release, which debuted on June 9. He added that people deploy Docker on bare metal, on the public cloud, in every flavor of Linux and now on every virtualization platform as well.

One of the promises of Docker is the improved efficiency over VMs, by not needing to run separate operating systems for each application. It’s a promise that still holds true when running Docker on top of VMware. Golub noted that when users start adopting containers, they don’t have the overhead of a guest operating system.

‘So instead of having a thousand applications running on a thousand different VMs, you can have a thousand different applications running in containers spread across a limited number of VMs,’ Golub said. ‘As a result, you can get the best of both worlds.’”
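Golub’s arithmetic works because each container carries only the application and its userland, while the kernel comes from the single guest OS underneath. A minimal Dockerfile sketch of that packaging model (the base image, package, and app.py path are illustrative, not from the article):

```dockerfile
# The container ships only the app and its dependencies;
# no separate kernel or guest OS per application.
FROM debian:wheezy
RUN apt-get update && apt-get install -y --no-install-recommends python && \
    rm -rf /var/lib/apt/lists/*
COPY app.py /opt/app/app.py
CMD ["python", "/opt/app/app.py"]
```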

Sep 02, 2014
 

VMware has announced their own branded version of OpenStack, called VMware Integrated OpenStack (VIO).

VMware Announces Its Own OpenStack Distribution

eWeek – By: Sean Michael Kerner – “VMware has been one of the top contributors to the open-source OpenStack cloud platform over the last several years, and now the company is taking the next logical step by announcing its own OpenStack distribution. The new VMware Integrated OpenStack (VIO) offering, formally announced at the VMworld conference Aug. 25, provides a new option in the increasingly competitive space for OpenStack products and services.

Dan Wendlandt, director of product management for OpenStack at VMware, is helping to lead the new VIO effort. Wendlandt is a well-known name in OpenStack circles, having helped get the OpenStack Neutron networking project started (originally under the name Quantum) back in 2011. In addition to its leadership in OpenStack networking, VMware has made multiple contributions to other areas of OpenStack, including compute and storage, over the years.

VMware has been supportive of OpenStack for years at the highest levels of the company. In 2012, then-VMware CTO Steve Herrod publicly expressed his support for VMware technologies running OpenStack. And VMware’s current CEO, Pat Gelsinger, commented in August of 2013 that he saw OpenStack as being highly complementary to VMware and that his company would be building out additional support.

Wendlandt told eWEEK that in the past, VMware had only talked about its community contributions to OpenStack but is now taking its involvement to the next level. For VMware, OpenStack is all about enabling choice for its users.

‘The goal with VIO is to make OpenStack an extremely well-integrated aspect of the VMware product portfolio,’ Wendlandt said.

What VIO is in a nutshell is the open-source OpenStack code with VMware drivers, as well as an OpenStack-specific management tool and reference architecture. Wendlandt explained that the management tool handles installation and upgrades. Additionally, VMware’s management suite, including vCenter Operations Manager and Log Insight, will gain OpenStack awareness, providing new visibility for users into OpenStack environments.

‘The key thing here to note is that OpenStack itself isn’t solving all enterprise use cases and requires additional management wrapped around it,’ Wendlandt said. ‘That’s why we’re delivering an entire package, providing customers with a single contact for support.’

VMware integration into OpenStack itself is not a new thing. Multiple existing OpenStack distributions including Piston, Suse, Ubuntu and Mirantis all support VMware technologies running on OpenStack. All existing vendor partnerships for OpenStack support on VMware will continue, Wendlandt said.

‘What the other OpenStack distribution support represents is the ability to use OpenStack as a framework to combine loosely coupled components,’ he explained. ‘VIO is about giving our customers an option for a much more tightly integrated product, a single contact for support and deep management integration to simplify installation.’

Wendlandt stressed that VIO as well as the various OpenStack distributions that also support VMware are all based on the same upstream open-source code. VMware’s goal is not to provide different OpenStack bits, but rather to deliver a specific package focused on VMware technologies.

While VMware is announcing its VIO product today, it is not yet announcing packaging options or pricing.

‘What we’re announcing is the availability of a private beta, but broader pricing and packaging will not be announced until we’re ready for a General Availability [GA] release,’ Wendlandt said.”