Aug 27, 2014
 

I REALLY wish I had these capabilities NOW! Wow! This is going to be NICE!

vMotion Enhancements for vSphere 6 Announced

WahlNetwork – By: Chris Wahl – “During the Day 2 keynote at VMworld (August 26th, 2014), Raghu Raghuram, Executive Vice President, Software-Defined Data Center Division, went ahead and announced the upcoming improvements to vMotion that are bundled in vSphere 6. Here’s a look at the details around this announcement:

Upcoming new vMotion features:

  • vMotion across vCenter Servers (VCs)
  • vMotion across virtual switches: Virtual Standard Switch (VSS), Virtual Distributed Switches (VDSs)
  • vMotion using routed vMotion networks
  • Long-distance vMotion for use cases such as:
    • Permanent migrations (often for Datacenter expansions, acquisitions, consolidations)
    • Disaster avoidance
    • SRM and disaster avoidance testing
    • Multi-site capacity utilization
    • Follow-the-sun scenarios
    • Onboarding onto vSphere-based public clouds (including VMware vCloud Air)

Crossing Data Centers

As vSphere engineers, we’re usually confined to vMotion domains bounded by a vCenter Server construct (or, more specifically, by the data center in many cases due to network configurations). vSphere 6 will allow VMs to vMotion across Datacenter and VC boundaries using a new workflow. You’ll also be able to take advantage of a workflow that lets you hop from one network (source) to another network (destination), eliminating the need to have a single vSwitch construct spanning the two locations.

The configurations targeted are:

  • from VSS to VSS
  • from VSS to VDS
  • from VDS to VDS
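
For a sense of what the cross-vCenter workflow might look like from an automation standpoint, here is a rough pyVmomi sketch. It is speculative: vSphere 6 is only just announced, and the hostnames, credentials, and ServiceLocator details shown are placeholders for illustration, not a confirmed workflow.

```python
# Hedged sketch: cross-vCenter relocate via pyVmomi's RelocateVM_Task.
# All hostnames and credentials are placeholders.
from pyVim.connect import SmartConnect
from pyVmomi import vim

src_si = SmartConnect(host="vc-source.example.com",
                      user="administrator@vsphere.local", pwd="...")

# Find the VM to move (assumes a unique DNS name).
vm = src_si.content.searchIndex.FindByDnsName(None, "app01.example.com", True)

spec = vim.vm.RelocateSpec()
# The 'service' locator is how the API is expected to target a different
# vCenter; destination host/pool/datastore/network would also be set here.
spec.service = vim.ServiceLocator(
    instanceUuid="<destination-vcenter-instance-uuid>",
    url="https://vc-dest.example.com",
    credential=vim.ServiceLocatorNamePassword(
        username="administrator@vsphere.local", password="..."),
    sslThumbprint="<destination-ssl-thumbprint>",
)

task = vm.RelocateVM_Task(spec=spec)
```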

Routed vMotion and Increased RTT Tolerance

And while you can file an RFQ (request for qualification) to use Layer 3 for vMotion, most of us are limited to (or comfortable with) Layer 2 vMotion domains. Essentially, this means one large subnet and VLAN stretched between compute nodes for migrating workloads. An upcoming feature will allow VMs to vMotion using routed vMotion networks without the need for special qualification. In addition, another useful planned feature will revolve around the ability to vMotion or clone powered-off VMs over NFC networks.

And finally, the supported latency for vMotion is being increased by 10x. Enterprise Plus customers today can tolerate vMotion RTTs (round-trip times) of 10 ms or less. In the new release, vMotion can withstand 100 ms of RTT.”
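
Until the release ships, a quick way to sanity-check whether an inter-site link falls inside that 100 ms budget is to time a few TCP connects to the remote vMotion interface (vMotion listens on TCP port 8000). A minimal Python sketch, with a placeholder hostname:

```python
import socket
import statistics
import time

def measure_rtt_ms(host, port=8000, samples=10):
    """Median TCP-connect time to the remote vMotion interface, in ms.
    A rough stand-in for true RTT, but good enough for a sanity check."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=2):
            pass
        rtts.append((time.monotonic() - start) * 1000)
    return statistics.median(rtts)

rtt = measure_rtt_ms("esxi-remote.example.com")  # placeholder hostname
verdict = "within" if rtt <= 100 else "over"
print(f"median RTT {rtt:.1f} ms: {verdict} the 100 ms long-distance vMotion budget")
```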

Aug 21, 2014
 

This is a post from the VMware Blog about VMware’s acquisition of CloudVolumes.


VMware Acquires CloudVolumes

“Posted on August 20, 2014 by Louis Cheng
by Harry Labana, SVP and Chief Product Officer, CloudVolumes

The industry’s virtualization infrastructure leader acquired us today. At just over three years old, CloudVolumes has built technology that VMware recognized could disrupt status-quo application delivery.

It’s a proud day at CloudVolumes as we celebrate our acquisition by VMware and look forward to the prospect of continuing the growth of our product on a global scale with the power of the industry’s top player behind us.

I’d like to briefly tell you the CloudVolumes story and explain why our technology will change how customers think about application lifecycle management.

The CloudVolumes journey started in 2011 with some astute observations by our founders.

They realized that by virtualizing below the OS, hardware virtualization could address many problems of the pre-virtualization world, such as hardware lock-in, low server utilization, and poor data center agility. The pre-virtualization world was in effect a per-server management model. Easing the burdens of this world created a new industry.

In a post-virtualization world, many people are now using a per-VM management model: in other words, managing dedicated virtual machines for specific applications. With the explosion in growth of applications running on virtualized infrastructure, the pressure on IT management and administration has increased. The number of virtualized servers has dramatically increased in the last decade, generating demand for better management tools and automation. As the Mobile/Cloud era continues to evolve, there will be an even greater need to manage applications in new ways. IT must provide agile solutions that enable real-time service delivery for the move to private and hybrid clouds.

Application Management Containers

In this new world, problems emerge as the demand for real-time application delivery strains existing infrastructures. For example, how do you deliver thousands of applications to thousands of machines in mere seconds?

This can be accomplished by eliminating the need for per-VM management and virtualizing above the OS.

By virtualizing above the OS, applications can be natively provisioned without modification. Applications become abstracted from the underlying OS and organized into application management containers leveraging existing storage and networking. These applications can then be delivered to diverse environments in real time.

This approach significantly reduces the infrastructure overhead and simplifies application lifecycle management.

The Key Benefits For Customers

Once you understand the basics of the technology, it’s helpful to categorize the benefits as you think about specific use cases:

Agility

  • Logically manages application workloads based on business needs
  • Delivers or upgrades application workloads across all VMs in seconds

Simplicity

  • Integrates into existing infrastructure in minutes
  • Provisions applications as easily as installing them

Flexibility

  • Provides persistent user experience with non-persistent economics
  • Compatible with almost any hypervisor or workload, server or desktop

Efficiency

  • Optimizes use of storage, SAN IOPS, and network

Seeing Is Believing!

One of the challenges the CloudVolumes team has faced is that when people first hear about the concept of virtualizing above the OS, they don’t believe it is really possible or understand what it is. They often incorrectly equate us to an existing approach or feel that there must be some voodoo under the covers.

A lot of people consider the CloudVolumes technology to be a ‘layering’ technology in the Microsoft Windows world. However, ‘layers’ are more of a concept than an agreed-upon method of implementing a technology. For Windows-server use cases, many refer to CloudVolumes technology as containers for Windows. Hence at CloudVolumes we’ve always used the more generic term ‘virtualize above the OS,’ which enables real-time application delivery.

CloudVolumes’ technical architecture is a hybrid between layering, application virtualization and containers that enables customers to have high application compatibility while working with existing infrastructure.

To dispel a few myths about how CloudVolumes technology works, I’ll borrow an excerpt from a blog I posted prior to joining CloudVolumes when I was just beginning to realize the potential of the technology:

‘CloudVolumes achieves this by installing applications natively into storage and then capturing them as VMDK/VHD stacks outside of the OS, which can then be distributed. You may think this is just like application packaging with App-V or ThinApp, but it’s not quite that. They natively store the bits as they are written during the install, in a different location, and then take note of things like services that are started and roles that are enabled in the OS. These are then ‘put’ onto the AppStack volume, and when complete (which can span reboots, and several apps or dependencies being installed one after the other), you tell the agent, through a dialog in the provisioning VM, that you are done. That VMDK/VHD is then locked as a read-only volume which can now be assigned to others.

When this read-only volume is attached to a server or desktop VM running their agent, its contents are immediately virtualized into the running OS: registry, files, etc. Unlike ThinApp or App-V, it’s immediately available and seen by other applications on the system as if it were natively resident (no need to stream), without having to do any special registry changes to see the contents of the opaque object/package within ThinApp/App-V.’

What’s also really nice is that CloudVolumes technology doesn’t need full control of the VM to do what we do, which means we can dynamically attach apps without recomposing or rebooting.
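
To make the provision / seal / attach lifecycle above concrete, here is a toy Python model of it. It is purely illustrative: the class and method names are invented, not CloudVolumes APIs.

```python
from dataclasses import dataclass, field

# Toy model of the provision -> seal -> attach flow described above.
# Entirely illustrative; nothing here is real CloudVolumes code.

@dataclass
class AppStack:
    name: str
    files: dict = field(default_factory=dict)      # path -> bytes captured at install
    services: list = field(default_factory=list)   # services registered during install
    sealed: bool = False                           # read-only once provisioning is done

    def capture(self, path, data, service=None):
        if self.sealed:
            raise RuntimeError("stack is read-only; provision a new one to update")
        self.files[path] = data
        if service:
            self.services.append(service)

    def seal(self):
        # Corresponds to locking the VMDK/VHD as a read-only volume.
        self.sealed = True

@dataclass
class GuestVM:
    name: str
    stacks: list = field(default_factory=list)

    def attach(self, stack):
        # On attach, the agent merges the stack's files/registry/services into
        # the running OS view: no streaming, no recompose, no reboot.
        if not stack.sealed:
            raise RuntimeError("only sealed stacks can be shared across VMs")
        self.stacks.append(stack)

office = AppStack("office-2013")
office.capture(r"C:\Program Files\Office\winword.exe", b"...", service="ClickToRunSvc")
office.seal()
GuestVM("horizon-desktop-01").attach(office)   # one read-only stack, many VMs
```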

As the old adage says, a picture is worth a thousand words. Hopefully a video or two will help further clarify how our approach is unique.

This first video demonstrates how applications from a CloudVolumes Virtual Machine Disk (VMDK) container can be remotely published to Remote Desktop Session Host (RDSH) servers. This makes it easy to add new applications to an RDSH server in real time, without the need to install apps into each RDSH server, create silos of RDSH application servers, or update the RDSH template and create new VMs.

In this second video, Microsoft Office 2013 (PowerPoint and Word) is delivered in real time via CloudVolumes technology into a VMware Horizon pooled desktop. This demonstrates the agility with which IT is able to provide users with applications that were not available just seconds earlier.

In this third video, a server example is given, with CloudVolumes dynamically delivering SQL Server 2008 and a large 1 GB database containing more than 1M rows, all within a few seconds. Next, we upgrade to SQL Server 2012 and then roll back to SQL Server 2008, all within a few seconds and a few clicks!

Moving Towards Disposable IT

Our customers want an aggregate reduction in the complexity of managing applications. The overhead of application lifecycle management today is just too costly and slow.

Fundamentally customers are trying to move to a world where they can reduce complexity and cost not just by simplifying application management infrastructure, but by leveraging real-time application delivery to reduce support costs by making it easy to reset applications to a known good state and deliver rapid change.

They’d rather start fresh with a known-good configuration than spend endless hours troubleshooting. As they adopt and build new applications, a static infrastructure can’t deliver the service that is required in a modern world. They need a better way to enable their users to be more productive with rapid service delivery, given that continuous change will be the norm. In this world, you need to be able to spawn, adeptly change, and re-spawn any component or service in the infrastructure on demand. There is no need to stay with static infrastructure that is difficult to maintain. Use what you need, dispose of it, and start fresh again when you need to.

Enabling this type of capability in the infrastructure has opened up many new possibilities for customers who have found some incredibly innovative ways to build and deliver new services using CloudVolumes. Joining forces with VMware, who had the foresight to understand our vision and differentiators, will allow the CloudVolumes team to reach out to a substantially larger number of potential customers globally and collaborate to enable new possibilities.

We are very proud of what we have achieved in just over three years. I want to thank all of our customers and partners who have believed in us. VMware is an incredible engineering organization and we are honored to strengthen that team with our family. Personally I am stoked and humbled to have the opportunity to lead and serve our amazing team moving forward. I look forward to the future with excitement and sharing more of our plans at VMworld. And now, some champagne!

@harrylabana”

Aug 20, 2014
 

Small business is not “jumping” on Cloud Computing just yet, but it should! Figuring in flexibility, cost savings, and computing power and capabilities, Cloud Computing could be the answer for YOUR small business needs!

How the Cloud Will Transform Business by 2020

Inc. – By: Graham Winfrey – “The cloud can save you time and money, but it also has the potential to change the way you do business.

The percentage of U.S. small businesses using cloud computing is expected to more than double during the next six years, from 37 percent to nearly 80 percent, according to a study from consulting firm Emergent Research and financial software company Intuit.

While use of the cloud today is generally associated with the ability to reduce costs and improve efficiency, widespread adoption of this technology is projected to have a transformative effect not only on small businesses, but also on large companies and government organizations.

‘We’re seeing efficiency gains continue, but we’re also starting to see the emergence of new capabilities,’ says Steve King, a partner at Emergent. So what are these new capabilities and which companies are taking advantage of them? Here are four examples of small businesses Emergent lists as having fully adapted to the cloud:

Plug-In Players

These are businesses that plug into cloud-based service providers. San Francisco-based ZenPayroll is one example. The company automates and handles all payroll taxes, filings and forms so that small businesses never have to fill out government documents. Of the six million small businesses in the U.S. today that require payroll, 40 percent process payroll by hand, and one third of those get fined every year for incorrectly paying their payroll taxes, according to ZenPayroll chief executive officer Joshua Reeves. By ‘plugging’ into cloud-based providers such as ZenPayroll, which processes more than $800 million in payroll annually, small businesses will be able to focus on mission-critical areas of business.

Hives

Hives refer to businesses that operate with employees working in different locations and companies that have increasingly flexible staff levels. ‘This used to be called the Hollywood model, where you would form up a team, accomplish a task, and de-form,’ says King. ‘What we’re seeing now is that the cloud is enabling this to happen. We’re seeing many cases where people are pooling together resources, whether it’s in a co-working space or a shared workspace.’

Head-to-Headers

These are small businesses that are competing with major firms for business, a growing trend in the U.S., thanks in part to the cloud, according to the Emergent report. Airbnb is one example, but the same trend is taking place with small financial advisory firms. For example, two of the top ten U.S. merger and acquisition advisory firms by deal volume are ‘kiosk’ investment banks, referring to firms with between three and five employees, according to King. ‘These are the guys that compete with large businesses, taking advantage of the capabilities that the cloud provides,’ says King. ‘They’re competing with Morgan Stanley, JPMorgan and the others.’

Portfoloists

Portfoloists are freelancers who rely on multiple income streams and use the cloud to manage these streams efficiently. Today, 30 percent of small business owners with fewer than 20 employees have at least one second job, according to Emergent. ‘We’re seeing more and more examples of this across the economy,’ King says.

The lesson for entrepreneurs who don’t use cloud computing is simple: In six years, 80 percent of competing businesses are expected to adopt some form of cloud computing. Those that don’t, risk falling behind.”

Aug 20, 2014
 

As I mentioned before, Docker has become quite popular! Now, Docker has Flocker, to help with storage issues!

Flocker Adds Storage Virtualization To Docker Containers

EnterpriseTech – By: Timothy Prickett Morgan – “The Docker application container system developed by platform cloud provider dotCloud, now known as Docker Software, has been gaining momentum in recent months and is on its way to becoming an alternative virtualization method alongside full-on server virtualization. Docker is far from being complete, however, and some hosting experts who know a thing or two about data management on public and private clouds are applying that expertise to help virtualize the storage that underlies Docker containers.

Docker is a means of encapsulating a Linux operating system runtime and an application stack in a software container with specific resources allocated to it. The containers allow for application stacks to be managed as a whole and for multiple application stacks to be run side-by-side on physical servers, much as you can do with a server virtualization hypervisor such as KVM from Red Hat, Xen from Citrix Systems, ESXi from VMware, or Hyper-V from Microsoft. The important thing about Docker is that it is a much thinner layer of software and therefore imposes much less of a performance penalty compared to hypervisors. The commercialized version of Docker 1.0 was announced back in June, including the Docker Engine, which provides the hardware abstraction layer, and Docker Hub, a repository for software images that run inside of containers.

While Docker does a fine job of abstracting the compute and memory resources and doling them out to containers, Docker really only covers a subset of the application stack. So, for instance, Docker is fine for Web server front ends or API servers that have access to shared storage and that are replicated in the application stack for high availability. But, says Luke Marsden, CEO at ClusterHQ, Docker cannot offer the same flexibility and portability for application components such as message queuing servers, NoSQL databases, or relational databases.

‘Docker is great for stateless applications that do not write data to a file system,’ Marsden explains to EnterpriseTech. ‘But as soon as you put an application into a container and you mount a file system, that container gets stuck on that machine.’
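
Marsden’s point is easy to demonstrate with the stock Docker CLI. A minimal sketch from Python (the host path, container name, and image are arbitrary examples):

```python
import subprocess

# The "stuck container" problem in one command: bind-mounting host storage
# ties the container's state to this particular machine.
subprocess.run([
    "docker", "run", "-d", "--name", "db",
    "-v", "/srv/pgdata:/var/lib/postgresql/data",  # host dir -> container dir
    "postgres",
], check=True)
# Moving this container to another host now means moving /srv/pgdata too --
# exactly the gap Flocker sets out to close.
```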

All of the same things that have been added to server virtualization hypervisors to virtualize storage and networking in addition to compute and memory now have to be added to the Docker stack. And ClusterHQ is taking on the storage issues first. This means virtualizing local storage for Docker containers and giving them backup and restore capability as well as failover and high availability for both the container and its associated storage. It also means virtualizing storage so containers and their associated storage can be moved around as workloads change on a cluster, and allowing for clustering and orchestration for complex multi-tier applications, treating them as a unit instead of disparate components.

ClusterHQ has some experience in this area already, which is why it is taking on the storage virtualization challenge for Docker with a product it is calling Flocker. The company spent five years developing a hosting stack called HybridCluster, which was based on the FreeBSD variant of Unix and mashed up FreeBSD’s container implementation, called Jails, with the open source implementation of the former Sun Microsystems’ Zettabyte File System (ZFS) to provide exactly the needed capabilities. This was all done on local storage without resorting to a shared storage area network hooked into all nodes in the cluster (which is the easy but expensive way to do it).

The HybridCluster software allows for Jails to be live migrated, and because it is synchronized with ZFS, the data underpinning the Jails is moved along with the application in a container. And therefore, you can do high availability and distributed resource management on HybridCluster in a way that is not possible today with Docker. (VMware’s vSphere extensions and vCloud Suite add-ons do the same for the ESXi hypervisor, and have for years. This layer of management software is a big part of the company’s revenue stream these days, although VMware does not say how much in its financial reports.)

HybridCluster can run on a private cloud in your own datacenter atop OpenStack/KVM or VMware vSphere, or it can be layered on top of Amazon Web Services, Google Compute Engine, or Rackspace Cloud. The HybridCluster software was designed by hosters for hosters, as Marsden puts it, and its ideas have been battle-tested supporting thousands of applications in production. They are also the foundation of the Flocker companion to Docker that ClusterHQ has just launched.

Flocker 0.1 is not a complete product yet, but it will nonetheless be useful for early adopters of Docker technology looking for a storage virtualization layer. At the moment, Flocker requires ZFS to be used on the compute/storage nodes in the cluster. Flocker is based on the open source implementation of ZFS, and Marsden says the company is heavily involved in the OpenZFS project and contributes heavily to the Linux variant. ClusterHQ is looking at making the storage back-end for Flocker pluggable so other file systems can underpin Flocker, but at the moment Marsden says that BTRFS, the other up-and-coming advanced file system for Linux machines, does not have all of the necessary features to do what Flocker needs.

The way Flocker works is simple enough to explain. The tool creates a virtual disk volume, called a Flocker volume, that rides atop ZFS local storage. This is where the persistent data behind a database server or a NoSQL datastore node resides. Docker rides atop the Linux kernel, partitioning up the operating system and laying down containers for applications. Flocker drops a network proxy on top of that that links all of the server nodes together in the cluster, all of the containers on a single node to each other, and all of the containers to the outside world where users access the applications. This proxy is the secret sauce because it manages all of the links between containers, storage, and users as both the containers and their storage are live migrated around a cluster.
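
The storage primitive underneath is visible with stock ZFS commands: snapshot a dataset, then stream it to another node with send/receive. A rough Python sketch of that replication step, with invented dataset and host names (this is the building block, not Flocker’s actual code):

```python
import subprocess

def replicate_volume(dataset, snap_name, dest_host):
    """Snapshot a ZFS dataset and stream it to another node via
    zfs send | ssh ... zfs recv. Dataset and host names are invented;
    this shows the primitive a Flocker-style migration builds on."""
    snap = f"{dataset}@{snap_name}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", dest_host, "zfs", "recv", "-F", dataset],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError(f"zfs send of {snap} failed")

replicate_volume("tank/flocker/db-volume", "premigrate", "node2.example.com")
```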

At the moment, Flocker is just a command line tool, and Marsden says it will take somewhere between 12 and 18 months to get Flocker fully developed to a 1.0 release, which will include a runtime environment and features that allow for the movement of data and containers to be done in a fully automated fashion. At the moment, Flocker 0.1 is free and open source, but at some point there will likely be an enterprise edition with extra goodies that have a subscription as well as subscriptions for supporting the open source edition.

Marsden says that Flocker could be adapted to support FreeBSD Jails in addition to the plain vanilla LXC containers that are part of the current Linux server editions from Red Hat, SUSE Linux, Canonical, Oracle, and others. But the question is whether any of these will be necessary, given the popularity of Docker. And the real question is when Docker Software just bites the bullet and acquires ClusterHQ to unite the Docker and Flocker tools. Docker has just raised $40 million, so it has the cash if it wants to.

ClusterHQ has also raised $3 million in seed money to fund the development of Flocker, so it doesn’t need to be acquired. The company is based in Bristol, England, and had a dozen employees as it launched the Flocker effort. In addition to Marsden, who was a software engineer at TweetDeck (acquired by Twitter three years ago for £25 million) as well as being the CEO and co-founder at ClusterHQ, the Flocker team includes chief architect Jean-Paul Calderon, who is heavily involved in the Twisted event-driven network engine project, and chairman Mark Davis, who was a founder and CEO at storage virtualization company Virsto Software, which VMware acquired in February 2013 for an undisclosed sum.

Maybe VMware should buy Docker Software and ClusterHQ before this Docker container craze gets even crazier. At the moment, the majority of VMware’s server virtualization business is still driven by the need to partition servers and aggregate Windows Server workloads, but if Docker takes off as many expect, it could become the preferred method of virtualization for Linux. And that would eat into VMware’s business. Alternatively, Red Hat could swoop in and position the Docker/Flocker combo as an alternative to its own KVM as well as other hypervisors. Docker has a valuation in the range of around $400 million, and at five times its seed funding, ClusterHQ could probably be had for around $15 million. Call it a cool $600 million for a nascent – but important – software technology. Anything that emulates the way that Google partitions and manages its workloads, which Docker plus the open source Kubernetes container manager created by Google does, has to command attention in the market.”

Aug 5, 2014
 

As I predicted way back when, Docker keeps becoming a bigger and bigger deal in the Virtualization world!

Who’s using Docker?

OpenSource.com – By: Ben Lloyd Parson – “I’ve spent the last couple of months working as an intern for The Linux Foundation, doing research on new developments and adoption trends in the open source industry. If you have spent any amount of time reading about open source over the last year, you have probably heard about Docker; a lot of people are talking about it these days and the impact it’s going to have on virtualization and DevOps.

With new technologies like this, it can often be challenging to filter out the hype and understand the practical implications. Additionally, complex jargon often makes subjects like Linux containers confusing to the layman and limits discussion to those who are deeply knowledgeable on the subject. With this article, I will step back for a moment from the discussion of what Docker can do to focus on how it is changing the Linux landscape.

What is Docker again?

In a nutshell, Docker is an extension of Linux Containers (LXC): a unique kind of lightweight, application-centric virtualization that drastically reduces overhead and makes it easier to deploy software on servers. Solomon Hykes, the founder of Docker, explains this functionality well with his analogy of using standardized shipping containers to ship diverse goods around the globe. Docker allows systems administrators and developers to build applications that can be run on any Linux distribution or hardware in a virtualized sandbox without the need to make custom builds for different environments. These features are attracting a lot of big names and have turned Docker into one of the most successful open source projects of the last year. It seems Docker is here to stay, so what does this mean for Linux?

The many uses of Docker

Red Hat has been at the forefront of Docker adoption and development, with Paul Cormier being one of the biggest advocates for its use. The company has been working closely with Docker since September of last year, and has focused on improving the functionality of Docker on the OpenShift platform. The overall focus has been on using Docker as a tooling mechanism to improve resource management, process isolation, and security in application virtualization. These efforts culminated in the launch of Project Atomic, a lightweight Linux host specifically tailored to run Linux containers. The focus of this project is to make containers easy to deploy, update, and roll back in an environment that requires far fewer resources than a typical Linux host.

Docker for DevOps

Another major focal point for Docker use is in the DevOps community. Docker has been designed in a way that it can be incorporated into most DevOps applications, including Puppet, Chef, Vagrant, and Ansible, or it can be used on its own to manage development environments. The primary selling point is that it simplifies many of the tasks typically done by these other applications. Specifically, Docker makes it possible to set up local development environments that are exactly like a live server, run multiple development environments from the same host that each have unique software, operating systems, and configurations, test projects on new or different servers, and allow anyone to work on the same project with the exact same settings, regardless of the local host environment. Finally, Docker can eliminate the need for a development team to have the same versions of everything installed on their local machine.

Spotify is working on incorporating Docker into their development workflow. The repeatable nature of Docker images makes it easier for them to standardize their production code and configurations. Their work has led to the creation of Helios, an application that manages Docker deployments across multiple servers and alerts them when a server isn’t running the correct version of a container.

Docker for continuous integration

eBay has focused on incorporating Docker into their continuous integration process to standardize deployment across a distributed network of servers that run as a single cluster. They isolate application dependencies inside containers to address the issue of each server having different software versions, application dependencies, and special hardware. This means the host OS does not need to be the same as the container OS, and their end-goal is to have different hardware and software systems running as a single Mesos cluster.

Docker for the security of a sandbox

Remote Interview develops software for recruiters to test the development skills of job candidates. They released CompileBox, a Docker-based sandbox that can run untrusted code and return the output without risking the host on which the software is running. During the development of CompileBox, the team at Remote Interview considered using Chroot jails, Ideone, and traditional virtual machines, but Docker was selected as the best option. Chroot does not provide the needed level of security, Ideone can quickly become cost-prohibitive, and virtual machines take an exceptionally long time to reboot after they are compromised. Docker was the obvious choice for this application because malicious code that attempts to destroy the system would be limited to the container and containers can be created and destroyed quickly as needed.
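
For a sense of the ingredients, here is a minimal CompileBox-style sketch using the stock Docker CLI from Python. It assumes a local Docker daemon and the public python:3 image; the resource limits are illustrative, not Remote Interview’s actual settings.

```python
import subprocess

def run_untrusted(code, timeout=10):
    """Run untrusted Python inside a throwaway container.
    A minimal sandbox sketch, not CompileBox itself."""
    cmd = [
        "docker", "run", "--rm",   # throw the container away afterwards
        "--net", "none",           # no network access from inside
        "--memory", "128m",        # cap memory so a runaway can't hurt the host
        "python:3", "python", "-c", code,
    ]
    done = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    return done.stdout, done.stderr

out, err = run_untrusted("print(sum(range(10)))")
print(out.strip())   # -> 45; malicious code dies with the container
```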

The future of Docker

A number of companies and organizations are coming together to bring Docker to desktop applications, a feat that could have wide-ranging impacts on end-users. Microsoft is even jumping on board by bringing Docker to their Azure platform, a development that could potentially make integration of Linux applications with Microsoft products easier than ever before.

Docker 1.0 was released on June 9th, during the first day of DockerCon, and it is considered the first release of Docker stable enough for enterprise use. Along with this launch, a new partnership was announced between Docker and the companies behind libcontainer, creating a unified effort toward making libcontainer the default standard for Linux-based containers. The growth of Docker and Linux containers shows no sign of slowing, and with new businesses jumping on the bandwagon on a regular basis, I expect to see a wealth of new developments over the coming year.”

Jul 27, 2014
 

This is a special edition of Virtzine! We are looking at Userful Multiplatform Virtual Desktop Infrastructure (VDI) for small businesses. This is an amazing compilation of Open Source technologies to create a simple, easy to use, yet powerful VDI system!


(Click on the buttons below to Stream the Netcast in your “format of choice”)
Streaming M4V Video
 Download M4V
Streaming WebM Video
 Download WebM
Streaming MP3 Audio
 Download MP3
(“Right-Click” on the “Download” link under the format button
and choose “Save link as…” to save the file locally on your PC)

Subscribe to Our Video (M4V) RSS Feed
http://www.virtzine.com/category/netcast/feed

Subscribe to Our Audio (MP3) RSS Feed
http://www.virtzine.com/category/audio/feed


Also available on YouTube:
http://youtu.be/z-xBB11f-pc


Jun 23, 2014
 

Ubuntu closes Ubuntu One service, Skylabel has released an Open-Source version of the Amazon S3 service, VMware could ‘crush’ Citrix Systems in VDI, Red Hat Enterprise Linux 7 is available, VMware Horizon View Version 6 made available on June 19, 2014!


(Click on the buttons below to Stream the Netcast in your “format of choice”)
Streaming M4V Video
 Download M4V
Streaming WebM Video
 Download WebM
Streaming MP3 Audio
 Download MP3
(“Right-Click” on the “Download” link under the format button
and choose “Save link as…” to save the file locally on your PC)

Subscribe to Our Video (M4V) RSS Feed
http://www.virtzine.com/category/netcast/feed

Subscribe to Our Audio (MP3) RSS Feed
http://www.virtzine.com/category/audio/feed


Also available on YouTube:
http://youtu.be/tTRUTKkFh5s

