• The Virtual World

    Microsoft Steps Up, Citrix Gives Up, VMware Still King!

    Citrix is now a “niche” player in server virtualization, and VMware is still King of the Hill, but Microsoft is making headway, largely because Hyper-V comes with the OS.

    Gartner Names Microsoft and VMware as Server Virtualization Leaders

    Redmond Magazine – By: Keith Ward – “Gartner has released its 2015 ‘Magic Quadrant’ for x86 server virtualization infrastructure, which found that Microsoft and VMware are leading the pack in that market.

    VMware was rated highest by a significant margin, ranking at the top for both ‘Completeness of Vision’ and ‘Ability to Execute.’ Microsoft was behind in both categories, but was still far ahead of the other five companies listed as important in server virtualization: Oracle, Odin, Red Hat, Citrix and Huawei. They had a place as ‘Niche Players.’

    VMware: Still King of the Hill

    As part of its analysis, every Magic Quadrant report lists strengths and cautions for each vendor. Gartner is still bullish on VMware: ‘VMware continues to have dominant market share, and customers remain very satisfied with product capabilities and vendor support,’ the report states, adding that the company is still growing at a healthy clip.

    Despite those strengths, the report points out a number of challenges for VMware, and many come from Microsoft. ‘Client inquiries have been significantly increasing about comparisons between VMware and Hyper-V, specifically,’ Gartner says. In the SMB space, especially, Microsoft is becoming a threat: ‘… as Microsoft gains marketing momentum, VMware will need to continue to offer low-price packages to remain competitive in this market.’

    Another concern for VMware is its lack of traction in public cloud, which Gartner says could have large ramifications down the road: ‘While VMware has a dominant share for existing enterprise workloads, its share of the newer, cloud workloads is much smaller — a major inhibitor to growth.’

    On the whole, though, Gartner gives VMware high marks for its vision of extending virtualization from the datacenter to the cloud, its strong technology, and its customer satisfaction.

    Microsoft: A Solid Contender

    In what should be considered a major victory for Redmond, Gartner says that Microsoft ‘… has effectively closed most of the functionality gap with VMware in terms of the x86 server virtualization infrastructure.’ That’s good news for companies whose virtualization efforts are newer or less entrenched, as it means there are more comparable options.

    The issue then becomes one of saturation, as most shops have already put their server virtualization infrastructures in place. ‘Its challenge is neither feature nor functions, but competing in a market with an entrenched competitor, VMware,’ Gartner says. Microsoft is winning a ‘good’ percentage of enterprises still implementing virtualization, the report states, but there aren’t that many out there.

    One area in which Microsoft still falls short of VMware is in its virtualization management tools, which Gartner says ‘… have some ease-of-use weaknesses.’

    On the other hand, Microsoft has an advantage it’s maintained since the early days when it made Hyper-V free: price. How much of an advantage this remains is a debatable question, but enterprises that have a large percentage of Windows workloads virtualized are the most likely to standardize on Hyper-V, since it’s free.

    Citrix: A New Direction

    Citrix used to be in the ‘Leaders’ quadrant, but has seen that position slip over the years, to where Gartner now considers it a ‘Niche Player.’ The report this year has both good news and bad news. First, Gartner believes Citrix has thrown in the towel when it comes to the leaders: ‘… it is clear Citrix is no longer investing to keep up with market leaders VMware and Microsoft — at least for traditional server virtualization in the data center,’ the report states.

    However, Gartner sees this less as a failing than as a new direction, into cloud computing. From the report: ‘For cloud infrastructures, the Xen hypervisor will remain the most widely used architecture for public infrastructure as a service (IaaS) cloud providers, if for no other reason than it is used by Amazon Web Services.’ Gartner sees Citrix’s goal as growing its CloudPlatform business.

    Since 2012, Gartner has downgraded Citrix on a consistent basis. It started as a ‘Leader,’ dropped to a ‘Visionary’ in 2013, then tumbled again into the ‘Niche Player’ category last year.

    One interesting note is that Gartner estimates that ‘About 75% of x86 server workloads are virtualized,’ but adds that virtualization technologies ‘are becoming more lightweight.’ It doesn’t specifically say so, but it would be safe to assume that container technologies, such as Docker, are at least part of what Gartner means.”

    Published by:
  • The Virtual World

    Enterprise Computing Needs Docker

    I’ve been predicting that Docker will be big in Corporate Computing. Check this out!

    Why Enterprises Need Containers and Docker

    Logicworks – By: Lindsay Van Thoen – “At DockerCon 2015 last week, it was very clear that Docker is poised to transform enterprise IT.

    While it traditionally takes years for a software innovation — and especially an open source one — to reach the enterprise, Docker is defying all the rules. Analysts expect Docker will be the norm in enterprises by 2016, less than two years after its 1.0 release.

    Why are Yelp, Goldman Sachs, and other enterprises using Docker? Because in many ways, enterprises have been unable to take full advantage of revolutions in virtualization and cloud computing without containerization.

    Docker, Standard Containers, and the Hybrid Cloud

    If there ever was a container battle among vendors, Docker has won — and is now nearly synonymous with container technology.

    Most already understand what containers do: describe and deploy the template of a system in seconds, with all infrastructure-as-code, libraries, configs, and internal dependencies in a single package, so that the Docker file can be deployed on virtually any system.
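
    As a concrete (if hedged) illustration of that single-package idea, the short sketch below uses the Docker SDK for Python (the “docker” package) to build an image from a directory containing a Dockerfile and then run it. The path, tag, and port are hypothetical placeholders, not details from the article:

        # Minimal sketch, assuming a local Docker daemon and a ./myapp directory with a Dockerfile.
        import docker

        client = docker.from_env()  # connect to the local Docker daemon

        # Build the image: libraries, configs, and dependencies are baked into one package.
        image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

        # That same image now runs unchanged on any host with a Docker engine.
        container = client.containers.run("myapp:1.0", detach=True, ports={"8000/tcp": 8000})
        print(container.id)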

    But the leaders of the open-source project wisely understand that in order to work in enterprises, there needs to be a “standard” container that works across more traditional vendors like VMware, Cisco, and across new public cloud platforms like Amazon Web Services. At DockerCon, Docker and CoreOS announced that they were joining a Linux Foundation initiative called the Open Container Project, where everyone agrees on a standard container image format and runtime.

    This is big news for enterprises looking to adopt container technology. First, in a market that is becoming increasingly skittish about “vendor lock-in”, container vendors have removed one more hurdle to moving containers across AWS, VMware, Cisco, etc. But more importantly for many IT leaders, this container standardization makes it that much easier to move across internal clouds operated by multiple vendors or across testing and production environments.

    A survey of 745 IT professionals found that the top reason IT organizations are adopting Docker containers is to build a hybrid cloud. Despite the promises of the flexibility of hybrid clouds, it is actually quite a difficult engineering feat to build cloud bursting systems (where load is balanced across multiple environments), and there is no such thing as a “seamless” transition across clouds. Vendors that claim to facilitate this often do so by compromising feature sets or by building applications to the lowest common denominator, which often means not taking full advantage of the cost savings or scalability of public clouds.

    By building in dependencies, Docker containers all but eliminate these interoperability concerns. Apps that run well in test environments built on AWS will run exactly the same in production environments in on-premises clouds.
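
    A minimal sketch of that portability claim, again with the Docker SDK for Python: the same image and the same invocation are pointed at two different Docker hosts. Both host URLs are hypothetical stand-ins for, say, an AWS test box and an on-premises node:

        # Hedged sketch: identical image, identical command, two different environments.
        import docker

        aws_test = docker.DockerClient(base_url="tcp://test.aws.example.com:2376", tls=True)
        on_prem = docker.DockerClient(base_url="tcp://prod.dc.example.com:2376", tls=True)

        for env in (aws_test, on_prem):
            # The dependencies travel inside the image, so the result is the same either way.
            output = env.containers.run("myapp:1.0", command="pytest -q", remove=True)
            print(output.decode())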

    Docker also announced major upgrades in networking that allow containers to communicate with each other across hosts. Since acquiring SocketPlane six months ago, Docker has had the SocketPlane team working to complete a set of networking APIs; it looks like Docker is hard at work making networking enterprise-grade, so that developers are guaranteed application portability throughout the application lifecycle. Read all the updates from DockerCon 2015 here.

    Reducing complexity and managing risk

    Docker does add another level of complexity when engineers are setting up the environment. On top of virtualization software, auto scaling, and all of the moving parts of automation and orchestration now in place in most enterprises, Docker may initially seem like an unnecessary layer.

    But once Docker is in place, it drastically simplifies and de-risks the deploy process. Developers have more of a chance to focus on the application, knowing that once they package it as a Docker image, it will run on the server. They can build their app on their laptop, deploy it as a Docker image, and type in a single command to deploy it to production. On AWS, using ECS with Docker takes away some of the configuration you would otherwise need to complete yourself. You can achieve workflows where Jenkins or other continuous integration tools run tests and AWS CloudFormation scales up an environment, all in minutes.
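
    To give a rough, hedged sense of that “one command to production” workflow on AWS, the boto3 sketch below registers an ECS task definition for the image and launches it on a cluster. The region, account ID, image URI, and cluster name are hypothetical:

        # Hedged sketch of deploying a container to production via Amazon ECS with boto3.
        import boto3

        ecs = boto3.client("ecs", region_name="us-east-1")

        # Register a task definition pointing at the image built earlier.
        ecs.register_task_definition(
            family="myapp",
            containerDefinitions=[{
                "name": "myapp",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0",
                "memory": 512,
                "portMappings": [{"containerPort": 8000, "hostPort": 8000}],
            }],
        )

        # Launch it on the production cluster; ECS handles placement and scheduling.
        ecs.run_task(cluster="prod-cluster", taskDefinition="myapp")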

    This simplified (and shortened) deployment cycle is even more useful in complex environments, where developers often must ‘remember’ to account for varying system and infrastructure requirements during the deploy process. In other words, the deploy process happens faster with fewer errors, so developers can focus on doing their jobs. System engineers do not have to jump through the same hoops to make sure an application runs on infrastructure it was not configured for.

    Many large start-ups, like Netflix, have developed work-arounds and custom solutions to simplify and coordinate hundreds of deploys a day across multiple teams. But as enterprises are in the nascent stages of continuous delivery, Docker has come at a perfect time to eliminate the pain of complex deploys before they have to develop their own workarounds.

    Caveat: Docker in Hybrid Environments is Not ‘Easy’

    We mentioned it above, but it is important to note that setting up Docker is a specialized skill. It has even taken the senior automation engineers at Logicworks quite some time to get used to. No wonder it was announced at DockerCon that the number of Docker-related job listings went from 2,500 to 43,000 in 2015, an increase of 1,720 percent.

    In addition, Docker works best in environments that have already developed sophisticated configuration automation practices (using Puppet or Chef), where engineers have invested time in developing templates to describe cloud resources (CloudFormation). Docker also requires that these scripts and templates change. Most enterprises will either have to hire several engineers to implement Docker or hire a managed service provider with expertise in container technology.
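
    As a small, hedged example of the CloudFormation side of that practice, the boto3 sketch below stands up an environment from an existing template and waits for it to complete before any containers are deployed. The template file and stack name are hypothetical:

        # Hedged sketch: driving an existing CloudFormation template from Python with boto3.
        import boto3

        cf = boto3.client("cloudformation", region_name="us-east-1")

        with open("environment.yaml") as f:
            template_body = f.read()

        # Stand up the environment the containers will run on.
        cf.create_stack(
            StackName="docker-test-env",
            TemplateBody=template_body,
            Capabilities=["CAPABILITY_IAM"],  # required if the template creates IAM resources
        )

        # Block until the stack is ready before deploying containers onto it.
        cf.get_waiter("stack_create_complete").wait(StackName="docker-test-env")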

    On top of this, there are lingering concerns over the security of Docker in production — and rightly so. While many enterprises, like Yelp and Goldman Sachs, have used Docker in production, there are certain measures one can take to protect these assets for applications carrying sensitive data or compliance obligations.

    Docker did announce the launch of Docker Trusted Registry last week, which is a piece of software that securely stores container images. It also comes with management features and support, which serves Docker’s paid-support business objectives. This announcement is specifically targeted at the enterprise market, which has traditionally been skittish of open source projects without signatures and support (e.g., Linux vs. Red Hat). AWS and other cloud platforms have already agreed to resell the technology.
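
    Mechanically, a private registry like Docker Trusted Registry is used much like any other Docker registry. Here is a hedged sketch using the Docker SDK for Python; the registry hostname, repository path, and credentials are hypothetical placeholders:

        # Hedged sketch: authenticating against and pushing to a private registry.
        import docker

        client = docker.from_env()
        client.login(username="ci-bot", password="secret", registry="dtr.example.com")

        # Retag the local image for the private registry, then push it.
        image = client.images.get("myapp:1.0")
        image.tag("dtr.example.com/engineering/myapp", tag="1.0")
        for line in client.images.push("dtr.example.com/engineering/myapp",
                                       tag="1.0", stream=True, decode=True):
            print(line)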

    Over the next 12 months, best practices and security protocols around containers will become more standardized. And as they do, enterprises and start-ups will benefit from Docker to create IT departments that function as smoothly as container terminals.

    Logicworks is an enterprise cloud software and services company specializing in enterprise hybrid and managed AWS solutions for the healthcare, financial, legal and commerce industries. Contact us to learn more about our cloud solutions using Docker.”

    Published by:
  • The Virtual World

    Virtualize Your Domain Controllers

    I, personally, don’t find this all that controversial, but I have a friend at work who argues it is crazy… so I know it is an issue worth pondering!

    Time To Let Go of Your Physical Domain Controller

    Virtualization Review – By: Rick Vanover – “There was a time when it was taboo to virtualize critical applications, but that time has long passed. I speak to many people who are 100 percent virtualized, or very near that mark, for their datacenter workloads. When I ask about those aspects not yet virtualized, one of the most common answers is ‘Active Directory’.

    I’d encourage you to think a bit about that last mile. For starters, having a consistent Hyper-V or vSphere platform is a good idea, rather than keeping just one system outside it. Additionally, I’m convinced that there are more options with a virtualized workload. Here are some of my tips to consider when you take that scary step to virtualize a domain controller (DC):

    Always have two or more DCs. This goes without saying, but a second DC covers the situation when one is offline for maintenance such as Windows Updates, or when the vSphere or Hyper-V host suffers a hardware failure.

    Accommodate separate domains of failure. The reasoning behind having one physical domain controller is often to make it easier to pinpoint whether vSphere or Hyper-V is the problem. Consider, though: By having one DC VM on a different host, on different storage or possibly even a different site, you can address nearly any failure situation. I like to use the local storage on a designated host for one DC VM, and put the other on the SAN or NAS.

    Make sure your ‘out of band access’ works. Related to the previous point, make sure you know how to get into a host without System Center Virtual Machine Manager or vCenter Server. That means having local credentials or local root access documented, and being able to reach each host by IP address (without relying on DNS), is required. [A quick connectivity sketch follows at the end of this post.]

    Set the DCs to auto-start. If this extra VM is on local storage, make sure it’s set to auto-start with the local host’s configuration. This will be especially helpful in a critical outage situation such as a power outage and subsequent power restoration. Basic authentication and authorization will work.

    Don’t P2V that last domain controller — rebuild it instead. The physical to virtual (P2V) process is great, but not for DCs. Technically, there are ways to do it, especially with the manageable services that allow DC services to be stopped; but it’s not recommended.

    It’s better to build a new DC, promote it and then demote and remove the old one. Besides, this may be the best way to remove older operating systems, such as Windows Server 2003 (less than one year left!) and Windows Server 2008 in favor of newer options such as Windows Server 2012 R2 and soon-to-be Windows Server 2016.

    Today it’s easier, with plenty of guidance. The resources available from VMware and Microsoft for virtualizing DCs are very extensive, so there’s no real excuse not to make the move. Sure, if it were 2005 we’d be more cautious in our ambitions to virtualize everything, but times have changed for the better.

    Do you still hold onto a physical domain controller? If so, why? Share your logic as to why you still have it, and let’s see if there’s a reason to virtualize the last mile.”
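
    To make the “out of band access” tip concrete, here is a small, hedged Python sketch that checks each hypervisor host’s management interface by raw IP address, so the test still works when AD and DNS are down. The names, IP addresses, and port are placeholders for whatever is in your own runbook:

        # Hedged sketch: verify management access to each host by IP, with no DNS lookup.
        import socket

        HOSTS = {
            "esxi-01 (local-storage DC)": ("192.168.10.11", 443),  # mgmt interface, by IP
            "esxi-02 (SAN DC)": ("192.168.10.12", 443),
        }

        for name, (ip, port) in HOSTS.items():
            try:
                # Connecting straight to the IP avoids any dependency on AD or DNS.
                with socket.create_connection((ip, port), timeout=5):
                    print(f"{name}: management interface reachable at {ip}:{port}")
            except OSError as err:
                print(f"{name}: UNREACHABLE at {ip}:{port} ({err})")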

    Published by:
  • The Virtual World

    Get Ready for Containers!

    Looks like I am not the only one saying that container tech is a big deal!

    Containers Are the Next Game Changer

    Virtualization Review – By: Sean Massey – “Earlier this week, VMware announced two new open-source projects based on container technology. Project Photon is a lightweight Linux distribution designed for running containers, and Project Lightwave is an identity and access management tool for container workloads.

    This, plus Microsoft’s recent announcement of Windows Server Nano with container support, sends a clear message about the future. Containerization is here, and it, along with a raft of other technologies such as configuration management, will be coming to your datacenter or environment sooner rather than later.

    While this just marks the next stop on the evolution of the data center, it brings a huge change for many systems administrators in environments that are predominately based on Windows Server. Since the release of Windows Server 2008, Microsoft has steadily decreased the need for the GUI-based management tools with Windows Server Core, PowerShell, and various remote-management technologies. At the same time, infrastructures are getting easier to automate and manage through the use of software-defined solutions.

    The only constant in IT is change. These changes won’t happen overnight, so there’s time to get out in front of them. Many of the tools that will make up the next paradigm are open source or commercial software with a free tier, so you can run them in a lab environment or on your own time. These tools aren’t always the easiest to learn or use, but if you start learning them now, you’ll be ready when your business says they need it.

    While sysadmins won’t become irrelevant, the nature of their work will change. It will be less about managing Windows, Linux and the underlying infrastructure and more about automating, orchestrating, and managing applications. It’s time to pivot towards these areas now.”

    Published by:
  • VirtZine - Video Netcasts

    VirtZine #49 – Video – “Acquisitions and New Versions”

    Fedora 21 for the Open Source cloud, Citrix buys Sanbolic to ease desktop storage, VMware acquires Immidio Flex, the future of Virtualization, Docker competitor puts Windows Server into containers, vSphere 6 is now available, a new version of Ericom!


    Subscribe to Our Video Feed
    http://www.virtzine.com/category/netcast/feed

    Subscribe to Our Audio Feed
    http://www.virtzine.com/category/audio/feed


    Also available on YouTube:
    http://youtu.be/WuCfF3zDABY


    Published by:
  • VirtZine - Audio Netcasts

    VirtZine #49 – Audio – “Acquisitions and New Versions”

    Fedora 21 for the Open Source cloud, Citrix buys Sanbolic to ease desktop storage, VMware acquires Immidio Flex, the future of Virtualization, Docker competitor puts Windows Server into containers, vSphere 6 is now available, a new version of Ericom!


    Subscribe to Our Video Feed
    http://www.virtzine.com/category/netcast/feed

    Subscribe to Our Audio Feed
    http://www.virtzine.com/category/audio/feed


    Also available on YouTube:
    http://youtu.be/WuCfF3zDABY


    Published by:
  • The Virtual World

    A New Version of Ericom is Worth a Look!

    I have looked at Ericom before. I installed it and tested it, and was fairly impressed. In this article, Brian Madden, a renowned virtualization expert, says that this new version is REALLY worth considering!

    Ericom Connect v7: a ground-up rewrite of their desktop virtualization product that seems more modern than Citrix or VMware!

    BrianMadden.com – By: Brian Madden – “Yesterday Ericom released Ericom Connect v7, the latest version of their VDI & RDSH desktop and app virtualization solution (formerly known as AccessNow). Connect is a complete rewrite which was two years in the making. It focuses on enterprise scalability (supporting 100k concurrent users), ease of use (install everything in 19 minutes), and reporting based on actual useful business intelligence (thanks to their new data architecture I’ll get to in a bit).

    From a technical standpoint, Ericom did away with the traditional SQL database and config files. ‘That’s 1980s technology,’ they said, and it’s not what newly-designed modern systems are based on. Instead, Connect has a distributed grid-based architecture where they moved the entire database and business logic into memory. This grid is distributed across multiple Connect servers and can scale to millions of transactions per second. While that’s overkill right now, Ericom wanted to design something that would still make sense 10-15 years from now. (There’s still a SQL database for backup / offline storage.)

    Anyway, the grid has huge scalability, reliability, and high availability. It’s got alerts, logging, and real-time reporting. So really all the pieces for legitimate business intelligence are built right into the grid.

    Ericom Connect v7 architecture

    We already mentioned that the Connect Servers create this in-memory grid to hold all the configuration and BI information. Then on the backend behind them are all the hosts users actually connect to. There’s an Ericom Connect Host Agent running in each Windows host, and they support RDSH, VDI, and physical desktops. The host agents are pretty straightforward and talk to the grid to report session counts, health, stats, etc.

    On the front end, there’s a client service that connects the end user clients to their actual sessions. The actual connection ends up being directly between the client and the Connect agent in the session (possibly with a load balancer and/or an Ericom Connect Secure Gateway in between).

    On the client side, Ericom has clients for Windows, Mac, Linux, HTML5, iOS, Android, Chromebook, and BlackBerry.

    Management

    Management is done via a web-based dashboard. The new dashboard is based on widgets and looks nice and modern. (Waaaaay better than the AccessNow admin console!)

    The whole thing just feels really intuitive. You can create groups of applications, groups of servers, and groups of users that you then manage as a group. So you can associate a group of apps with AD items and apply groups to hosts.

    (Everything in the management console is scriptable. Like all good products, their own management interface uses the same scripting interfaces that are available to you, so you know you can do everything.)

    Connect also has this very cool ‘Launch Simulation’ feature, which lets you see exactly what an end user would see when they log in, as well as all the business logic behind the scenes that created that view. So you say ‘Show me user bsmith’ and it will show all the apps they’d see as if they were connected; then you can click on one and it will show what server they would connect to, as well as all the specific settings for that session and where those policies came from.

    What’s missing? VM management. (And that’s a good thing!)

    Perhaps the biggest ‘feature’ of Connect is something that’s not included at all—VM, image, and provisioning management. That’s right, Ericom Connect v7 does not have features to manage the provisioning of images or to manage individual or pools of VMs. And this is a good thing!

    About a year ago I wrote Cloud platforms diminish Citrix XenDesktop/XenApp’s value. This is the opportunity for VMware. I guess I should have said it’s the opportunity for Ericom, because this is exactly what Ericom did.

    My basic premise was that in the old days when everything was physical, we needed our desktop virtualization product / server-based computing product to manage all that stuff. MetaFrame had to do server image management. Citrix Provisioning Services had to copy and boot up dozens of client instances off of a single master image.

    But in 2015 when everything is virtual, there are already plenty of other products that do this which you already have. Heck, you probably already have too many of these products and one of your problems is figuring out which ones you use! Seriously, when you need to clone a VM, is that something that you do in vSphere? Or your storage product? Or maybe your storage product via vSphere? Or SCVMM? Or…???

    So really if you already have all these ways to grow, shrink, copy, delete, clone, build, and deploy VMs and disk images, do you really want another way to do it from your desktop virtualization vendor? I would think not. (Especially since theirs would probably suck since they’re in the business of desktop virtualization—not image and VM management.) So in Ericom’s case, they spent their efforts on building this grid and their connection broker and their BI and their clients and their protocol enhancements while letting you use whatever else you want to actually manage your infrastructure.

    The bonus here is this means Ericom Connect v7 is good to go regardless of the type of infrastructure you have. Physical, virtual, hyper-converged, cloud, DaaS, AWS, RDSH, VDI… they don’t really care. Stand up a Connect Server, drop their host agent into the Windows environments you’re connecting to, and you’re done. (The host agents register themselves when they come up.)

    The bottom line

    I really, really like Ericom Connect v7. It looks and feels like a modern polished product. WAY more polished than XenDesktop / XenApp (though that’s not hard), and feels as good as VMware View 6 when you’re using it. Everything just feels pretty dead simple, and there are a lot of things around the reporting and analysis that Citrix doesn’t even have after 20 years.

    I understand that Ericom is not going to displace XenDesktop/XenApp or View, but after looking at Connect v7 it is safe to say this is something that has a place in the enterprise. Definitely worth piloting before making a Citrix or VMware desktop virtualization decision.”

    Published by:
  • The Virtual World

    VMware vSphere Version 6.0 Has Been Officially Released!

    The long-awaited and much-anticipated VMware vSphere Version 6.0 is out! Prepare for the new features, changes, and, of course, new certification courses!

    vSphere 6 Now Available

    Virtualization Review – By: Keith Ward – “The latest version of the VMware Inc. flagship product, vSphere 6, is available as of today. It’s the first major update of vSphere since version 5.5, which came out about a year and a half ago.

    At the time of the vSphere 6 announcement on Feb. 2, CEO Pat Gelsinger said it was the biggest release ever. It includes more than 650 new features, many of which revolve around increased scalability. Those upgrades will make vSphere more cloud-ready, which is a major push for VMware.

    Some of the most important updates include a doubling of the number of virtual machines (VMs) supported per cluster, from 4,000 to 8,000; a similar doubling of the number of hosts per cluster, from 32 to 64; and a tripling of the RAM per host, from 4TB to 12TB.

    vSphere 6 also upgrades technologies for storage, high availability and disaster recovery. One feature that’s received a huge amount of attention is Virtual Volumes, or VVOLs. VVOLs, writes Taneja Group Analyst Tom Fenton, ‘completely changes the way its hypervisor consumes storage; it radically changes how storage is presented, consumed and managed by the hypervisor.’ He believes it will revolutionize virtual storage.

    VVOLs will also enable a wide range of external storage arrays to become VM-aware, according to one blog post.

    Beyond storage, live migration of VMs gets a boost with long-distance vMotion. Live VM migration in vSphere 6 can be performed across distributed switches and vCenter Servers, and over distances ‘of up to 100ms RTT,’ according to a vSphere 6 FAQ. Because of that, Taneja Group’s Fenton writes, it would be possible to move a live VM across the entire United States.
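
    For a sense of what driving a migration programmatically looks like, here is a minimal, hedged sketch using pyVmomi (VMware’s open-source Python SDK for the vSphere API). The vCenter address, credentials, VM name, and target host are hypothetical, and a true long-distance or cross-vCenter move would also supply a datastore and a service locator for the destination vCenter, which this sketch omits:

        # Hedged sketch: a basic vMotion/relocate call via pyVmomi.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="secret", sslContext=ctx)
        content = si.RetrieveContent()

        def find(vimtype, name):
            """Locate a managed object by name anywhere in the inventory."""
            view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
            try:
                return next(obj for obj in view.view if obj.name == name)
            finally:
                view.Destroy()

        vm = find(vim.VirtualMachine, "app-vm-01")
        target = find(vim.HostSystem, "esxi-02.example.com")

        # Build a relocation spec; here only the destination host is set.
        spec = vim.vm.RelocateSpec(host=target)
        task = vm.RelocateVM_Task(spec=spec)
        print("Migration task started:", task.info.key)

        Disconnect(si)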

    Fault tolerance has been significantly expanded, as well, with support for workloads with up to four virtual CPUs (vCPUs) now. Previously, Fault Tolerance only supported a single vCPU. This greatly limited its use on vCenter, which requires a minimum of two vCPUs.”

    Published by:
  • The Virtual World

    VMware Sued for Linux License Issue

    I am sure VMware will respond to this, but it will be expensive to fight in court!

    VMware sued for failure to comply with Linux license

    ZDNet – By: Steven J. Vaughan-Nichols – “In 2007, top Linux contributor Christoph Hellwig accused VMware of using Linux as the basis for the VMware ESX bare-metal hypervisor, an essential part of VMware’s cloud offerings.

    Years went by, and the Software Freedom Conservancy, a non-profit organization that promotes open-source software, claims to have negotiated with VMware for the company to release the code of ESX and its successor, ESXi. That way, argued the Software Freedom Conservancy, these programs would legally comply with Linux’s GNU General Public License version 2 (GPLv2). VMware refused in 2014.

    Now, Hellwig and the Software Freedom Conservancy are suing VMware in the district court of Hamburg, Germany.

    The group explains that they see this as a ‘regretful but necessary next step in both Hellwig and Conservancy’s ongoing effort to convince VMware to comply properly with the terms of the GPLv2, the license of Linux and many other Open Source and Free Software included in VMware’s ESXi products.’

    What’s surprising about VMware’s stubbornness is that there’s never been much question that VMware had used Linux in ESX and ESXi. As Hellwig wrote in 2007, ‘VMware uses a badly hacked 2.4 kernel with a big binary blob hooked into it, giving a derived work of the Linux kernel that’s not legally redistributable.’

    On top of that, in 2011, the Conservancy said that VMware failed to provide or offer any source code for the version of BusyBox, a popular embedded Linux toolkit, included in ESXi. Historically, BusyBox’s developers have been aggressive about defending the GPLv2. During 2007, BusyBox successfully concluded the first US GPL-related lawsuit. The developers then followed up with victories against Verizon and other would-be GPLv2 violators.

    Despite the evidence of the code, the Conservancy stated, ‘VMware’s legal counsel finally informed Conservancy in 2014 that VMware had no intention of ceasing their distribution of proprietary-licensed works derived from Hellwig’s and other kernel developers’ copyrights, despite the terms of GPLv2.’ Therefore, the Conservancy felt it had ‘no recourse but to support Hellwig’s court action.’

    Besides the general violation of the license, the group continued, ‘Conservancy and Hellwig specifically assert that VMware has combined copyrighted Linux code, licensed under GPLv2, with their own proprietary code called ‘vmkernel’ and distributed the entire combined work without providing nor offering complete, corresponding source code for that combined work under terms of the GPLv2.’

    Both Hellwig and the Conservancy state in a FAQ on the lawsuit that ‘Simply put, Conservancy and Christoph fully exhausted every possible non-litigation strategy and tactic to convince VMware to do the right thing before filing this litigation.’

    Commenting generally on the issue of GPL enforcement, Bradley M. Kuhn, the Conservancy’s President and Distinguished Technologist, said in a statement that ‘The prevalence and sheer volume of GPL violations has increased by many orders of magnitude in the nearly two decades that I have worked on enforcement of the GPL. We must make a stand to show that individual developers and software freedom enthusiasts wish to uphold copyleft as a good strategy to achieve more access to source code and the right to modify, improve and share that source code. I ask that everyone support Conservancy in this action.’

    The Free Software Foundation (FSF), an organization that defends the legal rights of open-source software developers and users, also supports Hellwig. The FSF Executive Director, John Sullivan, said, ‘I know that they (the Conservancy) have been completely reasonable in their expectations with VMware and have taken all appropriate steps to address this failure before resorting to the courts. Their motivation is to stand up for the rights of computer users and developers worldwide, the very same rights VMware has enjoyed as a distributor of GPL-covered software. The point of the GPL is that nobody can claim those rights and then kick away the ladder to prevent others from also receiving them. We hope VMware will step up and do the right thing.’

    VMware’s Director of Corporate Public Relations, Michael Thacker, replied, ‘We believe the lawsuit is without merit. VMware embraces, participates in, and is committed to the open-source community. We believe we will prevail on all issues through the judicial process in Germany.'”

    Published by:
  • The Virtual World

    Docker Competitor for Windows Server

    A Docker competitor? Sounds interesting. I have always said that this was the future!

    Docker Competitor Puts Windows Server Into Containers

    Virtualization Review – By: Jeffrey Schwartz – “Containers are a hot technology, and the one currently burning brightest is Docker; it’s the default starting point for admins considering using containers in their enterprise. But as with any new tech with a lot of buzz, contenders spring up almost immediately. That’s happening in the Windows world.

    A little-known startup that offers data protection and SQL Server migration tools today released what it calls the first native container management platform for Windows Server, and claims it can move workloads between virtual machines (VMs) and cloud architectures. DH2i’s DX Enterprise encapsulates Windows Server application instances into containers, removing the association between the apps, data and the host operating systems connected to physical servers.

    The Fort Collins, Colo.-based company’s software is a lightweight 8.5 MB server installation that offers a native alternative to Linux-based Docker containers. At the same time, Microsoft and Docker are working on porting their containers to Windows, as announced last fall. In addition to its relationship with Microsoft, Docker has forged ties with all major infrastructure and cloud providers including Google, VMware and IBM. Docker and Microsoft are jointly developing a container technology that will work on the next version of Windows Server.

    In his TechDays briefing last week, Microsoft Distinguished Engineer Jeffrey Snover confirmed that the company will include support for Docker containers in the next Windows Server release, known as Windows vNext.

    DH2i president and CEO Don Boxley explained why he believes DX Enterprise is a better alternative to Docker, pointing to the fact that it’s purely Windows Server-based.

    ‘When you look at a Docker container and what they’re talking about with Windows containerization, those are services that they’re looking at, then putting some isolation kind of activities in the future,’ Boxley said. ‘It’s a really important point that Docker’s containers are for new, containerized applications. Yet there are still going to be a huge number of traditional applications running simultaneously. We’ll be able to put any of those application containers inside of our virtual host and have stop-start ordering or any coordination that needs to happen between the old type of applications and the new, and/or just be able to manage them in the exact same way. It forces them to be highly available and extends now to a containerized application.’

    The company’s containers, called ‘Vhosts,’ each have their own logical host name, associated IP addresses and portable native NTFS volumes. The Vhost’s metadata assigns container workload management, while directing the managed app to launch and run locally, according to the company. The Vhosts share a single Windows Server operating system instance and are stacked on either virtual or physical servers. This results in a more consolidated way of managing application workloads and enabling instance portability, Boxley explained.

    Unlike Docker, there are ‘no companion virtual machines running Linux, or anything like that at all,’ Boxley said. ‘It’s just a native Windows application; you load it onto your server and you can start containerizing things right away. And again, because of that universality of our container technology, we don’t care whether or not the server is physical, virtual or running in the cloud. As long as it’s running Windows Server OS, you’re good to go. You can containerize applications in Azure and in Rackspace and Amazon, and if the replication data pipe is right, you can move those workloads around transparently.’ At the same time, Boxley said it will work with Docker containers in the future.

    Boxley said a customer can also transparently move workloads between any VM platform, including VMware, Hyper-V and Xen. ‘It really doesn’t matter because we’re moving the applications, not the machine or the OS,’ he said. Through its management console, it automates resource issues, including contention among containers. The management component also provides alerts and ensures applications are meeting SLAs.

    Asked why it chose Windows Server to develop DX Enterprise, Boxley said he believes it will remain the dominant environment for virtual applications. ‘We don’t think — we know it’s going to grow,’ he said. IDC analyst Al Gillen said that’s partly true, though Linux servers will grow in physical environments. Though he hasn’t tested DX Enterprise, Gillen said the demo looked promising. ‘For customers that have an application that they have to move and they don’t have the ability to port it, this is actually a viable solution for them,’ Gillen said.

    Boxley said the solution is also a viable option for organizations looking to migrate applications from Windows Server 2003, which Microsoft will no longer support as of July 14, 2015, to a newer environment. The software is priced at $1,500 per server core (if running on a VM, it can be licensed via the underlying core), regardless of the number of CPUs. Support, including patches, costs $360 per core per year.

    Boxley said the company is self-funded, and started out as a Microsoft BizSpark partner.”

    Published by: