At Citrix we've obviously been thinking a lot about the cloud. Not just the cloud today, but the cloud for tomorrow and well into the future. When looking to the future of technology, it's helpful to examine existing technologies and see if patterns emerge - and whether lessons from today's ecosystem can be applied to tomorrow and beyond. To that end, we think there's no better model for the open cloud to study than Linux and its ecosystem. Not just the kernel, of course, though the kernel holds many fine lessons for any student of open development and community practices - but also the larger ecosystem of open source projects and vendors (notably distributions) that has formed around the kernel.
One thing we’ve learned from Linux vendors – there’s plenty of room in the market for open solutions. From community distributions like Debian and Fedora, to Red Hat, SUSE, and Ubuntu, to PostgreSQL and MySQL, or Apache HTTPD and Nginx – multiple solutions can and do co-exist, and even cooperate and compete simultaneously. There’s no reason the cloud need be any different.
Successful projects focus not only on the developer, but also on the customer. Linux solved real problems - lower-cost Unix, optimization for x86 hardware, faster "edge" servers - and Linus didn't just build it because it was "cool." Red Hat grew its business by building features derived directly from customer needs. At Citrix, we take a fair amount of grief for the complexity of our options and features, but every one of them is tied back to a customer. Equally, the Apache community has encouraged those customers not only to take part in the community, but to lead and influence it - a powerful driver of a project's success.
Vendors and admins have had to learn the hard way that manual software management - whether it's admins compiling from source or vendors supplying software that needs to be tended separately from the system's package management - does not scale. Running proprietary installers or configuring packages from source does not allow easy updates or quick deployments to multiple machines. Likewise, at cloud scale, best practices demand that admins embrace configuration management tools like Puppet or Chef to get the most out of their environments. Configuring templates and virtual machines manually locks developers into slow, error-prone processes that do not scale to environments that encompass two or more hypervisors and thousands of guests.
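The declarative, idempotent model behind tools like Puppet and Chef can be sketched in a few lines. The example below is a hypothetical toy, not any real tool's API: you describe the desired state once, converge each machine toward it, and re-running the same description is a safe no-op - which is what lets it scale to thousands of guests.

```python
# Toy illustration of declarative, idempotent configuration management.
# All names here are hypothetical; this is not Puppet's or Chef's API.

# Desired state, declared once for the whole fleet.
desired_state = {
    "httpd": "installed",
    "ntpd": "installed",
}

def apply_state(machine, desired):
    """Converge one machine's package set toward the desired state.

    `machine` is a dict modeling what is currently on the host.
    Returns the list of packages that actually changed.
    """
    changes = []
    for package, state in desired.items():
        if machine.get(package) != state:
            machine[package] = state  # in a real tool: invoke the package manager
            changes.append(package)
    return changes  # empty on a repeat run: the operation is idempotent

web01 = {"httpd": "absent"}
print(apply_state(web01, desired_state))  # first run converges: ['httpd', 'ntpd']
print(apply_state(web01, desired_state))  # second run is a no-op: []
```

The point of the pattern is that the same declaration can be applied to one machine or a thousand, and applying it twice is harmless - unlike a hand-run installer, where repeating the steps can corrupt state.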
Being first to market is no guarantee of long-term success, or even survival. Technology favorites can be abandoned with amazing swiftness when better technology emerges, or forked when political problems make a project untenable. XFree86 was forked and replaced by a stronger community behind X.Org that iterated more quickly and showed itself willing to work in a more community-friendly fashion. And the kernel community didn't just ditch proprietary version control when it moved away from BitKeeper - its replacement, Git, has since taken the open source community by storm.
It doesn't matter if you work for Red Hat, IBM, Citrix, SUSE, or the Picayune Hosting Company - a developer's standing in the community is based on their contributions and reputation alone. In well-run projects, like the Linux kernel or Debian or Postgres, it is individual developers who drive the work, and governance is designed to ensure that the health of the project comes before the interests of any given company. Which, incidentally, is one of the chief reasons we chose Apache. Apache provides a well-understood and well-tested governance model, well-understood licensing, and an umbrella that gives individual and corporate contributors the confidence that they will be on equal footing when participating in CloudStack development. Citrix employees have to earn their way in the community just as much as any other contributor - which is exactly how projects should be governed.
Another lesson we've observed from the last 21 years of Linux development is that work needs to be done in the open. When companies or individuals hold back their changes - either for competitive purposes or to get things "just right" before submitting to public scrutiny - the community is poorer for it. Oftentimes it means there's technical debt to be paid in merging code into the mainline projects. We've observed this time and time again, most recently with all the headaches the Linux kernel folks experienced with the various ARM trees. We believe that open source means more than dropping code at random intervals - the work needs to be done in the open as well, so that we can benefit from the contributions of the entire community rather than only those behind the corporate firewall.
Once upon a time, Linux was "exciting" in the sense that the kernel and distributions were constantly adding big new features that helped Linux become competitive with proprietary Unix and/or Microsoft Windows in the enterprise and consumer markets. While Linux still adds features at an amazing pace, sometime in the mid-2000s, Linux became boring. And that was great. It meant that Linux was mainstream, quietly doing its work in the background without too much hassle. Linux conquered the data center. It conquered the TOP500 list of supercomputers. Linux has become the core of the most-used smartphone operating system in the world. We aspire to have an open cloud that is just as boring - and just as necessary - as Linux.
You've heard the saying, "the perfect is the enemy of the good," and it especially applies to technology. We watched as projects like GNU/Hurd floundered and never quite delivered a viable operating system, while the Linux community continually shipped and got code into the hands of users and organizations that needed a robust product *now*. The Linux community's approach has allowed less elegant solutions - like ipchains, various schedulers, the original SysV-style init system, and more - to be phased out and replaced with better, more robust technology. At the same time, we've learned that you have to avoid saddling yourself with so much technical debt that it becomes impossible to iterate and improve.
It's also important to see where we are in this market. Many of the companies, projects, and individuals in this space act as if they're in a race to the finish, staking claims and celebrating victory. But we are just scratching the surface of where this technology is headed. 100 clouds? 1,000 clouds? 10,000 clouds? Only a small percentage of the total market has even started to understand the technology, the market, and what it means for them. On the Linux timeline, we might be no further along than when the first Linux distributions appeared in 1994. We have a long way to go in our journey to the cloud.
So what does this tell us about the future of the cloud and where we'll be in 2032? We know that there's a lot we don't know. Twenty years ago, we didn't imagine that Linus' baby would be all grown up and powering huge swaths of the Internet. We didn't foresee smartphones with more computing power than everything NASA used for its moon missions. It's hard to imagine what systems will look like in ten years, much less twenty.

Today, the open cloud runs on commodity x86 systems on top of open source operating systems and hypervisors. Tomorrow? We might see a lot more ARM in the data center, as is being developed by companies like Calxeda. The cloud may be used to manage high-density ARM machines with each core dedicated to its own bare-metal host, rather than stacking multiple guests on multi-core x86-64 systems. We do know, though, that the future is open, and that the path to tomorrow lies with the open cloud, not proprietary systems. Our customers and community have spoken loudly in favor of systems that they can not only manage easily, but also study and contribute to. Customers have learned over and over again that closed systems make for poor infrastructure. The future of the cloud in 20 years? It's totally open.