Cloud Foundry Blog


IBM WebSphere Liberty Buildpack Contributed to Cloud Foundry

IBM is contributing the IBM WebSphere Liberty Buildpack today as further proof of its commitment to make the Cloud Foundry open source project and community even stronger.

A guest blog by Rachel Reinitz, an IBM Distinguished Engineer in IBM Software Services

In late July, I wrote a guest blog here about our development of a preview IBM WebSphere Liberty buildpack. I’m delighted to announce we have contributed the WebSphere Liberty Buildpack to the Cloud Foundry community. It now has its own Cloud Foundry GitHub repo, ibm-websphere-liberty-buildpack.

Importance of Buildpacks

Buildpacks can provide a complete runtime environment for a specific class of applications. They are key to providing portability across clouds and contributing to an open cloud architecture. We want to see the community and capabilities of buildpacks expand. The WebSphere Liberty Buildpack is our first contribution to the community … more will be coming.

We are working with Pivotal on enhancements to Cloud Foundry for buildpack management. Some of the features we are excited about are: easily adding a buildpack onto an existing Cloud Foundry install, insertion of binaries directly into the buildpack cache, visibility and sequencing of buildpacks, and support for commercial buildpacks. Michael Fraenkel, lead developer on the Liberty buildpack, has submitted a related pull request.

So What’s in the Liberty Buildpack?

Ben Hale recently wrote a terrific blog on the Java buildpack. We followed the same design principles, starting from a fork of the Java buildpack code. The Liberty buildpack provides IBM’s WebSphere Liberty container for web and OSGi applications with Java EE Web Profile capabilities, supports popular frameworks like Spring, and includes IBM’s JRE. WebSphere Liberty enables rapid application development well suited to the cloud.

The Liberty buildpack supports multiple applications deployed into a single Liberty server. As part of the Liberty buildpack integration into Cloud Foundry, the buildpack ensures environment variables for binding services are exposed as configuration variables in the Liberty server.1

Why WebSphere Liberty is a Great Java Container for Cloud

Back in 2011, we took the power of WebSphere Application Server, redesigned the server to be small, lightweight, and composable, with simple configuration, and included the WebSphere Liberty Profile as part of our WebSphere Application Server V8.5 product. Our primary use cases were providing a simplified development environment and targeting cloud deployments.

The philosophy and design of Cloud Foundry is basically the same as WebSphere Liberty’s – simplify developers’ lives by requiring minimal configuration and only loading into the server what is needed for a running application. A modest web app running in the Liberty server starts in 2 seconds.

WebSphere Liberty provides runtime capabilities through individually configurable features for Java EE and OSGi technologies like servlet, JSF, JPA, local EJBs, JAX-RS, OSGi web bundles and more. You configure only those features you need in the Liberty server.xml file.2
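For illustration, a minimal server.xml that enables just the servlet and JSP features might look like the following sketch (feature names follow the Liberty 8.5 naming convention; treat the exact details as illustrative rather than authoritative):

```xml
<!-- Illustrative minimal server.xml: only the listed features are loaded -->
<server description="sample Liberty server">
    <featureManager>
        <feature>servlet-3.0</feature>
        <feature>jsp-2.2</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080"/>
</server>
```

Only the libraries backing those two features are loaded into the server; everything else stays out of the process.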

The Cloud Foundry buildpack architecture allows WebSphere Liberty applications (new or existing) to be run on any Cloud Foundry V2 based public, private, or even hybrid cloud.

You can learn more about WebSphere Liberty at our community site, WASdev, including an article on how WebSphere Liberty compares with Tomcat.

Options for Using the Buildpack

The Liberty buildpack is Apache 2.0 licensed as part of Cloud Foundry. The buildpack uses IBM WebSphere Liberty for its runtime, which has IBM commercial licenses for development or production use.

We want to make it very easy to install and use the Liberty buildpack, but we face two challenges. Because of the licensing, there are no public URLs for the Liberty and IBM JRE binaries that we can put into the buildpack. Also, Cloud Foundry today doesn’t make it easy to add a commercial buildpack, but I’m confident buildpack installation will improve rapidly. See the Liberty buildpack documentation/video for details on setup.

So there are currently three options for using the buildpack:

  1. Use cf push --buildpack.
  2. Add the Liberty buildpack as a default buildpack in your Cloud Foundry instance.
  3. Look for (and ask for) the Liberty buildpack to be provided as a default buildpack by the public Cloud Foundry you use. It isn’t available on any publicly hosted instance today, but that will change, so stay tuned.
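For option 1, the push command might look like the following sketch. The app name and buildpack URL here are hypothetical (adjust for your environment), and the sketch only echoes the command, since actually running it requires the cf CLI and a Cloud Foundry v2 target:

```shell
# Hypothetical app name and buildpack URL; adjust for your own environment.
# Echoed rather than executed, since a real push needs a Cloud Foundry target.
APP=my-liberty-app
BUILDPACK=https://github.com/cloudfoundry/ibm-websphere-liberty-buildpack.git
echo "cf push $APP --buildpack $BUILDPACK"
```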

Support for the buildpack is available via the WASdev forum or vcap-dev mailing list. Our buildpack developers are standing by to answer your questions. Contact me if you would like direct support.

Next Steps

One focus area for us is to work with clients on use of the Liberty buildpack for not only new but also existing applications. A key design point for WebSphere Liberty is to ease migration of WAS applications to cloud environments. Liberty supports the most widely used subset of Java EE features, but not all, so ease of migration will vary. Many stateless web applications will be able to run unmodified in the cloud. Applications that use features such as session state or that access local files may need to be modified so that they can be scaled horizontally by Cloud Foundry.

We are developing services which ease migration. Our first is an elastic cache service based on WebSphere eXtreme Scale that provides session management for Liberty applications without requiring application changes. We have been busy migrating some of our own applications and services to use the WebSphere Liberty buildpack in our BlueMix environment, and we will certainly share our experiences in a future blog post.

We have a set of features we are investigating to enhance the buildpack to further simplify Java developers’ lives, particularly what developers need to do to connect to and consume services. We welcome the community’s input as to which of these features are most important to you.

  1. Simplifying the setup of buildpacks – short and long term options.
  2. Improving the auto-configuration of Liberty when pushing web applications.
  3. Auto-reconfiguration of Java EE references to cloud resources.3
  4. Minimizing the size of the droplet by packaging only the subset of Liberty features used by each individual application.
  5. Adding capability in the Liberty Eclipse tools for remote development and testing of applications pushed to Cloud Foundry, without having to leave the toolset.
  6. Supporting IBM’s WebSphere Liberty Log Analysis Service (not yet available, but being demoed at Platform) to automatically redirect access logs, application logs, and Liberty messages to the Log Analysis Service.
  7. Supporting additional IBM, third-party, and open source services and frameworks in the buildpack, leveraging the existing Liberty community assets.

We are very excited to be contributing our first IBM project to Cloud Foundry. We are working with Pivotal and the community to make Cloud Foundry even stronger. I want to emphasize that we welcome pull requests on the buildpack. We will also be making our tracker project public soon.

Giving Credit Where It Is Due

Confession time, now that you have read this far – I am not a Liberty specialist. My role in IBM is to get clients started using IBM BlueMix, powered by Cloud Foundry, and drive product direction based on client feedback. Clients have been asking Pivotal and us for WebSphere Liberty support, so I led a team to make it happen. Now our Cloud Foundry community and Liberty development teams will be taking over, and I’ll be focusing on the next framework, buildpack, or service our clients need.

So I’m the scribe. Ben Smith, Michael Fraenkel, and Matt Sykes developed the buildpack. Getting the buildpack contributed and keeping me honest on the technical details in this blog was done by them and Ravi Gopalakrishan, Adam Gunther, Brian De Pradine, and Ian Robinson. Providing critical support were Chris Ferris, IBM’s Cloud Foundry community leader, and Jerry Cuomo, WebSphere CTO, who makes work like this happen. Also thanks to Ben Hale, Ryan Morgan, James Bayer, and James Watters at Pivotal for terrific collaboration.

Look for information coming out of Platform this week and come to the next one!

  1. The buildpack converts VCAP_SERVICES and VCAP_APPLICATION into configuration variables for the Liberty server. The variables end up in runtime-vars.xml and are therefore referenceable from a pushed server.xml, for example ${vcap_app_port}, the port on which the app server is listening (usually the same as ${port}).
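As a rough sketch, a generated runtime-vars.xml might look like this (the values, and any variable other than vcap_app_port, are hypothetical, not what the buildpack actually emits):

```xml
<!-- Sketch of a buildpack-generated runtime-vars.xml (values hypothetical) -->
<server>
    <variable name="port" value="62345"/>
    <variable name="vcap_app_port" value="62345"/>
</server>
```

A pushed server.xml can then reference ${vcap_app_port}, for example in its httpEndpoint element.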

  2. An example of dynamic composability: suppose you write a servlet and JSP web app, deploy it, and later wish to update the app to start using another Java EE library, say JAX-RS. You just add JAX-RS as one line in the server.xml file, and the JAX-RS libraries in the Liberty binary are loaded into memory.
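That one-line change can be sketched as follows (feature names follow the Liberty 8.5 naming convention and are illustrative):

```xml
<featureManager>
    <feature>servlet-3.0</feature>
    <feature>jsp-2.2</feature>
    <!-- the single added line: Liberty loads the JAX-RS libraries on update -->
    <feature>jaxrs-1.1</feature>
</featureManager>
```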

  3. Unless the user opts out, we’ll perform the acquisition and configuration of drivers needed to access services that have been bound to the application. For example, if an application is bound to a database supported by the buildpack, we will ensure that a DataSource for the service is bound into JNDI.
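For illustration only, the kind of configuration generated for a bound database might resemble the sketch below; every id, JNDI name, and property here is hypothetical, not what the buildpack actually emits:

```xml
<!-- Hypothetical auto-generated configuration for a bound database service -->
<library id="driverLib">
    <fileset dir="${server.config.dir}/lib" includes="*.jar"/>
</library>
<jdbcDriver id="dbDriver" libraryRef="driverLib"/>
<dataSource id="boundDb" jndiName="jdbc/boundDb" jdbcDriverRef="dbDriver">
    <properties serverName="db.example.com" portNumber="5432"
                databaseName="mydb" user="dbuser" password="secret"/>
</dataSource>
```

The application can then obtain the DataSource from JNDI without hard-coding service credentials.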


Combining Voice with Velocity thru the Cloud Foundry Community Advisory Board

Cloud Foundry is taking a unique approach to community that combines inclusiveness and scale with high velocity development. In this post Christopher Ferris, IBM Distinguished Engineer, shares some thoughts on how to bring this approach to life and invites the community to co-invent the Cloud Foundry Community Advisory Board.

A guest post by Christopher Ferris, an IBM Distinguished Engineer and CTO for Cloud Interoperability in IBM Software Group’s Standards and Open Source organization.

I’m really pleased with the steadily growing interest in Cloud Foundry over the past few months.

When IBM and Pivotal announced our collaboration around Cloud Foundry, including the Platform Conference and the formation of a Community Advisory Board (CAB), we envisaged the CAB as the voice of the Cloud Foundry community of developers, users and ISVs to: 1) channel feedback on a number of subjects relevant to the Cloud Foundry project and its community and 2) make recommendations that seek to improve the governance and processes of the Cloud Foundry open source project. The challenge is to distill the many voices in the community into coherent, actionable and reasonable recommendations that are designed to drive further improvement without disrupting the velocity of development of the platform.

Following the announcement, many members of the community reached out to us, expressing interest in participating in the CAB, wanting to know more about what it will do and how we intend it to function. So James Watters, Adrian Colyer and I wanted to share our thinking.


The CAB’s mission is to foster a healthy, vibrant, collaborative and innovative community and ecosystem around the Cloud Foundry platform and open source project.

This scope includes:

  • Feedback on the Cloud Foundry roadmap, along with feature advocacy.
  • Advice on the day-to-day operation of the Cloud Foundry project. For example: how to manage pull requests, select committers, track issues, manage CI, and interact with the community at large (through hangouts, IRC, email forums, etc.).
  • Feedback on the Cloud Foundry community website, and mechanisms for community website contributions.
  • Help in establishing agenda and format for future Platform conferences.
  • Guidance on the Cloud Foundry charter, including project scope and definition of any ‘cloud profiles’ (for example, Cloud Foundry Core).


We will bootstrap the CAB through a combination of invitation, community nominations and expressions of interest. We are looking for 8-10 initial members to help us refine the model in the first phase. IBM, Savvis, and Piston are already on board. If you are interested in joining, or have recommendations for who you’d like to see involved, please feel free to reach out to us. By the next Platform conference (in six months’ time), we aim to have a more formal process for the community to choose their own advocates.


The CAB will initially meet monthly, and then as frequently as the CAB feels necessary. CAB meetings may be conducted virtually. In addition, face-to-face meetings will happen at the semi-annual Platform conference with conference attendees invited to participate. The CAB may also solicit feedback from the community in-between Platform conferences using online Hangouts or web conferences.

We expect the CAB to operate with maximum transparency. Meeting minutes should be made public in a timely manner, ideally within a week. If a mailing list is established to facilitate communications amongst the CAB membership, that mailing list will be visible to the public.

With all that said, we are also hoping that one of the unconference breakouts at the Platform Conference can be dedicated to further discussion about the community processes and the role of the CAB with its initial members.

Speaking of the Platform Conference, it is really shaping up nicely! We’ve got some great talks planned, and we had more than 500 registrations when I last checked! It is amazing how much interest there is, and I am really looking forward to some great discussions about how we can grow and improve the Cloud Foundry community and platform.

About the Author: Chris currently works as an IBM Distinguished Engineer and CTO for Cloud Interoperability in IBM Software Group’s Standards and Open Source organization. He has been involved in the architecture, design, and engineering of distributed systems for most of his 33+ year career in IT and has been actively engaged in open standards and open source development since 1999. He currently provides technical leadership for IBM’s engagements in OpenStack and Cloud Foundry, as well as for IBM’s participation in open standards activities relevant to Cloud. He enjoys tennis, both as a spectator and as an avid club player.


A Taste of Platform: 5 Sessions from a Thriving Community

Starting Sunday, September 8th, Platform: the Cloud Foundry Conference welcomes a worldwide group of over 400 dedicated developers and operators to Santa Clara for the first conference around the open source Platform as a Service (PaaS). The conference features leaders, contributors, and users who will cover tech topics, provide roadmaps, and share case studies in an environment designed for interaction and discussion. There is also a dedicated unconference where attendees discuss topics they voted on.

Besides hosts GE, IBM, and Pivotal, companies like VMware, AppFog/Savvis, NTT, Tier 3, ThingWorx, Wipro, Anchora, Rakuten, Cloud Elements, Intel, AnyNines, and others will also be speaking.

Here is a list of five sessions from PlatformCF to give you a taste of what the conference will cover and what you will learn from other customers, users, service providers, developers, and partners:

1. From Zero to Factory: Revolutionizing ‘Time-to-Value’ Thru the Industrialization of Enterprise IT

Who: Jonathan Murray
Role: EVP and Chief Technology Officer
Company: Warner Music Group
Bio: Mr. Murray is responsible for global technology strategy, end-to-end IT Service delivery and the design and delivery of a next generation—cloud based—service platform to meet the needs of Warner’s employees, artists, fans, and business partners.
Abstract: ‘Time-to-Value’ is now the career defining metric for enterprise CIOs. The velocity of IT capability delivery must match the rapid pace at which modern business cycles and consumer demand patterns change. Jonathan Murray will discuss a radical approach to the industrialization of enterprise IT and the practical aspects of implementing an enterprise software factory leveraging new cloud based Platform as a Service technologies.

2. NTT’s Cloudn PaaS: Powered by Cloud Foundry

Who: Yudai Iwasaki
Role: Lead Engineer
Company: NTT
Bio: Yudai Iwasaki is a research engineer at NTT Laboratory Software Innovation Center. He is a core member of the development team of Cloudn PaaS, which is a public PaaS solution provided by NTT Communications. Mr. Iwasaki is leading the development of their Cloud Foundry deployment. He is also the leader of community relationships at NTT and is a member of the Japan Cloud Foundry Group, in which capacity he gives lectures on the structure of Cloud Foundry for Japanese Cloud Foundry developers. He is also a developer of Nise BOSH, a lightweight BOSH emulator.
Abstract: NTT has been running a commercial public PaaS, named Cloudn PaaS, based on Cloud Foundry since April 2013. This session will show how NTT developed their service using Cloud Foundry and the benefits from adopting it as the base system. This session will also show how Cloud Foundry’s extensible design helps NTT connect other functionality, such as a managed DBMS service and authentication system, and how we manage our Cloud Foundry cluster.

3. Java Buildpack: Designed for Extension

Who: Ben Hale
Role: Cloud Foundry Java Experience Engineer
Company: Pivotal
Bio: Ben Hale leads Pivotal’s efforts to constantly improve the Java experience on Cloud Foundry. Recently he has been working on the Cloud Foundry Java Buildpack with an eye to making it the best place to run Java applications, in the Cloud or otherwise. Prior to working on Cloud Foundry, Ben worked on large-scale middleware management tools, tc Server, dm Server (OSGi), and Spring.
Abstract: Whether it is the automatic choice of containers or the zero-touch configuration and usage of services, the Java Buildpack aims to remove the hassle of running Java applications on Cloud Foundry. There are times, however, when an application needs a feature that the buildpack does not yet provide. This talk will start by showing how to use and configure the Java buildpack and finish by showing how to extend the buildpack to ensure that Cloud Foundry is the best place to run your application.

4. Cloud Foundry at Rakuten—the Largest E-commerce Company in Japan

Who: Yohei Sasaki
Role: Software Engineer
Company: Rakuten, Inc.
Bio: Rakuten Group is one of the world’s leading Internet service companies, providing a variety of consumer and business-focused services including e-commerce, eBooks & eReading, travel, banking, and so on. Mr. Sasaki has worked on the development of e-commerce platforms for the company and is now a technical lead of DevOps for Rakuten’s Platform as a Service. He is also a member of the Cloud Foundry Japan community.
Abstract: Rakuten has managed Cloud Foundry v1 clusters for over a year in production systems. In this talk, Mr. Sasaki will share how Rakuten has improved Cloud Foundry within its private infrastructure and introduce practices for managing a platform as a service (PaaS) in a large-scale internet company.

5. IBM Operational Metrics

Who: Daniel Krook
Role: Senior Certified IT Specialist on the Advanced Cloud Solutions Team
Company: IBM
Bio: Daniel has been a developer and software engineer for almost 15 years and with IBM for over a decade. He holds numerous patents and has authored and contributed to many publications. Currently, he manages the OpenStack environment that runs IBM’s internal Cloud Foundry deployments.
Abstract: This talk covers the key operational metrics IBM monitors to prevent problems and keep Cloud Foundry running smoothly. Sharing our notes with the community, we hope to validate our approach, learn about other data points, and spark efforts to include additional monitoring hooks in future releases.

Visit the Platform conference website for additional information on the agenda, speakers, topics, etc.


Using OpenStack to Take Cloud Foundry into High Gear

In this guest post, Piston Cloud CTO and co-founder Josh McKenty shares why Piston’s OpenStack community infrastructure is a catalyst for growing and expanding the Cloud Foundry community.

By now, most of you have noticed Piston’s growing involvement with Cloud Foundry, and our maturing partnership with Pivotal. It started with our collaboration to support running Cloud Foundry on top of OpenStack, and has matured through a number of joint customers and, more recently, a deepening engagement in Cloud Foundry development itself. So today’s announcement that we have agreed to provide community infrastructure for the CF development ecosystem should come as no surprise.

Last week, we rapidly deployed a Piston OpenStack environment for the Cloud Foundry community. This IaaS environment will allow us to provide the key services that every emerging open source ecosystem needs: continuous integration, code review, and a running reference environment.

We’ve started with a relatively small cluster – about 120 vCPUs, 320GB of RAM and 8TB of highly reliable shared storage. (One of the key features in Piston OpenStack is the fact that we can scale this up later without any service interruptions). We have an ambitious but simple goal – to keep up with and continue to support the growing Cloud Foundry ecosystem.

Our engagement with Pivotal on Cloud Foundry arises from two factors. First, our customers have been asking for tighter integration between Cloud Foundry and OpenStack. But perhaps even more exciting is the opportunity to help Cloud Foundry take the vision of open source governance that we’ve been engaged in for years with OpenStack, and crank it into 5th gear.

Piston is one of the first OpenStack companies, and we’ve been big advocates of the OpenStack Foundation (which we helped to establish) and its community governance process. As Andy Shafer pointed out, there are many ways to organize any developer ecosystem, from the benevolent-dictator model popular in Linux, through the first-among-equals model of the Apache foundation, to the “motivated stewardship” approach that Cloud Foundry has taken. There really is no single right answer. One thing we can wholeheartedly agree upon is that what matters is the alignment between the culture of the community and the process it uses to organize itself.

With OpenStack, we have proven that we can scale a community. With Cloud Foundry, we’re now focused on seeing if we can corner at 160MPH!

The idea of “cloud computing” – from IaaS through PaaS – is really just about providing the computing resources to keep up with the fast-paced DevOps and Agile lifecycle. Both Pivotal and Piston team members were early pioneers in Agile. At NASA, my team (the NASA Nebula project, precursor to OpenStack) was one of the first agile teams in the Agency, possibly in the federal government. And Pivotal has famously helped companies from Google-size to the proverbial “two folks in a garage” embrace and excel with agile methodologies.

Pivotal are the right partner for this adventure. They’re committed to open source, to agile, and to achieving ridiculous velocity. They have more full time engineers dedicated solely to shepherding pull requests than many open source projects have in their entirety. But most importantly, they’re committed to the evolution of stewardship – to matching the community processes to its emerging culture. And this starts with creating an open community infrastructure.

We’re extremely proud to be providing infrastructure to the emerging Cloud Foundry community, to power its own evolution. If you’ve got ideas – for Cloud Foundry, or the CF-infrastructure – then swing by the Piston booth at the Cloud Foundry Platform conference, September 8-9 2013 in Santa Clara, CA. Or give us a shout on twitter.

About the Author: Prior to co-founding Piston Cloud Computing, Joshua McKenty was the Architect of NASA’s Nebula Cloud Computing Platform and the OpenStack compute components. As a board member of the OpenStack Foundation, Joshua plays an instrumental role in the OpenStack community. He led the development of the Cloud Foundry CPI for OpenStack.


IBM WebSphere Liberty Buildpack on Cloud Foundry

IBM has just announced it is joining the Cloud Foundry project and making it a component of its open cloud architecture. Pivotal and IBM have jointly announced a series of actions to further engage the community in Cloud Foundry.

A guest blog by Rachel Reinitz, an IBM Distinguished Engineer in IBM Software Services

As one of the IBMers who have been engaging with Pivotal, I’ve really enjoyed the collaboration and certainly learned a lot. So, I’d like to tell you about the IBM/Pivotal collaboration around developing an IBM Java and Liberty buildpack.

Buildpacks offer Great Potential

A key part of the Cloud Foundry architecture is the use of buildpacks to specify and compose runtime environments for a class of applications. Cloud Foundry has adopted buildpacks from Heroku. We want to collaborate with Pivotal, Heroku, and the Cloud Foundry community on extending the buildpack specification, both to ensure portability of buildpacks across the PaaS offerings that support them and to add more features.

I like how the buildpack model addresses the complexity of runtime composition for modern applications. There is elegance and simplicity in having the buildpack provider collect and configure the runtime executable as a logical unit for executing a class of applications. I think buildpacks will resonate with enterprises setting up their own PaaS environments as the means of controlling their environments, in addition to the benefits of easy deployment for developers. What is also great about buildpacks is their extensibility, which we discovered firsthand. You may think of buildpacks as being language runtimes, but they can be used more broadly. For example, we envision buildpacks with specialized frameworks for mobile or business rules. These specialized buildpacks will include the necessary runtimes; for example, they would be bundled with IBM Java and the IBM WebSphere Application Server Liberty Core (IBM Liberty for short).

New Preview Buildpack for IBM Java and Liberty

We have developed a new buildpack that includes IBM Java and the WebSphere Application Server Liberty Core. From IBM, we have had Ben Smith and Michael Fraenkel working with the Pivotal Java buildpack team: Ben Hale (yes, the multiple Bens get confusing), Glyn Normington, and Ryan Morgan. It has been a terrific, enjoyable collaboration. Ben Smith sat with Ben Hale and tested out the extensibility instructions for the latest, updated Java buildpack. It was a win-win, as the recently updated documentation was improved and the initial IBM Java/Liberty buildpack was developed quickly.

A preview of the Liberty buildpack is available for POCs in private deployments of Cloud Foundry. We’re also working to make the preview Liberty buildpack more broadly available for developers as well as enhancing the buildpack, so stay tuned.

One of the environments where we have made the Liberty buildpack preview available is IBM’s implementation of our open cloud architecture (which is built on Cloud Foundry and OpenStack), something we call Project BlueMix. We are starting POCs with clients on Project BlueMix, where they can explore the preview Liberty buildpack and new services based on IBM’s software capabilities.

My role in Project BlueMix is to lead client POCs and drive definition of our new services, buildpacks, and contributions to Cloud Foundry based on those client experiences. I’m also the focal point for the WebSphere team working with Pivotal and frequently spend time at the very nice Pivotal SF office. You can learn more and get started with us at or email me.

Why IBM Java and Liberty?

So why would a developer want to use IBM Java or the WebSphere Liberty container? IBM Java has lots of improvements over other JREs, but the main advantage it has in a Cloud Foundry context is that it is really, really fast. That may not matter for some apps, but for others you will want the speed.

WebSphere Liberty is built to be a lightweight, modular, dynamic app server, particularly for cloud applications. Liberty has composable features for Java EE Web Profile and OSGi applications along with a simple configuration to provision only those required by the deployed application. We can minimize what is loaded to just those features needed by an application. Liberty has a simple XML configuration model and has rapid start times (well suited for cloud).

Up Next

Beyond making the IBM Java/Liberty buildpack more widely available and iteratively adding more features based on client POCs, we want to collaborate with the community on expanding support for buildpacks for Java and beyond. Topics we are interested in include ease of extending a buildpack, how best to handle versions and updates, dynamic service bindings, configuring default buildpacks for organizations and spaces, improved debugging and logging, and support for commercialization of buildpacks in public cloud environments. I look forward to lively buildpack discussions as one of the topic areas at the Platform Cloud Foundry conference, Sept. 8-9.

About the Author

Rachel Reinitz is an IBM Distinguished Engineer in IBM Software Services. She works with clients on adoption of new technologies and is currently focused on applications and services on PaaS and adoption of API Management. Her email is and you can follow her @rreinitz.


NTT Contributes Nise BOSH, a Tool to Speed Up BOSH Development

NTT, the world’s leading telecom, has been active in fostering the Cloud Foundry developer and user community in Japan. We’re excited about NTT Lab’s latest open source community contribution to Cloud Foundry – Nise BOSH, a lightweight BOSH emulator for locally compiling, releasing and testing services. BOSH is an open source tool chain for release engineering, deployment and lifecycle management of large-scale distributed applications and services, such as Cloud Foundry. Nise BOSH allows developers and cloud operators who use BOSH to speed up the development feedback cycle while saving effort and money. In this post, we’ll explain how all this works and why it’s useful.

Normal BOSH Deploy Cycle

When you do a bosh deploy of a large scale distributed service such as Cloud Foundry to a cluster, BOSH does a number of things on your behalf:

  1. Prepares deployment
  2. Compiles packages – Calculates all packages and their dependencies that need to be compiled. It then begins compiling the packages and storing their output in the blobstore. The number of workers specified in the deployment configuration determines how many VMs can be created at once for compiling.
  3. Prepares DNS
  4. Creates bound missing VMs – Creates new VMs, deletes extra VMs.
  5. Binds instance VMs
  6. Prepares configuration
  7. Updates/deletes jobs – Deletes unneeded instances, creates needed instances, updates existing instances if they are not already updated. This is the step where things get pushed live.

In step #2, BOSH spins up a number of worker VMs to compile your code packages, then installs your compiled code into one or more VMs created from stemcells. Normally all of these VMs are running somewhere on an IaaS provider such as VMware vSphere or AWS EC2.


Using Nise BOSH to Speed Up the Feedback Cycle

This is great in production, but when iterating on a BOSH Package, spinning up multiple VMs for compilation and deployment makes for a slow feedback cycle and a lot of ssh’ing into servers. It can also be costly, as you may have to pay for development resources on an IaaS, e.g. on AWS. Nise BOSH is a great help when developing a BOSH Package. A Package is a collection of source code along with a script containing instructions on how to compile it to binary format and install it, with optional dependencies on other prerequisite packages. Without Internet access or an IaaS, Nise BOSH will compile the packages necessary to run a job and install them on the box that you’re on, saving you the time, expense, and complexity of using remote resources on AWS or another IaaS. Let’s see how this works.

First, start on a machine that looks like your target Stemcell. A Stemcell is a VM template with an embedded BOSH Agent. The Stemcell used for Cloud Foundry is a standard Ubuntu distribution. For this you can use Vagrant, a handy tool to create easily reproducible environments on your local laptop. Install Vagrant and then in a terminal:

$ vagrant init lucid64 
$ vagrant up
$ vagrant ssh

You should now be in an Ubuntu lucid64 environment on your local machine. Now install Nise BOSH –

$ git clone
$ cd nise_bosh
$ bundle install

Next you need to get a BOSH Cloud Foundry release – a BOSH release is a collection of source code, configuration files and startup scripts used to run services, along with a version number that uniquely identifies the components.

$ git clone
$ cd cf-release
$ git submodule sync
$ git submodule update --init --recursive

You can now modify the Cloud Foundry open source code in cf-release/src as your project requires. Once you are ready to test your changes, you need to create the release, but first install the BOSH CLI –

$ gem install bosh_cli
$ bosh create release

Before you can deploy your release you’ll need a BOSH manifest file we’ll call deploy.conf. An example of the contents for a Nise BOSH manifest can be found in the Nise BOSH doc.
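For orientation, a Nise BOSH manifest follows the shape of a regular BOSH deployment manifest. The fragment below is only an illustrative sketch – every name and value is a placeholder, and the authoritative example lives in the Nise BOSH doc:

```yaml
# Illustrative sketch of a deploy.conf manifest. All values below are
# placeholders; consult the Nise BOSH doc for the authoritative example.
---
name: cf-local
networks:
  - name: default
    subnets:
      - static: [192.168.1.10]
jobs:
  - name: dea_next
    instances: 1
properties:
  domain: example.com
```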

Now run Nise BOSH from the nise_bosh directory to launch dea_next; it will compile and install the packages the job needs locally. Then run run-job start to start the jobs in /var/vcap/bosh/etc/monitrc on your box.

$ sudo PATH=$PATH bundle exec ./bin/nise-bosh ~/cf-release ~/deploy.conf dea_next
$ ./bin/run-job start

Nise BOSH is all set up to help you play with dea_next all by itself, and eventually it may enable you to develop and run Cloud Foundry right on your local machine. We see community contributions like NTT’s Nise BOSH as proof of the power of an Open PaaS ecosystem to advance the project. Next time you’re working on a BOSH release, give it a try.

–Matthew Kocher and Vikram Rana, Cloud Foundry Team


Use of Hybrid PaaS Now and In the Future

From its beginning, Cloud Foundry has been committed to providing developers and enterprises choice of deployment options spanning public and private clouds. In this post, Jonas Partner, CEO of CloudCredo and OpenCredo, shares how their enterprise customers are using PaaS to deploy applications that span public and private clouds to increase availability, add extra capacity and prevent lock-in.

As one of the co-founders of OpenCredo I have been actively working with Cloud Foundry since its early days. We were one of the first companies to launch a production application on Cloud Foundry with the help of our friends at Carrenza. More recently we have established CloudCredo to help both enterprise customers wishing to host Cloud Foundry on their own infrastructure and cloud service providers wishing to offer Platform as a Service. We helped Ospero to bring a Cloud Foundry-based PaaS running on vCloud Director to market to better serve their customers.

With the growing availability of public Cloud Foundry instances, we are seeing an increase in interest from enterprise customers in hybrid cloud as a way to extract maximum value from their existing infrastructure while enabling capacity on demand through expansion into public cloud. In this blog we will discuss why hybrid PaaS with Cloud Foundry is gaining interest, some of the ways people are using it today and how we hope to be using it in the future.

Why Cloud Foundry for Hybrid PaaS

The availability of a PaaS that seamlessly spans in-house data centers and various public cloud offerings makes hybrid cloud a compelling commercial proposition. For example, it serves to drive competition among public cloud providers. Extra capacity can be bought from the most cost competitive provider, or spread across a number of providers, as determined by the best commercial fit for the need. To date the majority of public PaaS offerings have been closed source and can’t be run within the enterprise data center. Combine this form of lock-in with some well-publicized outages and most large enterprises simply aren’t willing to consider betting everything on a single cloud provider. We see the open, extensible nature of Cloud Foundry and the increasing number of companies offering hosted Cloud Foundry instances as key to making enterprises comfortable with PaaS.

For us the number of companies offering or intending to offer a Cloud Foundry instance is reaching a critical mass, giving enterprises a competitive set of choices of where to run their applications. The Cloud Foundry Core initiative also means that customers can rely on public Cloud Foundry Core compatible instances to meet certain minimum requirements. Taken alongside improved support for running Cloud Foundry on public IaaS such as EC2, enterprises now have three real options for deploying cloud applications: 1) on a Cloud Foundry instance running in house on existing infrastructure, 2) on a Cloud Foundry instance running on large scale public IaaS such as EC2 or 3) on one of the increasing number of cloud providers offering Cloud Foundry as a service.

Being able to pick a company offering a PaaS hosted within national boundaries is already broadening the applicability of PaaS to regulated industries that have requirements on data sovereignty and how data is managed. There are still many other considerations around technology stacks and non-functional requirements that determine whether Cloud Foundry is suitable for a particular workload.

Simple Hybrid Cloud Use Cases

We are seeing significant interest in hybrid PaaS as a way of adding capacity in peak periods and increasing availability. Currently this only works out of the box for simpler cases where 1) there is no need to massively scale the persistence technology and 2) the application’s database does not need to be shared between services on either side of the public private cloud divide. Where enterprises employ SOA it can also be challenging to find a subset of the services that can be moved out to the public cloud. While services can span public and private cloud, this adds complexity, potential performance challenges and cost where data stored in the public cloud is separately billable.

The primary factor that determines how simple an application is for the purposes of a hybrid cloud scenario is how coupled it is to the rest of the enterprise. This coupling can take the form of persisted data, messaging and application services. The simplest case is an application that has none of these dependencies, though with the growing adoption of SOA such applications are increasingly rare. When considering the potential use of hybrid cloud, the cost benefit versus the complexity due to connectedness to the rest of the enterprise will be one of the key trade-offs.

cloud credo fig1

Unless you happen to have a large number of island-like applications, batch processing applications often provide the most fertile ground in the early days of adopting a hybrid cloud deployment strategy. Batch applications typically combine minimal data connectivity to the rest of the enterprise, often running on a data snapshot, with a clear cost benefit from scaling up just for the batch run.

Scalable Database Services

The open source Cloud Foundry project provides a number of open data technologies that can be configured to “scale up” as single instance services out of the box. We see incorporation of scale-out, multi-instance database services as key to widening the use cases that can be addressed by Cloud Foundry and are currently working to address this opportunity for our customers.

At CloudCredo we recently open sourced a simple Cassandra service integration for Cloud Foundry on GitHub. The initial implementation was targeted mainly at development and test usage since it was single instance and therefore didn’t provide any way to make the database scale out or highly available. We are now extending that out to a multi-node cluster and we intend that other services such as MongoDB will also follow.

Scalable Databases as a Service can be plugged into Cloud Foundry very easily by leveraging the Service Broker. This pattern allows the cluster to be managed from outside of Cloud Foundry while the ‘external’ cluster is exposed to clients as a running Cloud Foundry service. The benefits of this model include Cloud Foundry features such as central configuration management and automatic service binding from applications.
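From the application side, a bound service – whether built in or exposed through the Service Broker – shows up in the VCAP_SERVICES environment variable as JSON credentials. A minimal sketch of pulling a host out of that variable in a start script; the service label and credential fields here are assumptions for illustration, a real payload carries more fields, and a proper JSON parser is preferable to sed:

```shell
# Illustrative VCAP_SERVICES payload. The service label ("cassandra-1.0")
# and the credential fields are assumptions; real payloads are richer.
VCAP_SERVICES='{"cassandra-1.0":[{"name":"my-cassandra","credentials":{"host":"10.0.0.5","port":9160}}]}'

# Extract the host with sed -- fine for a sketch, but use a real JSON
# parser in production code.
host=$(printf '%s' "$VCAP_SERVICES" | sed -n 's/.*"host":"\([^"]*\)".*/\1/p')
echo "$host"
```

An application start script can read its database endpoint this way instead of hard-coding it, which is what makes the externally managed cluster look like any other bound service.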

cloud credo fig2

While horizontal scaling and clustering of open database services opens up new potential, what really excites us is the potential to use the multi-data center capabilities of persistence technologies in conjunction with hybrid PaaS. Cassandra and other open data technologies such as Riak make it possible to have a logical database which spans multiple data centers and therefore potentially both public and private cloud.

cloud credo fig3

A PaaS which integrates multi-data center databases that are easy to deploy and provide data synchronization would make solving problems such as disaster recovery and transparent scaling of both database and application code much easier. Disaster recovery currently relies on expensive low-latency systems with tight recovery point objectives, requiring manual, offsite backups to mirror production. A multi-node, multi-data center data service would have the potential to automatically mirror data across data centers as part of the online transaction processing of the application. Having effectively the same database available within both public and private PaaS would also simplify the process of scaling out.


The growing popularity of Cloud Foundry and the increasing number of ways to consume it are making hybrid PaaS an appealing reality for many enterprises. Cloud Foundry provides the Service Broker to facilitate integration of new, clustered open data technologies and legacy clustered databases, easing enterprise adoption in cases where the database services that come as part of the platform are not sufficient. The combination of reliable open source database technologies and the increasing availability of Cloud Foundry is an exciting option for those trying to use PaaS within the enterprise. Being able to solve these sorts of traditionally expensive problems with a PaaS will make PaaS even more compelling for large enterprises.

About the Author: Jonas Partner is CEO of OpenCredo, a software consultancy and delivery firm with offices in the United Kingdom and North America. Recently, Jonas co-founded CloudCredo, a company focused on helping businesses get Cloud Foundry up and running on the infrastructure of their choice. Before establishing OpenCredo in 2009, Jonas was a consultant with SpringSource where he contributed to the Spring Integration project. Jonas maintains interests in Machine Learning and the scalability of highly concurrent enterprise systems.


Anchora Brings Cloud Foundry Core Instance to China

A fast growing mobile apps market alongside the increasing need for a unified enterprise private cloud platform requires a new generation of open PaaS solutions in China.

In this guest blog, Wei-Min Lu, founder and CEO of Anchora, explains why Anchora selected Cloud Foundry as the basis for its MoPaaS platform and why it chose to join Cloud Foundry Core.

Test MoPaaS for Cloud Foundry Core Compatibility here.

Today we are excited to join Cloud Foundry Core as proof of our commitment to cloud application portability across a choice of public and private clouds. Here in China, there is growing demand for an open PaaS that enables developers to build mobile applications and enterprises to build private or dedicated clouds. We built MoPaaS, a cloud application engine based on the Cloud Foundry service platform. Cloud Foundry gives us the flexibility to offer both a public cloud platform with extensions for mobile and data applications and a private/dedicated PaaS solution that integrates with enterprises’ existing infrastructure. We also provide services within Cloud Foundry applications allowing enterprises to take advantage of big data infrastructure for use cases like cloud storage and synchronization, analytics and search.

The Need for Open PaaS in China

It is projected that PaaS will grow at 30% annually over the next few years. Higher growth is expected to occur in China due to 1) a rapidly expanding mobile application market requiring an agile back-end services platform and 2) the need for cloud application portability and the rise of applications taking advantage of big data infrastructure in the enterprise.

China is now the world’s biggest smartphone market with 390 million units projected to be sold this year – more than a third of the smartphones to be sold globally in 2013. Businesses of all sizes are looking to take advantage of this trend, increasing the number of mobile apps by 35% annually. There is a growing need for an agile platform with additional services required for mobile applications, such as push notification, geospatial data storage and short message services. Due to the limited resources on the client device, mobile applications require a platform that is able to process large quantities of data in the cloud. A new generation of Open PaaS is needed to handle these additional mobile-specific requirements.

With more than 70% of companies implementing PaaS over the next two years, we see emerging needs for PaaS solutions in the China enterprise IT market to consolidate existing applications and infrastructure while building an ecosystem of new enterprise applications in a consistent and unified manner. Enterprise application portability is increasingly becoming a critical need – businesses expect to be able to move their applications for optimal service, regulatory compliance, and data privacy requirements. As enterprises are realizing the impact and starting to build apps to take advantage of big data, there is an urgent need for more sophisticated platform services with big data infrastructure. This includes scalable services for log analysis, cloud storage and synchronization, information clustering, analytics, and customized search.

Building a Public PaaS Using Cloud Foundry

We built MoPaaS to serve developers’ needs to create applications and services without worrying about backend IT tasks. It helps them simplify and automate application lifecycle management, including development, deployment, and operation, significantly reducing IT infrastructure expenditures and devops costs and time. MoPaaS is built on an intelligent cloud service platform infrastructure that consists of two integrated parts:

  • Service platform: extends the Cloud Foundry service platform to provide better support for mobile applications, platform monitoring and management, and service extensibility.
  • Data platform: based on innovative information chain management (ICM) technology. It offers a simple interface for developers to support data driven applications.


Fig 1. MoPaaS Cloud Application Engine Architecture

MoPaaS provides necessary devops automation, masks complexities from developers and enhances usability by offering additional functions:

  • UI and CLI: a UI providing ease of use for developers of all levels, and a CLI providing more flexibility and control for experienced developers. Developers can use the MoPaaS UI for devops tasks like application deployment and resource allocation.
  • MDSS (mass data storage service): a high performance distributed storage system that provides cloud storage for user data and application source code management.
  • Application monitoring and management: a streamlined visual tool and web console for monitoring and managing applications, services, and environment variables.
  • MoPaaS Services: Mobile specific services, such as notifications, short messages, and locations etc. Data services, such as scalable services for log analysis, cloud storage and synchronization, information clustering, and customized search.

Building an Enterprise PaaS Solution Using Cloud Foundry

MoPaaS also provides an intelligent IaaS-PaaS solution for enterprises and service providers to build dedicated clouds that boost service agility and reduce IT expenditures and devops costs while increasing the portability, scalability and reliability of enterprise applications. This solution offers the granular controls and configuration flexibility provided by IaaS together with the devops automation and ease of app creation provided by PaaS. The MoPaaS architecture strictly separates PaaS from IaaS services.


Fig 2. MoPaaS Intelligent Cloud Platform

In addition to the services offered by Cloud Foundry, MoPaaS offers a comprehensive array of data services as native Cloud Foundry services and integrates various legacy services.


Fig 3. MoPaaS Services

MoPaaS effectively addresses a wide range of the enterprise cloud platform requirements, including:

  • Application consolidation: there is a need to build and run an ecosystem of enterprise applications on a cloud-based platform in a consistent and unified manner. However, many enterprise applications have been around for quite some time and were not designed for the cloud. Among the key requirements, legacy services used by those applications need to be integrated into the cloud platform or upgraded to new services of the cloud platform. MoPaaS addresses these consolidation and migration issues effectively: the platform not only supports the basic services, but also adds new data services to meet enterprise big data requirements using the Cloud Foundry service gateway, and it seamlessly integrates legacy services used by existing applications, such as Oracle Database, Microsoft SQL Server, and IBM DB2, using the Cloud Foundry service broker (Figure 3).

  • Data platform: Big data has increasingly become top of mind across organizations, from the data center to the boardroom. Companies are seeking systems and tools capable of tackling the challenges of massive data growth. Integrated with Hadoop-based Information Chain Management (ICM) data services, MoPaaS enables organizations to deploy a central platform to solve end-to-end data problems that involve any combination of information ingestion, processing, storage, exploration, analytics, and retrieval. With MoPaaS enterprise developers can easily integrate sophisticated big data services with their applications as native Cloud Foundry services. These include log analysis, cloud storage and synchronization, information clustering, and customized search. In addition, with ICM data services, MoPaaS provides new capability for monitoring and managing applications on the cloud platform.

  • Application portability: The MoPaaS architecture strictly separates PaaS from IaaS services. MoPaaS furnishes applications with an infrastructure-agnostic execution method, standardizes the way services such as databases, messaging and queuing, runtime, and management are exposed, and standardizes service credentials across runtimes. Thus MoPaaS keeps applications portable in case the PaaS layer and application services need to move to another infrastructure provider. Additionally, the MoPaaS ICM mechanism enables application data to be transferred automatically along with the associated applications via its data synchronization service. Developers don’t have to worry about infrastructure-specific requirements for their applications, because the MoPaaS platform takes care of these concerns for them.


The initial success of Cloud Foundry-based PaaS providers demonstrates the strength of the Cloud Foundry open PaaS solution and the power of its open source ecosystem and community. This is a win for everyone invested in the success of Cloud Foundry.

We are excited to become the first company in China to join the Cloud Foundry Core community. By being a member of the Core ecosystem, Anchora is committed to the open PaaS concept and delivering cloud portability to Chinese application developers and service providers.

About the Author: Wei-Min Lu is the founder and CEO of Anchora. Anchora provides both MoPaaS, a public open cloud platform with extensions for mobile and data applications, and a dedicated PaaS solution that facilitates application consolidation and cloud platform portability for enterprises. Dr. Lu has over 15 years of experience in product development and marketing. Before he founded Anchora in 2008, he served in key engineering and management positions at IBM and NASA-JPL. His background is in cloud computing, cybernetics, machine learning, information retrieval and storage technologies. Dr. Lu holds a Ph.D. in Electrical Engineering/Math from Caltech and a B.Sc. from Tsinghua University, China.


Continuous Integration to Cloud Foundry Using Jenkins in the Cloud

Continuous integration using Jenkins is increasingly seen as an effective tool for reducing the cycle time from product backlog to receiving actual user feedback. This can result in real increases in developer and team productivity when combined with an Open PaaS such as Cloud Foundry.

In this guest blog, Mark Prichard, Senior Director of Product Management for CloudBees®, explains how to create a fully automated environment for application build, test and deployment to Cloud Foundry using Jenkins in the Cloud via the CloudBees DEV@cloud™ build service.

Spring Framework is the most widely adopted development model for Java-based enterprise applications, used by millions of developers for its powerful abstractions and declarative support for crucial application infrastructure concepts. Jenkins is the world’s number one open-source continuous integration (CI) platform, with deep roots in the Java community and 60,000 installations worldwide. These two technologies are part of the fabric that has made Java a highly productive, agile programming environment for business applications. With the rise of cloud computing and Platform as a Service (PaaS), developers want CI services to build, test and deploy their Spring applications entirely in the cloud, using elastic, on-demand resources. You can now use Jenkins via CloudBees DEV@cloud to continuously deploy to Cloud Foundry.

Usage of Continuous Integration and Delivery for Release Management

In our most recent annual Jenkins CI user survey, a couple of facts stood out: first, 74% of users are building Java applications with Jenkins (although we also see significant development in C/C++, Javascript, Python, C#, PHP, Ruby, Scala and Groovy); and second, we are seeing a really big up-tick in usage by large organizations, with 60% growth among the very largest group. Overall, the number of installations is up 66% in the last year, with over 83% of those surveyed using Jenkins for what they consider mission-critical development. Looking at those results, it seems very clear that many of those large, mission-critical applications are using Java with Spring Framework – with CI provided by Jenkins.

Another key trend that is growing in importance daily is Continuous Delivery. More and more organizations are looking to embrace an agile model in which stringent, automated testing allows enhancements or “micro-releases” to go live without the traditional waterfall release cycles. We are seeing a major shift in enterprise software development to cloud-based, continuous delivery, with fully automated quality, coverage, functional and performance tests gating live deployments. This is the new best practice, and it is now available for Cloud Foundry development.

In this blog post, I’m going to show you how easy it is to set up a CI job using Jenkins via the CloudBees DEV@cloud service to automatically build, test and deploy a rich Spring Framework application to Cloud Foundry.

Using Jenkins in the Cloud to Continuously Deploy Spring Apps to Cloud Foundry

Here’s an overview of the process:

  1. Link your Cloud Foundry and CloudBees accounts using OAuth 2.0 for secure and automatic deployment
  2. Clone a Git repo and set up a Jenkins job to build automatically when changes are pushed (we’ve provided ClickStarts™ that automatically do these tasks for you)
  3. Set up and configure the Jenkins Cloud Foundry deployment plugin to push your application to Cloud Foundry if your build succeeds

CloudBees - Cloud Foundry Flow2

That’s it: your application and its services are now running live on Cloud Foundry!

Let’s take a quick look at all this cool stuff. First, we need a way to allow secure access from your CloudBees DEV@cloud account to the Cloud Foundry deployment services. CloudBees and Cloud Foundry both support the industry-standard OAuth 2.0 protocol, allowing you to establish a trust relationship between your accounts without the need for either party to store account details like passwords from the other service. Go to the CloudBees authorization page, which will redirect you to the Cloud Foundry login. Log in and authorize your CloudBees account to deploy to your Cloud Foundry account.

blog fig1

Next we want to set up a build job. That’s really easy, even if you’ve never used Jenkins before: we’ve set up ClickStarts that will clone private Git repositories with a couple of fully-featured Cloud Foundry/SpringSource examples from GitHub (springmvc-hibernate and petclinic-grails) and then create Jenkins jobs for those builds. From the Cloud Foundry ClickStarts launch page all you need to do is click on the Spring icon, enter the name you want to use for your build and in a few seconds you’ll be taken to the Jenkins build job, which will start automatically as soon as the repository is available.

Cloud Foundry ClickStarts

At this point, you have everything set up: a private Git repo to use for development, a Jenkins CI build job and automatic deployment to Cloud Foundry. The build job has been configured to use your CloudBees account name as part of the Deployment Hostname (e.g. springmvc-hibernate-<youraccount>), but of course you can change that if you like. You can now clone a local copy of the source code repository (click on the Repositories tab on the toolbar for full details) and you have a fully automated, cloud-based develop-build-test-deploy Continuous Delivery environment for your Spring/Grails applications. Every time you push an upstream commit, Jenkins will run a complete build/test and deploy to Cloud Foundry.

blog fig2

As you can see by browsing the console output for the build job, this is a substantial application that uses many aspects of Spring Framework, including Hibernate for the mapping between the Java application and two data services, Redis and PostgreSQL, that also need to be provisioned on Cloud Foundry and bound to the application. That is all done as part of the deployment; you can see the details in the Services section of the host service configuration.

Cloud Foundry ClickStarts Java7

Once the build and deployment are complete, you can go into the console output for the job and see all the details of the deployment. If you have vmc set up on your workstation, you can use vmc apps and vmc services to verify those deployments, like this:

blog fig vmc

All you have to do now is browse to those URLs and you’re up and running with Spring/Grails and Jenkins in the cloud – enjoy!



Cloud Foundry together with the CloudBees Jenkins in the Cloud service gives you a complete Continuous Deployment solution for your enterprise Spring Framework and Grails projects:

  • You have access to all the capabilities of a fully managed Jenkins as a service – you don’t have to set up Jenkins or the build systems.
  • You can use Jenkins to deploy your Java Spring applications seamlessly to Cloud Foundry.
  • You always have enough build capacity — CloudBees dynamically adds more build machines as you need them.
  • It’s free to get started.

Watch the video: Continuous Integration to Cloud Foundry Using Jenkins in the Cloud
Give it a try:

About the Author: Mark Prichard, Senior Director of Product Management for CloudBees, speaks and blogs regularly as an evangelist for Platform as a Service. He came to CloudBees after 13 years at BEA Systems and Oracle, where he was product manager for the WebLogic Platform. A graduate of St John’s College, Cambridge and the Cambridge University Computer Laboratory, Mark works in Los Altos, CA.


NTT Communications, World’s Leading Telecom, Joins Cloud Foundry Core

As the world’s leading telecom, NTT Communications has experience delivering global cloud services to enterprise customers. Enterprises in Japan are seeking the openness, choice, agility and extensibility that an Open PaaS can provide.

In this guest blog, Hideki Kurihara, Product Lead for NTT Communications’ Global Cloud Services, explains why NTT Communications selected Cloud Foundry as the basis for its Cloudn PaaS and why it chose to join Cloud Foundry Core.

Test NTT’s Cloudn PaaS for Cloud Foundry Core Compatibility here.

We are excited to join the Cloud Foundry Core community as proof of our commitment to making cloud portability a reality for Japanese developers. As the world’s leading telecom, every day we see customers interested in using PaaS to be more agile. But we also hear concerns about vendor lock-in and the ability to meet the needs of a complex enterprise environment. We chose to build Cloudn PaaS on top of Cloud Foundry because of its multi-cloud nature, ability to integrate with existing assets, and solid API foundation for adding management and monitoring features. Using Cloud Foundry as the base, we are extending Cloudn PaaS for developers and enterprise customers in Japan. Together with other Cloud Foundry Core partners, we are delivering cloud portability to Japanese users as well as global users of Cloud Foundry.

The Drivers for Open PaaS in Japan

The customer community in Japan wants more agility, choice, extensibility and manageability when it comes to cloud. Customers across sectors are looking to innovate faster in response to a very dynamic environment. For example, cloud has been used extensively for emergency infrastructure services in response to recent natural disasters. There has been a sharp increase in demand for cloud services to support smartphone and tablet application development. More global businesses have started considering cloud services as part of their global IT standardization so applications can be ported on demand from one place to another.

Customers here also want choice – choice to add new frameworks and services, and even to port their applications back on-premise or to another provider in Japan and other countries. Enterprises tend to prefer customization to augment their business. But they also value industry or open standards in the services they consume so they can deploy their application assets in the most appropriate place. When it comes to PaaS, application portability needs to be taken more seriously. Open PaaS, that is PaaS based on an open or industry standard, is what customers are talking about here.

Choosing and Building a PaaS Around Cloud Foundry

Meeting the needs of developers and operations for a PaaS that is open, extensible, and easily managed in the enterprise environment is difficult. We chose to build Cloudn PaaS on top of Cloud Foundry specifically because of its:

1) Openness and portability

2) Flexibility – in configuration and ability to integrate with existing enterprise assets and services

3) Extensibility – the ability to add frameworks and services, but even beyond that, to build UI based tools that make it easy to manage and add resources to applications and navigate very large installations

We’ve relied heavily on the open source nature of Cloud Foundry. We have added memcached, our own filesystem service, and support for Java web application servers (Resin/Resin Pro) so our customers’ existing applications can work seamlessly on Cloudn PaaS. But Cloud Foundry Core provides our customers’ developers the choice to use a common floor of runtimes and services when programming their applications. This gives them assurance they can port their applications to a Cloud Foundry Core provider in another geography not served by NTT Communications, or to a Cloud Foundry instance in their own data centers.
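From an application’s point of view, a bound service such as our memcached offering is discovered through the `VCAP_SERVICES` environment variable that Cloud Foundry injects as JSON. The sketch below shows the general shape of that lookup; the service label (`memcached-1.4`) and the credential fields are illustrative assumptions, not the exact schema of the Cloudn PaaS service.

```ruby
require 'json'

# Illustrative sketch: Cloud Foundry exposes bound services to an app
# through the VCAP_SERVICES environment variable as a JSON document
# keyed by service label. The label and credential field names below
# are assumptions for illustration.
def memcached_credentials
  services = JSON.parse(ENV['VCAP_SERVICES'] || '{}')
  label, instances = services.find { |name, _| name.start_with?('memcached') }
  return nil unless label
  instances.first['credentials']
end

# Simulate the environment an app bound to a memcached service might see.
ENV['VCAP_SERVICES'] = {
  'memcached-1.4' => [
    { 'name' => 'cache',
      'credentials' => { 'host' => '10.0.0.5', 'port' => 11211 } }
  ]
}.to_json

creds = memcached_credentials
puts "#{creds['host']}:#{creds['port']}"   # → 10.0.0.5:11211
```

Because the lookup is driven entirely by the environment, the same application code works unchanged whether it is deployed to Cloudn PaaS or to any other Cloud Foundry installation that binds an equivalent service.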

Once in their own data centers, customers typically have a heterogeneous mix of infrastructure on which they wish to deploy Cloud Foundry, depending on the SLA and tenancy models. We initially deployed Cloud Foundry on top of our Enterprise Cloud services platform using vSphere and vCloud Director. We use this instance as our own Cloud Foundry development environment and plan to use a similar environment for future enterprise/private PaaS. On the other hand, the public multi-tenant instance of Cloudn PaaS has been deployed on a different cloud platform with attributes and services more tailored for developer centric markets.

Cloud Foundry is designed with sufficient flexibility to satisfy a huge variety of needs for PaaS environments. The Cloud Foundry system consists of several components, and users can set up their environment by choosing the necessary combination and number of components for their needs (Fig. 1).


Cloud Foundry supports a wide range of environments—from a private PaaS on a single server to a huge public PaaS on a cluster of over a thousand servers—and can be set up to suit the scale as well as its reliability and support features. Moreover, users can add components on demand, so it is possible to start with the minimum configuration and increase the scale as the load grows.
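To make the component model above concrete, here is a simplified sketch of how an operator might choose components and instance counts for a small installation. The job names follow the Cloud Foundry component names of the time; the manifest shape is heavily abbreviated, not a complete BOSH deployment manifest.

```yaml
# Simplified, illustrative sketch of component selection and counts;
# a real BOSH deployment manifest has many more required fields.
jobs:
  - name: nats               # message bus connecting all components
    instances: 1
  - name: cloud_controller   # API endpoint for developers
    instances: 2
  - name: router             # routes HTTP traffic to app instances
    instances: 2
  - name: dea                # droplet execution agents run app instances
    instances: 4             # increase this count as application load grows
  - name: health_manager     # monitors and restarts failed app instances
    instances: 1
```

Scaling out then amounts to raising an instance count and redeploying, which is what allows an installation to start minimal and grow with load.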

Enterprise integration is one key area where we see an opportunity to add value for our customers on top of Cloud Foundry. The Cloud Foundry services gateway component addresses the ability to add enterprise data services that our customers request for HA, recovery, and backup. The service gateway itself is nothing more than a REST gateway for binding any service; the real magic is in the service node implementation, which allows multi-node deployments for HA and clustering. We are using this component to create our own multi-node deployments of services, such as the aforementioned filesystem and memcached services.
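The gateway/node split described above can be sketched as follows. The class names and the least-loaded placement policy are illustrative assumptions, not the actual Cloud Foundry gateway or the Cloudn PaaS implementation; the point is that the gateway only routes provisioning requests, while the node layer decides placement across multiple nodes for HA.

```ruby
# Illustrative sketch of the service gateway / service node split.
class ServiceNode
  attr_reader :id, :instances

  def initialize(id)
    @id = id
    @instances = []   # service instances hosted on this node
  end

  def provision(name)
    @instances << name
  end
end

class ServiceGateway
  def initialize(nodes)
    @nodes = nodes
  end

  # The gateway translates a provision request into a placement
  # decision; here we pick the least-loaded node, mimicking how a
  # multi-node deployment spreads instances for HA.
  def provision(name)
    node = @nodes.min_by { |n| n.instances.size }
    node.provision(name)
    { node: node.id, name: name }
  end
end

nodes = (1..3).map { |i| ServiceNode.new("memcached_node_#{i}") }
gateway = ServiceGateway.new(nodes)
4.times { |i| gateway.provision("cache-#{i}") }
nodes.each { |n| puts "#{n.id}: #{n.instances.size} instance(s)" }
```

Because the placement logic lives behind the gateway’s interface, an operator can swap in a different policy (rack-aware, replicated, clustered) without changing how applications bind to the service.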

NTT Communications’ Customizations of Cloud Foundry for Enterprises in Japan

Although Cloud Foundry is a promising PaaS software solution with many good features, as described above, it is still not perfect. For example, it naturally does not provide integration with other NTT Communications services out of the box. Therefore, we have been working to extend Cloud Foundry to meet the specific needs of our enterprise customers and developers in Japan.

Contributions and Customizations to Cloud Foundry

(1) Reliability of Cloud Foundry components

Because Cloud Foundry was a very young project when we started our development, NTT Communications and NTT R&D (Software Innovation Center) have been examining the performance and scalability of Cloud Foundry by conducting various tests and fixing the problems they revealed. For example, by fixing the source code and adding external supporting systems, we resolved a single point of failure in an important component and a problem where some components could not be recovered after failures.

(2) Convenience of Cloud Foundry

Since convenience is important for commercial services, we created an installer for vmc, the Cloud Foundry command-line client that serves as the console for PaaS users, and we are now developing a function that links vmc with version control systems such as Git.

(3) Control Panel for user application management

Although Cloud Foundry has a flexible component system as described above, it is essential to provide a more user-friendly environment to fully leverage its features. Therefore, we have developed a control panel that allows users to manage their applications on Cloudn PaaS. The control panel displays a list of the user’s applications, information about each application, the status of resources for each application instance, and each application’s logs as stored by Cloudn PaaS services. Using the control panel, operators can flexibly scale out or scale up their application instances (Fig. 2).

Building a Cloud Foundry Community in Japan

The presence of active communities is indispensable to the long-term evolution of open source software. We launched the Japan Cloud Foundry Group and are fostering a community of Cloud Foundry developers and users in Japan. In addition, we will open-source code created by group members by committing it to the official Cloud Foundry repositories, and we share our know-how through workshops.

About the Author: Hideki Kurihara is the Product Lead for Global Cloud Services, including PaaS, for NTT Communications. Prior to his current position, he was engaged in NTT Communications’ North American hosting and cloud business for 10 years, serving in managerial positions such as General Manager of the Enterprise Hosting business unit and Vice President of Global Product Strategy at its subsidiaries. Hideki holds a Master of Science degree from the University of Tokyo in Japan and a Master of Business Administration degree from INSEAD in France.
