Cloud Foundry Blog

Next Generation Cloud Controller: The VMC View

In my last post, I introduced you to some of the new features we are rolling out with the new cloud controller. For reference, I’ve included a block diagram of the new structure to refresh your memory.

In the previous post, the focus was on introducing the objects and briefly discussing how they are used for operational collaboration. In this post, I want to show you how the objects are used for resource accounting, how to navigate around the objects using the Cloud Foundry Command Line Interface (VMC), and then briefly show how features like custom domains use these objects as their foundation.

Resource Accounting

From the diagram above, you can see how the organization (a.k.a., org) object acts as the root object holding a collection of spaces. Each space contains a number of applications and service instances. From a resource tracking perspective, what this means is that we can easily account for memory and services in aggregate at both the space and organization level.

If you used an org to represent a project, and then used spaces to hold your production apps, your staging playground, and a playground for each developer, you might end up with a structure that looks something like this:

org: mhlsoft
  spaces:
    - name: production
      - apps:
        - name: pds
    - name: staging
      - apps:
        - name: pds
    - name: markl
      - apps:
        - name: pds_ng_node
        - name: pds
    - name: patb
      - apps:
        - name: pds_ng_go

With this structure in place, Cloud Foundry is able to tell you the resource consumption of each space and the aggregate consumption across the entire org.

Suppose your Cloud Foundry provider sold you 16GB of RAM for $380/mo. It would be nice to know how much memory you are using in aggregate, how much is being spent on your production facing apps, and how much is being used for internally facing playgrounds.
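
With the org/space tree in place, this accounting is just a fold over that tree. Below is a minimal illustrative sketch, not cloud controller code; the app names, instance counts, and the 16GB/$380 price are made-up values for the example:

```ruby
# Illustrative sketch only: not cloud controller code. Each app reserves
# mem_mb * instances; spaces sum their apps, and the org sums its spaces.
spaces = {
  "production" => [{ name: "pds", mem_mb: 256, instances: 16 }],
  "staging"    => [{ name: "pds", mem_mb: 256, instances: 4 }],
  "markl"      => [{ name: "pds_ng_node", mem_mb: 256, instances: 1 },
                   { name: "pds",         mem_mb: 256, instances: 1 }],
}

space_usage = spaces.transform_values do |apps|
  apps.sum { |a| a[:mem_mb] * a[:instances] }
end
org_usage = space_usage.values.sum

# Prorate a hypothetical 16GB / $380-per-month plan by each space's share.
cost_per_mb = 380.0 / (16 * 1024)
space_cost  = space_usage.transform_values { |mb| (mb * cost_per_mb).round(2) }
```

Given those made-up numbers, production accounts for 4096MB of the org's 5632MB, so most of the monthly spend is attributable to the production-facing space rather than the playgrounds.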

The next generation cloud controller is designed to support exactly this scenario. This allows Cloud Foundry tooling to show an organization summary like this:

The same tooling could also show a space summary, for each space, that might look something like this:

From this brief description, you can see how the next generation system is designed to support the resource accounting and quota enforcement requirements of a wide variety of commercial systems based on Cloud Foundry.

Navigation with VMC

Enough of the system is now working that we can run the next generation cloud controller alongside the existing cloud controller. Given where we are with the work, I thought it would be a good idea to show you how to navigate around the system using the next generation VMC while targeting a next generation cloud controller.

The first thing to note is that we have extended the “vmc target” command to accept an optional org and space switch:

$ vmc help target
Set or display the current target cloud
Usage: target [URL]
Options:
 --url URL Target URL to switch to
 -i, --interactive Interactively select organization/space
 -o, --organization, --org ORGANIZATION Organization 
 -s, --space SPACE Space

The new -o and -s switches allow you to select an org and space within the target cloud. In one of my test clouds I have two orgs set up; one has a few spaces, the other just a single space. Watch as I navigate around using “vmc target”:

# show my current context (cloud, org, and space)
$ vmc target
target: https://api.fakedomain.com
organization: pds
space: production

# switch to the staging space
$ vmc target -s staging
Switching to space staging... OK
target: https://api.fakedomain.com
organization: pds
space: staging

# switch to my other org
$ vmc target -o mhlsoft
Switching to organization mhlsoft... OK
Switching to space blaster... OK
target: https://api.fakedomain.com
organization: mhlsoft
space: blaster

The new “vmc orgs” command and “vmc spaces” command are simple enumeration commands that display the orgs that the current user belongs to and that she can target, and the spaces within that org:

# enumerate the orgs that are available to me within this cloud
$ vmc orgs
Getting organizations... OK
pds
mhlsoft

# enumerate the spaces within the current org
$ vmc spaces
Getting spaces in pds... OK
markl
production
qa
staging

# enumerate the spaces within my other org
$ vmc spaces -o mhlsoft
Getting spaces in mhlsoft... OK
blaster

The next new command to note is “vmc org”. This command lets you take a look at an org and see its spaces and options.

$ vmc help org
Show organization information
Usage: org [ORGANIZATION]
Options:
     --full                Show full information for an org 
 -o, --organization, --org ORGANIZATION Organization to show 

# show the current org
$ vmc org
pds:
  domains: none
  spaces: markl, production, qa, staging 

# show another org
$ vmc org mhlsoft
mhlsoft:
  domains: none
  spaces: blaster

As a peer to the “vmc org” command, there is the new “vmc space” command. This command lets you take a look at a space and see its org, its apps, its services, and its options:

$ vmc help space
Show space information
Usage: space [SPACE]
Options:
     --full                             Show full information 
     --space SPACE                      Space to show
 -o, --organization, --org ORGANIZATION Space's organization 

# show the current space
$ vmc space
production:
  organization: pds
  apps: pds-mgmt, pds
  services: redis, rabbitmq, redis-stats

# show a different space
$ vmc space staging
staging:
  organization: pds
  apps: stress, pds, pds-mgmt
  services: postgres, redis-e6ebd, rabbitmq-e1a06, redis-stats

As discussed in the previous post, the app names and service names are scoped to a space. This means that you can re-use app names across spaces. To see this from a different command’s perspective, take a look at the enhanced “vmc apps” command:

$ vmc help apps
List your applications
Usage: apps
Options:
     --framework FRAMEWORK    Filter by framework
     --name NAME              Filter by name
     --runtime RUNTIME        Filter by runtime
     --space SPACE            Show apps in a given space
     --url URL                Filter by url

# list apps in the current space
$ vmc apps
Getting applications in production... OK

pds-mgmt: started
  platform: sinatra on ruby19
  usage: 256M × 8 instances
  services: rabbitmq, redis-stats, redis

pds: started
  platform: node on node06
  usage: 256M × 16 instances
  services: redis, redis-stats 

# list only node apps in the staging space
$ vmc apps --space staging --runtime node*
Getting applications in staging... OK

stress: started
  platform: node on node06
  usage: 256M × 1 instances
  services: rabbitmq-e1a06

pds: started
  platform: node on node06
  usage: 256M × 1 instances
  services: redis-e6ebd, redis-stats
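
The per-space scoping shown above can be sketched as a registry keyed by the (space, name) pair. This is purely an illustration of the rule, not the actual cloud controller implementation:

```ruby
# Illustrative sketch only: app names are unique within a space but freely
# reusable across spaces, as if the registry were keyed by [space, name].
registry = {}

def register(registry, space, app_name)
  key = [space, app_name]
  raise ArgumentError, "#{app_name} already exists in #{space}" if registry.key?(key)
  registry[key] = { name: app_name, space: space }
end

register(registry, "production", "pds")
register(registry, "staging", "pds")       # same name, different space: fine
duplicate_rejected =
  begin
    register(registry, "production", "pds")
    false
  rescue ArgumentError
    true                                   # duplicate within one space: rejected
  end
```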

Hopefully this will give you a good feel for how to use VMC to navigate around the system, how you can segregate apps into spaces, and how these new features will help with basic operational collaboration. All of the code that backs this system is being developed in the open, so poke around the vmc and cloud controller repos if you are curious, or better yet, come on in and help us out!

Organizations, Spaces, and Custom Domains

Finally, some of you very careful readers might have noticed that the “vmc org” output includes a row for “domains:”. This code is still under development as part of this phase’s work stream, and it is the first step in rolling out official, integrated support for custom domains. We will talk more about this feature as it takes form. The short story is simple: an org can be assigned one or more domains or wildcard domains. The same capability extends to spaces, with the restriction that a space may only attach to a domain enabled for its org. Once a space is enabled for a domain, apps within that space can use that domain. Semi-fake output is included below to illustrate this point.

# show the current org, note that
# it's enabled for *.cloudfoundry.com as well
# as a custom wildcard domain
$ vmc org
pds:
  domains: *.cloudfoundry.com, *.mydomain.com
  spaces: markl, production, qa, staging 

# show the production space
# note that it is ONLY enabled for
# the custom domain, production apps
# can not accidentally live on *.cloudfoundry.com
$ vmc space production
production:
  organization: pds
  domains: *.mydomain.com
  apps: pds-mgmt, pds
  services: redis, rabbitmq, redis-stats

# show the staging space
# note that it is ONLY enabled for *.cloudfoundry.com
# staging apps can not accidentally live on *.mydomain.com
$ vmc space staging
staging:
  organization: pds
  domains: *.cloudfoundry.com
  apps: pds-mgmt, pds
  services: redis, rabbitmq, redis-stats

# list apps in production. note their URLs
$ vmc apps --space production
Getting applications in production... OK

pds-mgmt: started
  platform: sinatra on ruby19
  usage: 256M × 8 instances
  urls: manage.mydomain.com
  services: rabbitmq, redis-stats, redis

pds: started
  platform: node on node06
  usage: 256M × 16 instances
  urls: www.mydomain.com, mydomain.com, pds.mydomain.com
  services: redis, redis-stats 
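
The two rules in the output above, a space may only attach to a domain enabled for its org, and an app may only live under its space's domains, can be sketched as follows. This is an illustration only, with hypothetical helper names, not the actual cloud controller model:

```ruby
# Illustrative sketch of the domain rules described above; not real code.
ORG_DOMAINS = ["*.cloudfoundry.com", "*.mydomain.com"]

# A space may only attach to a domain that is enabled for its org.
def attach_domain(space_domains, org_domains, domain)
  raise ArgumentError, "#{domain} not enabled for org" unless org_domains.include?(domain)
  space_domains << domain
end

# An app URL is valid only under one of its space's domains.
def url_allowed?(space_domains, url)
  space_domains.any? do |d|
    url.end_with?(d.sub("*.", ".")) || url == d.sub("*.", "")
  end
end

production = []
attach_domain(production, ORG_DOMAINS, "*.mydomain.com")

url_allowed?(production, "www.mydomain.com")      # allowed
url_allowed?(production, "pds.cloudfoundry.com")  # blocked: not enabled here
```

Because the production space attaches only to *.mydomain.com, a production app cannot accidentally end up on *.cloudfoundry.com, which is exactly the guarantee the semi-fake output illustrates.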

Summary

The next generation cloud controller introduces the new organization and space objects. These objects provide the foundation for a wide range of commercial-class features, including operational collaboration, advanced quota management and control, and custom domains. While the code speaks for itself, I will continue to provide added color and commentary in these blog posts.

-markl

Mark Lucovsky, VP of Engineering – Cloud Foundry


Heads Up on Some New Cloud Controller Features

As I discussed in my post at the end of April, the Cloud Controller is undergoing major surgery, and this work is being done in the cloud_controller_ng repo. If you are following the review stream for the cloud_controller_ng project, it’s time to take note of the new Organization and AppSpace objects. These objects are the foundation of several new features we are rolling out this year:

  • operational collaboration
  • advanced quota management and control
  • custom domains and assorted application features

This post will focus on the objects themselves and will discuss operational collaboration to demonstrate their significance. Other features will be discussed in subsequent posts.

In order to understand the new objects, it’s best to briefly review the current model and some of its limitations. The diagram below is a high-level view of the current cloud controller model.

Current Cloud Controller Model

In this scheme, each user account directly contains both named applications and named service instances. The names are scoped to the user account object, and only that account can manipulate the applications and services. The simplicity of this model exposes an operational issue when more than one person is responsible for the ongoing maintenance of an application.

The issue is that when your production application is created under a given account, ONLY that account can manipulate the app (update the code, scale it out, increase its memory size, etc.). For a single-developer operation, this works fine. However, once more than one person is responsible for an app (e.g., your small three-person startup), this approach is problematic. To compensate, people either use a shared account, share passwords, or invoke admin privileges that allow an admin to manipulate the objects in another user’s account. These workarounds function, but each is a poor solution that exposes its own set of problems (inability to generate a precise audit log of who did what, too many folks with admin privileges, etc.).

The new model is designed to address the aforementioned deficiencies in a scalable and sustainable way, and at the same time, provide us with the foundation needed in order to deliver additional advanced features.

The diagram below is a high level view of the new cloud controller model.

New Cloud Controller Model

Under the new model, applications and services are now scoped to a new object called the AppSpace. Multiple users can have access to an AppSpace, and each user has a set of permissions that determine what operations she can perform against the applications and services within the space. Instead of shared accounts, sharing passwords, or invoking admin rights, you can simply create an AppSpace for your production facing applications and then allow a select group of developers to manipulate the apps and services within that space.

We have taken things a step further than this with the introduction of the Organization object. This object can contain a number of AppSpaces as well as a membership list of users, etc. If you are familiar with the GitHub account model, if you squint real hard you can see that from a scoping and permissions standpoint, a GitHub Organization and a Cloud Foundry Organization are very similar, and a GitHub repo and a Cloud Foundry AppSpace are similar.
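
The scoping described above can be sketched in a few lines. This is an illustration only; the role names and the permission check are hypothetical, not the actual cloud controller model:

```ruby
# Illustrative sketch only: per-space permissions replace shared accounts.
# Role names and the permission check are hypothetical.
Org      = Struct.new(:name, :members, :app_spaces)
AppSpace = Struct.new(:name, :permissions)   # permissions: user => list of rights

def can_push?(space, user)
  (space.permissions[user] || []).include?(:write)
end

production = AppSpace.new("production",
                          "alice" => [:read, :write],
                          "bob"   => [:read])
org = Org.new("mhlsoft", ["alice", "bob", "carol"], [production])

can_push?(production, "alice")   # can update the production apps
can_push?(production, "bob")     # read-only: can look, not touch
can_push?(production, "carol")   # org member with no space access at all
```

The payoff is that each action is taken under an individual account, so an audit log of who did what falls out naturally, with no shared passwords and no blanket admin rights.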

I’ll save the details on advanced quota management and features for another post, but if you read the code and review stream you can see how we are using these new objects as a foundation for quota management, custom domains, and many more advanced features.

-markl

Mark Lucovsky, VP of Engineering – Cloud Foundry


Refactoring the VCAP Repo

In my previous post, I talked briefly about the vcap repo refactoring effort. This week, I want to walk you through the process in a little more detail.

If you look closely at the vcap repo, you can see that it’s a collection of major system components (dea, cloud controller, health manager, etc.). This structure is not scalable for the long haul, on a number of fronts.

For instance, when building releases we often find ourselves wanting different components at different stages of completion. Within the cf-release repo, we currently have a single sub-module pointer to vcap (src/core). Given the component diversity under vcap, we often find ourselves wanting to manage one launch schedule per component (e.g., manage the dea and health manager release cycles differently). The single sub-module pointer approach was too constraining.

Moving forward, we are pulling the major components out of the vcap repo and into their own repos. In the cf-release repo, under src, we add a new sub-module pointer to each new component, adding it to the release. With this structure in place, each major component can publish its own release stream of blessed changes.

The other major change that’s part of this effort is the formalization of the shared vcap-common repo and of how components formally link to it.

Walking through this in more detail in the context of the dea component: the dea repo is the long-term location of the dea component’s code and test cases. This component previously lived in the vcap repo as a sub-directory.

The cf-release repo contains a sub-module pointer called “dea”, which links to the new dea repo. See: https://github.com/cloudfoundry/cf-release/tree/master/src and note the sub-module pointers for core, dea, etc. Over time, as we complete the repo refactoring work, we will have additional sub-module pointers (for health manager, cloud controller, etc.).

This repo also contains the package definitions for the various components. In the case of the dea, the packaging in cf-release now refers to the new dea repo, and not the dea sub-directory in the old vcap repo.

And finally, with this round of changes, vcap components formally link to vcap-common as a gem, using a git URL in their Gemfile. For example, see the gem reference to vcap_common from the dea’s Gemfile below.

gem 'vcap_common', '~> 1.0.8', :git => 'git://github.com/cloudfoundry/vcap-common.git', :ref => '9673dced'

The repo refactoring work is moving along, and if you have some cycles to help, engage with Jesse on vcap-dev@cloudfoundry.org. The dea-specific change has been a work in progress over the last few weeks. The final piece is to launch on cloudfoundry.com; that step is in flight as we speak.


Cloud Foundry Roadmap: Below the Water Line

Earlier this month we moved to a new open source contribution process for Cloud Foundry.  As part of the new process, we also want to share more information about what code is coming in the future.  This post is the first in what will be a regular series on the Cloud Foundry roadmap.

During the Cloud Foundry Anniversary event we made a point to call out that 80% of our work is really “below the water line”. We are doing a lot of work on the core infrastructure, and only a small fraction of what we do surfaces itself as a visible feature.

For those of you watching the repos, I want to give you a little context on some of the pieces that are sitting around in the code, or are in the process of being added.

We have been working on a major deconstruction of the core cloud controller (see the event slides 7-9). This involves systematically removing responsibility from the cloud controller and moving these pieces into independently scalable components, which run with different levels of isolation. If terms like cloud controller, router, and dea are new to you, review this presentation.

If you look closely at the current code base you can see the new User Account and Authentication (UAA) component. We’ve been validating UAA on CloudFoundry.com where it is performing authentication for a subset of the accounts. The UAA code has been in the public GitHub repository for several months, and we have rolled pieces of it incrementally into CloudFoundry.com in phased deployments. We have one more turn of the deployment crank and when this is done, UAA will be the source of all authentication, and the old authentication system embedded in cloud controller can be removed. The UAA component is key for us as we start enabling more advanced forms of authentication and integration.

The deconstruction effort is nearly complete and with that behind us, the last major new replacement component is landing in the code base as we speak. What’s left of the old cloud controller is being replaced with an all new system.

Over the next several days you will see the “cloud_controller_ng” repo appear and will see the first batch of commits. Architecturally, the new cloud controller will adopt the more traditional Sinatra/Sequel framework used in several other components. Functionally, the new cloud controller exposes a new set of objects that provide additional scoping and sharing semantics designed to support operational collaboration and advanced, pooled quota controls. Watch for this repo, and then carefully read the new model and note the new “org” and “app-space” objects.

In the future, this sort of development will happen directly in the Gerrit-based open repos. Moving code in at this stage of development is not our normal mode of operation, but during the final steps of the transition, we have a handful of repos whose move is still in flight.

The next major component to watch for is the next-generation vmc client. The vmc gem exposes the core Cloud Foundry API as a simple-to-use set of Ruby objects. Unfortunately, with the way the API is exposed, some key functionality (creating and updating applications) is poorly surfaced through the object model: you literally have to cheat and use pieces of the CLI class hierarchy in order to use the gem for these functions. The NG version of vmc addresses this with a completely clean, well-factored object model. In addition, it makes major improvements in extensibility, its approach to scripting and integration is well thought out, and it eliminates the use of fake tables in its output.

I’ll do my best to keep you updated as there is a lot of activity in the code base. On your own though, the best way to stay up to date and engaged is to join the code review discussions or the project-level discussions on vcap or BOSH.

-markl

Mark Lucovsky, VP of Engineering – Cloud Foundry


The New CloudFoundry.org = Gerrit + Jenkins + GitHub

When we launched Cloud Foundry last year we started with an inefficient open source process based on a complex dual repo structure. The workflow was cumbersome for us to maintain, and at times frustrating for you to consume.

Today we are launching a new OSS contribution process based on a fully integrated Gerrit/Jenkins/GitHub workflow. In this workflow, Cloud Foundry contributors send their commits to a public Gerrit server. When a commit occurs, the Jenkins CI system will run various tests. If the tests succeed, the commit is marked as “Verified”.

The code review system allows developers to discuss and iterate on changes. Anyone can comment and vote +1 or -1 on a change, while committers can vote +2 (i.e., approve a change). Once the +2 has been issued and the change has been verified, the contributor is free to submit the change.
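
The submit rule can be sketched as a simple predicate: a change needs the Jenkins “Verified” mark plus a committer +2. This is a simplified, hypothetical illustration; real Gerrit submit rules are configurable:

```ruby
# Illustrative sketch of the submit rule described above; simplified.
def submittable?(change)
  change[:verified] && change[:votes].include?(2)
end

change = { verified: false, votes: [] }
change[:votes] << 1                    # community +1: feedback, not approval
before_review = submittable?(change)   # not yet: no +2, not verified

change[:verified] = true               # Jenkins tests passed
change[:votes] << 2                    # committer approves
after_review = submittable?(change)    # free to submit
```

Note that any number of community +1 votes never adds up to an approval; only a committer's +2, combined with a passing Jenkins run, unlocks submission.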

The following diagram illustrates this workflow:

This new code review process is similar to that used by projects such as Android and OpenStack. Cloud Foundry development will occur in the open and the entire community will have visibility into the project’s progress.

Documentation on how to contribute to Cloud Foundry open source is available at http://cloudfoundry.org.

We are looking forward to working with the open source community using this new process. Thank you all for your interest in and support for Cloud Foundry.

Thanks, Mark Lucovsky, VP of Engineering
