Cloud Foundry Blog

Cloud Foundry Now Supports Play!

Cloud Foundry now supports Play 2.0 as a first-class framework. Play is a lightweight, stateless, web-friendly framework for Java and Scala. Developers can leverage its event-driven, non-blocking I/O architecture to build highly scalable applications. Play 1.0 applications were previously deployable to Cloud Foundry as WAR files. Play 2.0, which doesn’t have built-in support for WAR files, can now be deployed to CloudFoundry.com as a fully supported framework, with auto-reconfiguration, simplified service connections, and automatic database management. Play developers, welcome to Cloud Foundry!

Getting Started with Play 2.0

First, we will need to install or update the Cloud Foundry command line tool, VMC, to the latest version by using the following command:

gem install vmc 

We can verify that we have the right version using:

vmc -v 

which should show the version to be 0.3.18 or higher. Now let’s get started with the Java zentasks sample found in the Play 2.0 distribution. We’ll run the “play dist” command, which compiles our code, retrieves all the required dependencies, and creates a self-contained distribution that can be uploaded to Cloud Foundry.

dev$: cd play-2.0.1/samples/java/zentasks
zentasks$: play clean dist
[info] Loading project definition from /Users/jencompgeek/development/resources/play-2.0.1/samples/java/zentasks/project
[info] Set current project to zentask (in build file:/Users/jencompgeek/development/resources/play-2.0.1/samples/java/zentasks/)
[success] Total time: 0 s, completed May 15, 2012 2:27:29 PM
[info] Updating {file:/Users/jencompgeek/development/resources/play-2.0.1/samples/java/zentasks/}zentask...
[info] Done updating.
[info] Compiling 10 Scala sources and 9 Java sources to /Users/jencompgeek/development/resources/play-2.0.1/samples/java/zentasks/target/scala-2.9.1/classes...
[warn] Note: Some input files use unchecked or unsafe operations.
[warn] Note: Recompile with -Xlint:unchecked for details.
[info] Packaging /Users/jencompgeek/development/resources/play-2.0.1/samples/java/zentasks/target/scala-2.9.1/zentask_2.9.1-1.0.jar ...
[info] Done packaging.

Your application is ready in /Users/jencompgeek/development/resources/play-2.0.1/samples/java/zentasks/dist/zentask-1.0.zip

Now we can deploy the application to Cloud Foundry with the VMC push command:

zentasks$: vmc push --path=dist/zentask-1.0.zip
Application Name: zentasks
Detected a Play Framework Application, is this correct? [Yn]:
Application Deployed URL [zentasks.cloudfoundry.com]:
Memory reservation (128M, 256M, 512M, 1G, 2G) [256M]:
How many instances? [1]:
Create services to bind to 'zentasks'? [yN]: y
1: mongodb
2: mysql
3: postgresql
4: rabbitmq
5: redis
What kind of service?: 3
Specify the name of the service [postgresql-38199]: tasks-db
Create another? [yN]:
Would you like to save this configuration? [yN]: y
Manifest written to manifest.yml.
Creating Application: OK
Creating Service [tasks-db]: OK
Binding Service [tasks-db]: OK
Uploading Application:
  Checking for available resources: OK
  Processing resources: OK
  Packing application: OK
  Uploading (186K): OK
Push Status: OK
Staging Application 'zentasks': OK
Starting Application 'zentasks': OK

Looks like zentasks deployed successfully. Let’s check the logs:

zentasks$: vmc logs zentasks
====> logs/stdout.log <====

Auto-reconfiguring default
Enabling JPA auto-reconfiguration
Play server process ID is 13269
[warn] play - Plugin [play.db.jpa.JPAPlugin] is disabled
[info] play - database [default] connected at jdbc:postgresql://172.31.244.70:5432/dd2c9bc5b72134998adcfe4dcfa6660f4
[info] play - Application started (Prod)
[info] play - Listening for HTTP on port 59907...

Our Play 2.0 application is up and running on Cloud Foundry in 2 simple steps, no modification required!

Like most Play applications, zentasks contains database evolutions. Cloud Foundry automatically applied these evolutions to the database on application start. But how was the app able to make use of the PostgreSQL service we provisioned and bound to the application during deployment? If we look at the application.conf file, we see that the application is configured to use an in-memory database:

 
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:mem:play"

Cloud Foundry actually used a mechanism called auto-reconfiguration to automatically connect the Play application to the relational database service. If a single database configuration is found in the Play configuration (for example, “default” from above) and a single database service instance is bound to the application, Cloud Foundry will automatically override the connection properties in the configuration to point to the PostgreSQL or MySQL service bound to the application. This is a great way to get simple apps up and running quickly. However, it is quite possible that your application will contain SQL that is specific to the type of database you are using. For example, several of the samples that come with Play make use of sequences in evolution scripts. This, of course, works with the in-memory database and will also work on PostgreSQL, but it will not work on MySQL. In these cases, or if your app needs to bind to multiple services, you may choose to avoid auto-reconfiguration and explicitly specify the service connection properties.
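
For instance, an evolution along these lines (a simplified, hypothetical snippet in the style of the Play samples) runs fine against the in-memory H2 database and PostgreSQL, but fails on MySQL, which has no sequences:

# --- !Ups

create sequence project_seq start with 1000;

# --- !Downs

drop sequence project_seq;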

Connecting to Cloud Foundry Services

As always, Cloud Foundry provides all of your service connection information to your application in JSON format through the VCAP_SERVICES environment variable. However, connection information is also available as a series of properties you can use in your Play configuration. Here is an example of connecting to a PostgreSQL service named “tasks-db” from within an application.conf file:

db.default.driver=${?cloud.services.tasks-db.connection.driver} 
db.default.url=${?cloud.services.tasks-db.connection.url} 
db.default.password=${?cloud.services.tasks-db.connection.password} 
db.default.user=${?cloud.services.tasks-db.connection.username}

This information is available for all types of services, including NoSQL and messaging services. Also, if there is only a single service of a type (e.g. postgresql), you can refer to that service only by type instead of specifically by name, as exemplified below:

db.default.driver=${?cloud.services.postgresql.connection.driver} 
db.default.url=${?cloud.services.postgresql.connection.url} 
db.default.password=${?cloud.services.postgresql.connection.password} 
db.default.user=${?cloud.services.postgresql.connection.username} 

We recommend keeping these properties in a separate file (for example “cloud.conf”) and then including them only when building a distribution for Cloud Foundry. You can specify an alternative config file to “play dist” by using “-Dconfig.file”.
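
A minimal sketch of that approach, assuming the file is named conf/cloud.conf and the service is the “tasks-db” instance created above (the include directive pulls in your base settings):

include "application.conf"

db.default.driver=${?cloud.services.tasks-db.connection.driver}
db.default.url=${?cloud.services.tasks-db.connection.url}
db.default.user=${?cloud.services.tasks-db.connection.username}
db.default.password=${?cloud.services.tasks-db.connection.password}

Then build the distribution with, for example:

zentasks$: play -Dconfig.file=conf/cloud.conf dist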

Opting Out of Auto-Reconfiguration

There may be situations in which you would like to opt out of auto-reconfiguration. For example, you may have an in-memory database that should not be bound to a Cloud Foundry service. If you use the properties referenced above, you will automatically be opted out. To explicitly opt out, include a file named “cloudfoundry.properties” in your application’s conf directory, and add the entry “autoconfig=false”.
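
In other words, the opt-out file is a one-liner:

conf/cloudfoundry.properties:

autoconfig=false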

Debugging Your Play Application

If you are using a local Cloud Foundry setup, you can remotely debug your Play applications. Simply use the flag “--debug” when doing a “vmc push” or “vmc start“. You can then run “vmc instances” to get the debug host and port information:

zentasks$: vmc instances
+-------+---------+--------------------+-----------------+------------+
| Index | State   | Start Time         | Debug IP        | Debug Port |
+-------+---------+--------------------+-----------------+------------+
| 0     | RUNNING | 05/15/2012 05:50PM | 192.168.193.193 | 59845      |
+-------+---------+--------------------+-----------------+------------+

Just use the displayed debug IP and port in the remote debugger in your favorite IDE and start debugging!

Conclusion

We look forward to seeing your Play applications on Cloud Foundry. Please feel free to send us feedback or submit a pull request to help us improve our support for the Play Framework. Now get started building those apps!

- Jennifer Hickey
The Cloud Foundry Team
Don’t have a Cloud Foundry account yet?  Sign up for free today

Running Standalone Web Applications on Cloud Foundry

In this final post of the four-part series on deploying standalone apps to Cloud Foundry, we will explore how to build and deploy JVM-based web applications that aren’t packaged as traditional WAR files. This includes applications that are built on top of an NIO Framework like Grizzly or Netty (notable frameworks include Blue Eyes and vert.x) and applications that ship their own container, such as an embedded Jetty server.

Deploying a Spray Application to Cloud Foundry

Spray is a suite of lightweight Scala libraries for building and consuming RESTful web services on top of Akka. Let’s deploy the Spray simple-http-server example that uses spray-can: a low-level, low-overhead, high-performance, fully asynchronous HTTP/1.1 server.

mycomp$: git clone git://github.com/spray/spray.git
mycomp$: cd spray/examples/spray-can/simple-http-server

We will use the sbt-package-dist plugin to package the app and all of its dependencies into a Zip file that we can push to Cloud Foundry. Therefore, we need to add the following files to the simple-http-server directory:

build.sbt:

 
import com.twitter.sbt._

seq(StandardProject.newSettings: _*)

packageDistZipName := "simple-http-server.zip"

organization := "cc.spray"

name := "simple-http-server"

version := "0.1.0-SNAPSHOT"

scalaVersion := "2.9.1"

resolvers ++= Seq(
  "Typesafe repo" at "http://repo.typesafe.com/typesafe/releases/",
  "spray repo" at "http://repo.spray.cc/"
)

libraryDependencies ++= Seq(
  "cc.spray" % "spray-server" % "1.0-M1",
  "cc.spray" % "spray-can" % "1.0-M1",
  "com.typesafe.akka" % "akka-actor" % "2.0"
)

project/plugins.sbt:

  
addSbtPlugin("com.twitter" %% "sbt-package-dist" % "1.0.0")

resolvers += "twitter-repo" at "http://maven.twttr.com"

project/build.properties:

 
sbt.version=0.11.2 

Next, we need to modify Main.scala to start the HTTP server on the host and port provided by Cloud Foundry:

 
server ! HttpServer.Bind(
  Option(System.getenv("VCAP_APP_HOST")).getOrElse("localhost"),
  Option(System.getenv("VCAP_APP_PORT")).getOrElse("8080").toInt)

Now we are ready to build and deploy the sample!

mycomp$: sbt clean compile package-dist
mycomp$: vmc push simple-http-server --path=dist/simple-http-server/simple-http-server.zip
Detected a Standalone Application, is this correct? [Yn]:
1: java
2: node
3: node06
4: ruby18
5: ruby19
Select Runtime [/java]: 
Selected java 
Start Command: java $JAVA_OPTS -jar simple-http-server_2.9.1-0.1.0-SNAPSHOT.jar 
Application Deployed URL [None]: simple-http-server.${target-base} 
Memory reservation (128M, 256M, 512M, 1G, 2G) [512M]: 
How many instances? [1]: 
Create services to bind to 'simple-http-server'? [yN]: 
Would you like to save this configuration? [yN]: y 
Manifest written to manifest.yml. 
Creating Application: OK 
Uploading Application:   
Checking for available resources: OK  
Processing resources: OK 
Packing application: OK 
Uploading (248K): OK 
Push Status: OK 
Staging Application 'simple-http-server': OK 
Starting Application 'simple-http-server': OK

So we’ve pushed the simple-http-server Zip file as a standalone app with a Java runtime. We gave the command “java $JAVA_OPTS -jar simple-http-server_2.9.1-0.1.0-SNAPSHOT.jar” to start the server, as sbt-package-dist creates an executable jar file. Notice the use of the JAVA_OPTS environment variable. When we deploy this app, Cloud Foundry will set JAVA_OPTS to a min and max heap size based on the memory reservation we provide. If we are running against Micro Cloud Foundry or a local vcap setup, remote debug options will also be added to JAVA_OPTS if we push or start the app with --debug.

That’s right, you can remote debug your standalone JVM applications with your favorite IDE. Since “simple-http-server” needs a web port, we provided a URL to use. Notice the use of the ${target-base} variable for the domain. This will allow us to reuse the generated manifest against multiple clouds (such as public or Micro Cloud Foundry). Let’s visit the web page and confirm that the app is up and running:

Looks like we are in business with spray! Note that you can access this complete example here.

Deploying an Embedded Jetty Server to Cloud Foundry

Since its launch, Cloud Foundry has allowed you to easily deploy a wide variety of web applications to Tomcat. We take care of configuring and starting the container, you bring the web app! However, sometimes you may want to bundle your own container or web server. Standalone app support allows you to do this. Let’s see an example using Unfiltered, a toolkit for servicing HTTP requests in Scala. We will start by using giter8 to create a simple project template:

mycomp$: g8 softprops/unfiltered
This template generates an Unfiltered project. By default it depends on "unfiltered-jetty". For AJP support, set unfiltered_module to "unfiltered-jetty-ajp".
version [0.1.0-SNAPSHOT]:
name [My Web Project]: cf-unfiltered-sample
unfiltered_version [0.6.1]:
Applied softprops/unfiltered.g8 in cf-unfiltered-sample
mycomp$: cd cf-unfiltered-sample

We need to introduce the same sbt-package-dist build settings as the previous example. Add the following to the top of build.sbt:

import com.twitter.sbt._

seq(StandardProject.newSettings: _*)

packageDistZipName := "cf-unfiltered-sample.zip"

And create the plugins.sbt and build.properties files in the project directory as outlined above. Finally, we need to modify Example.scala to start Jetty on the port provided by Cloud Foundry:

 
val http = unfiltered.jetty.Http(Option(System.getenv("VCAP_APP_PORT")).getOrElse("8080").toInt)

And we need to modify avsl.conf to write the log file to a location relative to the app's working directory:

[handler_h1]
...
path: log
...

Now we are ready to build and deploy the sample!

mycomp$: sbt clean compile package-dist
mycomp$: vmc push cf-unfiltered-sample --path=dist/cf-unfiltered-sample/cf-unfiltered-sample.zip
Detected a Standalone Application, is this correct? [Yn]:
1: java
2: node
3: node06
4: ruby18
5: ruby19
Select Runtime [/java]:
Selected java
Start Command: java $JAVA_OPTS -jar cf-unfiltered-sample_2.9.1-0.1.0-SNAPSHOT.jar
Application Deployed URL [None]: cf-unfiltered-sample.${target-base}
Memory reservation (128M, 256M, 512M, 1G, 2G) [512M]:
How many instances? [1]:
Create services to bind to 'cf-unfiltered-sample'? [yN]:
Would you like to save this configuration? [yN]: y
Manifest written to manifest.yml.
Creating Application: OK
Uploading Application:
  Checking for available resources: OK
  Processing resources: OK
  Packing application: OK
  Uploading (248K): OK
Push Status: OK
Staging Application 'cf-unfiltered-sample': OK
Starting Application 'cf-unfiltered-sample': OK

The answers we gave here are pretty much identical to those given in the first section. We provisioned a Java runtime, provided a start command that includes JAVA_OPTS, and supplied a URL. Looks like the app is up!

You can check out this complete example here.

Conclusion

In this final installment of the four-part series, we introduced new support for standalone applications and showed some examples of common uses. We would love to hear your use cases and suggestions for enhancing this support. Please visit the Forums or JIRA, or submit a pull request. We look forward to seeing your new standalone apps!

- Jennifer Hickey
The Cloud Foundry Team
Don’t have a Cloud Foundry account yet?  Sign up for free today

Running Workers on Cloud Foundry with Spring

In the two previous posts in this series, we discussed using Cloud Foundry’s new support for standalone apps to deploy worker processes. We looked at an example using Resque for Ruby apps. In this third installment, we explore using Spring to create workers in Java apps.

Let’s walk through an example.

Deploying the Cloud Foundry Twitter Search Sample

Cloud Foundry Twitter Search includes two applications: a standalone Java application that periodically polls Twitter for tweets containing the word “cloud” and a Node.js web application that displays the results. The applications communicate via a shared RabbitMQ service. The worker publishes tweet information to a RabbitMQ exchange, and the web application consumes the tweets and pushes them to the browser using SockJS. Let’s clone the sample app from Github:

mycomp:dev$ git clone https://github.com/cloudfoundry-samples/twitter-rabbit-socks-sample
mycomp:dev$ cd twitter-rabbit-socks-sample/twitter2rabbit

The worker app is written with Spring Integration. In fact, all of the worker’s logic is contained in a single Spring context file.

mycomp:twitter2rabbit$ more src/main/resources/org/springsource/samples/twitter/context.xml
<?xml version="1.0" encoding="UTF-8"?> 
<beans 
  xmlns="http://www.springframework.org/schema/beans" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
  xmlns:int="http://www.springframework.org/schema/integration" 
  xmlns:int-twitter="http://www.springframework.org/schema/integration/twitter" 
  xmlns:int-amqp="http://www.springframework.org/schema/integration/amqp" 
  xmlns:rabbit="http://www.springframework.org/schema/rabbit" 
  xmlns:cloud="http://schema.cloudfoundry.org/spring" 
  xsi:schemaLocation="http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration-2.1.xsd http://www.springframework.org/schema/beans   http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/integration/twitter http://www.springframework.org/schema/integration/twitter/spring-integration-twitter-2.1.xsd http://www.springframework.org/schema/integration/amqp http://www.springframework.org/schema/integration/amqp/spring-integration-amqp-2.1.xsd http://www.springframework.org/schema/rabbit http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd http://schema.cloudfoundry.org/spring http://schema.cloudfoundry.org/spring/cloudfoundry-spring-0.8.xsd"> 
<int-twitter:search-inbound-channel-adapter id="twitter" query="cloud"> 
  <int:poller fixed-rate="5000" max-messages-per-poll="10"/> 
</int-twitter:search-inbound-channel-adapter> 
<int:transformer input-channel="twitter" expression="payload.fromUser + ': ' + payload.text" output-channel="rabbit"/> 
<int-amqp:outbound-channel-adapter id="rabbit" exchange-name="tweets"/> 
<rabbit:fanout-exchange name="tweets" durable="false"/> 
<rabbit:admin connection-factory="rabbitConnectionFactory"/> 
<rabbit:template id="amqpTemplate" connection-factory="rabbitConnectionFactory"/> 
<beans profile="default"> 
  <rabbit:connection-factory id="rabbitConnectionFactory"/> 
</beans> 
<beans profile="cloud"> 
  <cloud:rabbit-connection-factory id="rabbitConnectionFactory"/> 
</beans> 
</beans>

Using Spring Integration, we’ve set up an Inbound Twitter Channel Adapter to query Twitter for the word “cloud” every five seconds. The user and text from each tweet are published to an AMQP exchange named “tweets.”  The connection to Rabbit is controlled by the choice of “default” or “cloud” profile. The cloud profile uses the cloud namespace provided by the cloudfoundry-runtime library to create a connection to a single Rabbit service bound to the app. The only code in this project is a single class that activates the cloud profile and bootstraps the ApplicationContext. In order to run this example on Cloud Foundry, we’ll need to package up all of the dependencies. Enter the Maven Application Assembler Plugin.

<build> 
  <plugins> 
    <plugin> 
      <groupId>org.codehaus.mojo</groupId> 
      <artifactId>appassembler-maven-plugin</artifactId> 
      <version>1.1.1</version> 
      <executions> 
        <execution> 
          <phase>package</phase> 
          <goals> 
            <goal>assemble</goal> 
          </goals> 
          <configuration> 
            <assembleDirectory>target</assembleDirectory> 
            <programs> 
              <program> 
                <mainClass>org.springsource.samples.twitter.Demo</mainClass> 
              </program> 
            </programs> 
          </configuration> 
        </execution> 
      </executions> 
    </plugin> 
  </plugins> 
</build>

The generated start script, target/appassembler/bin/demo, uses the JAVA_OPTS environment variable to pass options to Java. When this app is deployed, Cloud Foundry will set JAVA_OPTS to a min and max heap size based on the memory reservation we provide. If we are running against a local cloud, remote debug options will also be added to JAVA_OPTS if we push or start the app with --debug. That’s right, you can remote debug your standalone Java applications with your favorite IDE. Now we’ll do a “mvn package” and we’re ready to deploy to Cloud Foundry from our new distribution directory:

mycomp:twitter2rabbit$ vmc push twitter2rabbit --path=target/appassembler
Detected a Standalone Application, is this correct? [Yn]:
1: java
2: node
3: node06
4: ruby18
5: ruby19
Select Runtime :
Selected java
Start Command: bin/demo
Application Deployed URL [None]:
Memory reservation (128M, 256M, 512M, 1G, 2G) [512M]:
How many instances? [1]:
Create services to bind to 'twitter2rabbit'? [yN]: y
1: mongodb
2: mysql
3: postgresql
4: rabbitmq
5: redis
What kind of service?: 4
Specify the name of the service [rabbitmq-f7939]: twitter-rabbit
Create another? [yN]:
Would you like to save this configuration? [yN]: y
Creating Application: OK
Creating Service [twitter-rabbit]: OK
Binding Service [twitter-rabbit]: OK
Uploading Application:
 Checking for available resources: OK
 Processing resources: OK
 Packing application: OK
 Uploading (32K): OK
Push Status: OK
Staging Application 'twitter2rabbit': OK
Starting Application 'twitter2rabbit': OK

VMC has detected that twitter2rabbit is a standalone application, and we selected the Java runtime. We specified the command “bin/demo” to start the worker using the generated script, and we saved the manifest file for future deployments. The app should now be running and publishing tweets to an exchange on the twitter-rabbit service. Now let’s deploy the front-end Node.js web application to display these tweets.

mycomp:twitter2rabbit$ cd ../rabbit2socks
mycomp:rabbit2socks$ npm install
mycomp:rabbit2socks$ vmc push mytwittersearch --runtime=node06
Detected a Node.js Application, is this correct? [Yn]:
Application Deployed URL [mytwittersearch.cloudfoundry.com]:
Memory reservation (128M, 256M, 512M, 1G, 2G) [64M]:
How many instances? [1]:
Bind existing services to 'mytwittersearch'? [yN]: y
1: twitter-rabbit
Which one?: 1
Create services to bind to 'mytwittersearch'? [yN]:
Would you like to save this configuration? [yN]: y
Creating Application: OK
Binding Service [twitter-rabbit]: OK
Uploading Application:
  Checking for available resources: OK
  Processing resources: OK
  Packing application: OK
  Uploading (18K): OK
Push Status: OK
Staging Application 'mytwittersearch': OK
Starting Application 'mytwittersearch': OK

The application is pushed and bound to the same twitter-rabbit service as the Java worker, twitter2rabbit. The app will consume tweets from this Rabbit service and push them to the browser using SockJS. Let’s launch the website and watch as tweets start popping up! Here’s a screenshot of the Twitter traffic when I ran my app.

And there you have it! The web page is dynamically updated with results from the Spring Integration-powered worker app. Clone the sample application and try it yourself. The readme also contains instructions on building and deploying with Gradle (using the Gradle Application Plugin to create a distribution) instead of Maven. There are many other ways to use Spring to create worker apps, including using the Spring Task Scheduler abstraction or Spring Batch. For more examples of Spring workers on Cloud Foundry, check out Josh Long’s post on the SpringSource blog. In the next and final post of the series, we will look at another category of standalone apps: self-executing web applications. With standalone application support, the possibilities are endless.

- Jennifer Hickey
The Cloud Foundry Team

Don’t have a Cloud Foundry account yet?  Sign up for free today

Running Resque Workers on Cloud Foundry

We introduced Cloud Foundry’s new “standalone” applications feature in the first post in this four-part series. In this second installment, we will look at the most common use of a standalone application: the worker process. Workers can be used for all kinds of asynchronous background jobs, such as updating search indexes, emailing users whose password reset deadline is approaching, performing a database backup to persistent storage, or uploading new customer data from external storage. In this post, we will walk through an example of deploying workers to Cloud Foundry using Resque.

Resque Workers on Cloud Foundry

Let’s start by cloning the Resque Demo Example.

mycomp:dev$ git clone git://github.com/defunkt/resque.git
mycomp:dev$ cd resque/examples/demo

Let’s add a Gemfile to the example to ensure that Cloud Foundry can find all required gems:

 
source "http://rubygems.org" 
gem 'sinatra' 
gem 'resque' 
gem 'rake' 
gem 'json'

We’ll run “bundle install” and “bundle package” to package the gems in vendor/cache, and we’re ready to deploy.  The resque server is a Rack app, so we’ll deploy it to Cloud Foundry as such.
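
For reference, the gem packaging step from above is just:

mycomp:demo$ bundle install
mycomp:demo$ bundle package

With the gems vendored in vendor/cache, we can push the Rack app: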

mycomp:demo$ vmc push resque-server
Would you like to deploy from the current directory? [Yn]:
Detected a Rack Application, is this correct? [Yn]:
Application Deployed URL [resque-server.cloudfoundry.com]: 
Memory reservation (128M, 256M, 512M, 1G, 2G) [128M]:
How many instances? [1]:
Create services to bind to 'resque-server'? [yN]: y
1: mongodb
2: mysql
3: postgresql
4: rabbitmq
5: redis
What kind of service?: 5
Specify the name of the service [redis-2a462]: redis-work-queue
Create another? [yN]:
Would you like to save this configuration? [yN]: y
Manifest written to manifest.yml.
Creating Application: OK
Creating Service [redis-work-queue]: OK
Binding Service [redis-work-queue]: OK
Uploading Application:
Checking for available resources: OK
Processing resources: OK
Packing application: OK
Uploading (21K): OK
Push Status: OK
Staging Application 'resque-server': OK
Starting Application 'resque-server': OK

Let’s have a look at the resque-server app and add some jobs to the queue:

Now that we have some jobs, it’s time to deploy some workers!  First, we need to rename the generated manifest.yml for the Rack app, so it won’t automatically be used in the push.  We can use it again later by doing a “vmc push --manifest server-manifest.yml”.  Now, let’s push the app again as a standalone worker app.

mycomp:demo$ mv manifest.yml server-manifest.yml
mycomp:demo$ vmc push resque-worker
Would you like to deploy from the current directory? [Yn]:
Detected a Rack Application, is this correct? [Yn]: n
1: Rails
2: Spring
3: Grails
4: Lift
5: JavaWeb
6: Standalone
7: Sinatra
8: Node
9: Rack
Select Application Type: 6
Selected Standalone Application
1: java
2: node
3: node06
4: ruby18
5: ruby19
Select Runtime [ruby18]:
Selected ruby18
Start Command: bundle exec rake VERBOSE=true QUEUE=default resque:work
Application Deployed URL [None]:
Memory reservation (128M, 256M, 512M, 1G, 2G) [128M]:
How many instances? [1]:
Bind existing services to 'resque-worker'? [yN]: y
1: redis-work-queue
Which one?: 1
Bind another? [yN]:
Create services to bind to 'resque-worker'? [yN]:
Would you like to save this configuration? [yN]: y
Manifest written to manifest.yml.
Creating Application: OK
Binding Service [redis-work-queue]: OK
Uploading Application:
Checking for available resources: OK
Processing resources: OK
Packing application: OK
Uploading (0K): OK
Push Status: OK
Staging Application 'resque-worker': OK
Starting Application 'resque-worker': OK

So we’ve pushed resque-worker as a standalone app with a Ruby runtime. We gave the command “bundle exec rake VERBOSE=true QUEUE=default resque:work” to start the worker. We recommend using bundle exec to ensure that all required gems are available. Since resque-worker does not have a web front-end, we selected “None” for URL. Lastly, we bound the app to the same Redis service used by resque-server. If you’ve perused the resque demo example, you may have noticed that it is set up to connect to a local Redis service. However, we didn’t change the code before we pushed it. How will the app connect to the provisioned Redis service? Since we used the Ruby runtime provided by Cloud Foundry, the app will benefit from the new Ruby auto-reconfiguration support. Cloud Foundry will automatically replace the local Redis connection with a connection to the Redis service we bound to the application!
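
For reference, a Resque job in the spirit of the demo’s Demo::Job looks roughly like this (a sketch, not the demo’s exact source):

require 'resque'

module Demo
  # Jobs are pulled from the "default" queue, matching QUEUE=default in the start command.
  class Job
    @queue = :default

    def self.perform(params = {})
      sleep 1
      puts "Processed a job!"   # the line we expect to see in the worker's stdout log
    end
  end
end

# The Rack front-end enqueues work with something like:
#   Resque.enqueue(Demo::Job, {})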

Let’s check the logs and see if the worker completed that job:

mycomp:demo$ vmc logs resque-worker
====> /logs/stdout.log <====

Loading Redis auto-reconfiguration.
*** Starting worker ubuntu:10245:default
Auto-reconfiguring Redis.
*** got: (Job{default} | Demo::Job | [{}])
Processed a job!
*** done: (Job{default} | Demo::Job | [{}])
*** got: (Job{default} | Demo::Job | [{}])
Processed a job!
*** done: (Job{default} | Demo::Job | [{}])
And we can verify that the worker has registered through the web interface:

We can even scale the workers up.

mycomp:demo$ vmc instances resque-worker +2
Scaling Application instances up to 3: OK

And now the web interface shows three workers:

And there you have it! We can now deploy Resque workers as standalone apps on Cloud Foundry. Clone the Cloud Foundry resque-sample and try it out for yourself!

Conclusion

Cloud Foundry now provides improved Resque support through standalone applications, as well as support for other Ruby worker libraries or apps. If you can package all the bits and provide a start command, you can run it on Cloud Foundry! In the next installment in this series, we will explore another example of workers in action using Spring Integration. Stay tuned!

- Jennifer Hickey
The Cloud Foundry Team
Don’t have a Cloud Foundry account yet?  Sign up for free today

Cloud Foundry Improves Support For Background Processing

Cloud Foundry has significantly enhanced support for worker applications that perform background processing by allowing applications to run on CloudFoundry.com without a web container. Cloud Foundry applications are no longer limited to web applications that respond to HTTP requests. Instead, they can be run as executable or “standalone” applications. A standalone application is one that does not require a provided framework or container.

Many developers create distributed applications that have workers to perform specific functions and communicate via a data or messaging system, such as those developed with Spring Batch, Spring Integration, Resque, or Delayed Job. Now Cloud Foundry supports running these worker components by allowing you to push a directory or single file without choosing a pre-defined framework. Simply provide the command required to run your script or executable, choose a runtime, and you’re done. Besides background processing functions, the standalone application support enables these other types of applications as well:

  1. Container-less non-Servlet applications, such as those developed with Netty or Grizzly
  2. Web apps that run with their own bundled containers, such as Jetty

In this blog series, we will walk through several example deployments of standalone apps. We will start with a simple Hello World to illustrate the deployment steps. In future posts, we will show examples of worker apps, distributed apps, and bring-your-own-container and container-less Web apps. While the apps may vary in use case and implementation, the deployment procedure remains the same.

All of these apps are meant to be long-running, as with any other app on Cloud Foundry (meaning they can be scaled, will be monitored for health, restarted if crashed, etc.). We do not yet have support for short-lived background tasks or scheduled tasks. However, we encourage you to keep watching this space!

Getting Started with Standalone Apps on Cloud Foundry

First, install or update your Cloud Foundry command line tool (‘VMC’) to the latest version using the following command:

gem install vmc

You can verify that you got the right version using:

vmc -v

which should show the version to be 0.3.17 or higher.

Let’s start by deploying a simple Hello World Ruby application.

mycomp:$ cd simple-ruby-app
mycomp:$ ls
hello-world.rb
mycomp:$ more hello-world.rb
loop {
  puts 'Hello world'
  sleep 5
}

Since we need a long-running application, this script will print “Hello World” every 5 seconds until stopped. Let’s push this to Cloud Foundry using vmc:

mycomp:$ vmc push helloworld
Would you like to deploy from the current directory? [Yn]:
Detected a Standalone Application, is this correct? [Yn]:
1: java
2: node
3: node06
4: ruby18
5: ruby19
Select Runtime [ruby18]:
Selected ruby18
Start Command: ruby hello-world.rb
Application Deployed URL [None]:
Memory reservation (128M, 256M, 512M, 1G, 2G) [128M]:
How many instances? [1]:
Create services to bind to 'helloworld'? [yN]:
Would you like to save this configuration? [yN]: y
Manifest written to manifest.yml.
Creating Application: OK
Uploading Application:
Checking for available resources: OK
Packing application: OK
Uploading (1K): OK
Push Status: OK
Staging Application 'helloworld': OK
Starting Application 'helloworld': OK

So, what just happened?
1. vmc detected that the app was a “Standalone Application” (due to the fact that no other Framework support was detected).
2. We were asked to provide a runtime. Since the app needs Ruby to run, we chose the “ruby18” runtime (which vmc detected as default).
3. We provided a command to use for starting the application. Since we’ve chosen a Ruby 1.8 runtime, we don’t need to provide the fully qualified path to Ruby. Cloud Foundry will automatically add Ruby 1.8 to the application’s path.
4. We chose “None” for the application URL. This will run the application without a Web port or URL. There are times when we will want a URL and Web port for a standalone application, as we’ll see in a later blog post.
5. vmc pushed the entire contents of the working directory to Cloud Foundry. Since we only had hello-world.rb in the directory, we could have also executed “vmc push --path ./hello-world.rb”. The --path option comes in handy when working with distribution zip files, as we’ll see in an upcoming post.

Let’s have a look at the application’s logs:

mycomp:$  vmc logs helloworld
====> /logs/stdout.log <====

Hello world
Hello world

As you can see, helloworld can be managed just like any other Cloud Foundry application:

mycomp:$  vmc instances helloworld +2
Scaling Application instances up to 3: OK
mycomp:$ vmc instances helloworld

+-------+---------+--------------------+
| Index | State   | Start Time         |
+-------+---------+--------------------+
| 0     | RUNNING | 04/20/2012 04:47PM |
| 1     | RUNNING | 04/20/2012 04:48PM |
| 2     | RUNNING | 04/20/2012 04:48PM |
+-------+---------+--------------------+

Let’s look at the logs again for two of the instances:

mycomp:$  vmc logs helloworld --instance 0
====> /logs/stdout.log <====
Hello world
Hello world
mycomp:$  vmc logs helloworld --instance 1
====> /logs/stdout.log <====
Hello world
Hello world

Standalone app deployment manifest

Let’s take a look at the manifest file we generated with that vmc push:

mycomp:$ more manifest.yml
---
applications:
  .:
    name: helloworld
    framework:
      name: standalone
      info:
        description: Standalone Application
        mem: 128M
    runtime: ruby18
    command: ruby hello-world.rb
    url:
    mem: 128M
    instances: 1

Seems pretty straightforward. The app is deployed against a “standalone” framework, with a command and ruby18 runtime. To save time, we’ll be sure to save this manifest file for future deployments.
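
With manifest.yml saved next to the app, a later redeploy should need few or no prompts; it’s simply:

mycomp:$ vmc push

vmc reads the saved manifest and reuses the name, framework, runtime, command, and memory settings.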

Tips and Tricks

You should now be able to deploy any long-running app that you can package, using any of Cloud Foundry’s provided runtimes. However, there are some tips and tricks to getting the best experience from Cloud Foundry:

  1. Use the VCAP_APP_PORT environment variable if your application requires a web port.
  2. Connect your application to Cloud Foundry services using provided libraries (see below) or the VCAP_SERVICES environment variable (the value is JSON; see the sketch after these tips).
JVM Applications
  1. Package your application using a build plugin that creates a distribution zip file or directory.
    Maven Appassembler, Gradle Application Plugin, and SBT package-dist are good plugins for creating a distribution, and we’ll show examples of all of these in future posts. It is best not to package your entire application in a single jar file, as you will not be able to take advantage of Cloud Foundry’s incremental upload capability.
  2. Always include $JAVA_OPTS in your Java start commands.
    When you select your application’s memory reservation through VMC, Cloud Foundry will set the JAVA_OPTS environment variable with corresponding min and max heap sizes, therefore you should include $JAVA_OPTS in your Java start commands (for example, “java $JAVA_OPTS -jar main.jar”). The start scripts generated by Maven Appassembler and Gradle Application Plugin already include JAVA_OPTS. Using JAVA_OPTS will also allow you to start your app in debug mode on local clouds, using “vmc start --debug”.
  3. Use the cloudfoundry-runtime library to connect your application to Cloud Foundry services.
Ruby Applications
  1. Always include a Gemfile.lock to ensure that all application dependencies are resolved by Cloud Foundry.
  2. While not required, we recommend running “bundle package” before deploying your application. This will improve your application start time, as Cloud Foundry will not need to check its cache or download gems from rubygems.org.
  3. Standalone Ruby applications can take advantage of Ruby Auto-Reconfiguration.  To make your own connections to Cloud Foundry services, use the cf-runtime gem (see the sketch after this list).
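
To make the last two Ruby tips concrete, here is a small sketch showing both the raw VCAP_SERVICES route and the cf-runtime route for a bound Redis service. Treat the JSON layout (labels such as “redis-2.2” with a credentials hash) and the exact keys returned by service_props as assumptions to verify against your own environment:

require 'json'
require 'redis'
require 'cfruntime/properties'   # provided by the cf-runtime gem

# Option 1: parse the raw JSON in VCAP_SERVICES and dig out the redis credentials.
# (Assumed layout: { "redis-2.2" => [ { "credentials" => { "host" => ..., "port" => ... } } ] })
services = JSON.parse(ENV['VCAP_SERVICES'] || '{}')
entry = services.detect { |label, _| label =~ /^redis/ }
creds = entry.last.first['credentials'] if entry
puts "Redis via VCAP_SERVICES: #{creds['host']}:#{creds['port']}" if creds

# Option 2: let cf-runtime resolve the single bound redis service
# (service_props is the same call used by the auto-reconfiguration code).
props = CFRuntime::CloudApp.service_props('redis')
redis = Redis.new(:host => props[:host], :port => props[:port], :password => props[:password])
redis.set('worker:started_at', Time.now.to_s)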

Conclusion

In this post, we used a simple example to get up and running quickly with standalone apps on Cloud Foundry. In the next installments of this four part series, we will have an in-depth look at deploying more complex standalone apps.

- Jennifer Hickey

The Cloud Foundry Team

Don’t have a Cloud Foundry account yet?  Sign up for free today

Using Cloud Foundry Services with Ruby: Part 1 – Auto-reconfiguration

Right from the launch, Cloud Foundry supported auto-reconfiguration of Spring and Rails apps that use a relational database service.  This allowed deploying such an app without changing a single line of code.  Recently, we extended this support for Spring apps to cover all services (Redis, Mongo, and Rabbit).  We are now extending this support for all services for Rails and making this available for Sinatra apps as well.  In this blog, we will explore how auto-reconfiguration works with Rails and Sinatra applications.

Auto-reconfiguration in action

To demonstrate auto-reconfiguration, we will grab an application from github and deploy it to Cloud Foundry without modification.  Let’s use lamernews, a Sinatra app that uses Redis.

mycomp:lamernews$ vmc push lamernews 
Would you like to deploy from the current directory? [Yn]: 
Application Deployed URL ["lamernews.cloudfoundry.com"]: 
Detected a Sinatra Application, is this correct? [Yn]: 
Memory Reservation ("64M", "128M", "256M", "512M", "1G") ["128M"]: 
Creating Application: OK 
Would you like to bind any services to 'lamernews'? [yN]: y 
The following system services are available 
1: mongodb 
2: mysql 
3: neo4j 
4: postgresql 
5: redis 
Please select one you wish to provision: 5 
Specify the name of the service ["redis-52216"]: 
Creating Service: OK 
Binding Service [redis-52216]: OK 
Uploading Application: 
Checking for available resources: OK 
Processing resources: OK 
Packing application: OK 
Uploading (1K): OK 
Push Status: OK 
Staging Application: OK 
Starting Application: OK 

Looks like the app deployed successfully.   Lamernews uses Redis to store comments, so let’s comment on a post and verify it stores successfully.


Now I press “Send comment”, and it looks like my comment was applied.

Let’s take a look at the lamernews code that initializes the Redis connection:

 
RedisHost = "127.0.0.1" 
RedisPort = 10000 
$r = Redis.new(:host => RedisHost, :port => RedisPort) if !$r 

As you can see, the code is attempting to connect to Redis on localhost; however, it worked just fine when we deployed to Cloud Foundry.  How is this possible?  Cloud Foundry will automatically detect initialization of several popular clients anywhere in your code and swap out your connection parameters for those of a service bound to your application. Read on to find out more about how this works!

Auto-reconfiguration for Sinatra

Your application consists of business logic and interaction with services such as database and messaging.  In a Sinatra application, you may initialize these services in Sinatra::Base#configure(). However, this is certainly not a requirement.  You are free to initialize your services wherever you want, perhaps lazily in response to a request. Additionally, there are several different client libraries you can use for connection to data and messaging services (ActiveRecord, DataMapper, Mongo Ruby Driver, MongoMapper, etc). For example, consider the following code that creates a Redis client:

require 'redis' 
module Demo 
  class App < Sinatra::Base 
    configure do 
      redis = Redis.new(:host => '127.0.0.1', :port => '6379') 
    end 
    ... 
  end 
end

We can make one easy observation: The Redis host and port point to a server on localhost.  When you push this application to Cloud Foundry and bind a Redis service, the URL for that service is not going to be 127.0.0.1:6379!  So without an additional mechanism, such an application would fail on startup.  This is where the auto-reconfiguration mechanism comes into play.  The auto-reconfiguration mechanism leverages Ruby metaprogramming to intercept the Redis initialization and replace the connection parameters with those of the Redis service bound to the application.  The result is that the user application works in local deployment and in Cloud Foundry without any change. When your Sinatra application is staged during the deployment process, Cloud Foundry will make two modifications:

  1. It will add an additional cf-autoconfig gem to your Bundle
  2. It will wrap the execution of your main Sinatra file (e.g. app.rb) in an auto_stage.rb file that ensures that all dynamic class modification is done prior to application execution.

Auto-reconfiguration for Rails

Cloud Foundry already provides auto-reconfiguration of your database in Rails by modifying the production settings in your config/database.yml during staging.  We have now added auto-reconfiguration of Mongo, Redis, and Rabbit clients as well. For example, you may have the following in config/initializers/redis.rb:

 $redis = Redis.new(:host => '127.0.0.1', :port => 6379, :password => 'mypass') 

Once again, we can see that the Redis host and port point to a server on localhost.  Cloud Foundry will automatically replace these localhost connection parameters with those of the Redis service bound to your application. While it’s fairly common to put these types of connections in a Rails Initializer File, auto-reconfiguration should work just as well if you create the connection somewhere else within your application. When your Rails application is staged during the deployment process, Cloud Foundry will make two modifications:

  1. It will add an additional cf-autoconfig gem to your Bundle
  2. It will add an Initializer file to config/initializers that ensures that all dynamic class modification is done prior to loading other Initializers (and thus before your application executes).

Supported Clients

The following table shows the supported clients for auto-reconfiguration.

Client                                   Minimal Supported Version
redis-rb                                 2.0
Mongo Ruby Driver                        1.2.0
amqp                                     0.8
carrot                                   1.0
mysql2 (Sinatra only)                    0.2.7
pg PostgreSQL client (Sinatra only)      0.11.0

In some cases, you don’t need to be using these clients directly. For example, the popular object mapper for Mongo, mongo_mapper, uses the Mongo Ruby Driver.  Therefore, if your application uses mongo_mapper, Cloud Foundry can auto-reconfigure it to connect to your Mongo service. Note that the mysql2 and pg gems are listed as Sinatra only.  This is because we auto-reconfigure relational database connections in Rails without metaprogramming, by modifying your production database settings in your database.yml file.
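
As a rough illustration (the values below are placeholders, not generated output), a production block like this:

production:
  adapter: mysql2
  database: myblog_production
  host: 127.0.0.1
  username: root
  password: secret

is rewritten during staging so that its connection settings point at the MySQL or PostgreSQL service bound to your application.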

Under the Hood

As mentioned, we leverage Ruby metaprogramming to intercept a common set of method calls that create connections.  We then replace the connection parameters with those of the service bound to your application.  We do this with some well-known metaprogramming patterns: Open Class or Class Extension and Around Alias.  These are described thoroughly in the excellent book Metaprogramming Ruby. Here is an example from our Redis auto-reconfiguration support:

 
require 'cfruntime/properties' 

module AutoReconfiguration 
  module Redis 
    def self.included(base) 
      base.send(:alias_method, :original_initialize, :initialize) 
      base.send(:alias_method, :initialize, :initialize_with_cf) 
    end 

    def initialize_with_cf(options = {}) 
      service_props = CFRuntime::CloudApp.service_props('redis') 
      cfoptions = options 
      cfoptions[:host] = service_props[:host] 
      cfoptions[:port] = service_props[:port] 
      cfoptions[:password] = service_props[:password] 
      original_initialize cfoptions 
    end 
  end 
end 

require 'redis' 

class Redis 
  include AutoReconfiguration::Redis 
end

The code starts by opening the Redis class and adding the methods defined in our AutoReconfiguration::Redis module.  The first method, self.included, sets up an Around Alias that directs all of your calls to Redis.new to the new initialize_with_cf method.  This method uses our cf-runtime gem to look up the Redis service connection properties and then calls the original Redis initialize method with the changed parameters. We use a similar approach for each supported client.  Feel free to browse the code in our github repo for more details.  Let us know if there are other clients or hook points we should support (feel free to submit a pull request!).

Limitations

Auto-reconfiguration of services works only if there is exactly one service of a given service type.  For example, you may bind only one relational database service (MySQL or Postgres) to an application. If an application doesn’t fit within these limits, auto-reconfiguration will not take place.  In those cases, you can still take advantage of the cf-runtime gem described in the next blog post to manually configure service access. The auto-reconfiguration mechanism expects typical Ruby applications.  If your application configuration is complex, it may not work.  In those cases, you can opt out of auto-reconfiguration as we will describe next.

Opting out of auto-reconfiguration

There may be situations where you would like to opt out of auto-reconfiguration.  Cloud Foundry offers a few ways to opt out of the auto-reconfiguration mechanism.

  1. Create a file in your Sinatra or Rails application called “config/cloudfoundry.yml”. Add the entry “autoconfig: false” (shown after this list).
  2. Include the ‘cf-runtime’ gem in your Gemfile. Cloud Foundry treats this as an opt-out, since applications typically either want the auto-reconfiguration behavior or want to take complete control over service creation. We do not see the value in auto-reconfiguring some services and manually configuring others. If you are using cf-runtime and not using Bundler, you can still opt out of auto-reconfiguration by creating the cloudfoundry.yml file as described above.
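
For option 1, the opt-out file is simply:

config/cloudfoundry.yml:

autoconfig: false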

Conclusion

Auto-reconfiguration in Cloud Foundry is a wonderful way to get started deploying your Rails and Sinatra apps quickly.  As your application matures or makes use of multiple services, you may need finer control of your service connections.  In the next blog in this series, Thomas Risberg will explain how to use the new cf-runtime gem for simplified connections to Cloud Foundry services.

– Jennifer Hickey, The Cloud Foundry Team

Don’t have a Cloud Foundry account yet?  Sign up for free today

Cloud Foundry Now Supports the Rails Console

Ruby and Rails developers can now remotely access the popular Rails console using the Cloud Foundry command line tool (VMC). This new feature enables inspection of the app’s runtime environment on Cloud Foundry, troubleshooting of application issues at runtime, and even modifying data “on the fly” for one-off admin tasks.  Using the new vmc rails-console command, developers can target any Cloud Foundry instance, including the upcoming release of Micro Cloud Foundry.

Getting Started

First, install or update your Cloud Foundry command line tool (‘VMC’) to the latest preview version using the following command:

gem install vmc --pre

You can verify that you got the right version using:

vmc -v

which should show the version to be 0.3.16.beta.3 or higher.

Next, push or update a Rails app using vmc push.

To access the console, run:

vmc rails-console appname

That’s all there is to it!  Read on for a more detailed example…

Using the Rails Console

Let’s get started by deploying a Rails application to Cloud Foundry. We’ll use Enki, an open source blog engine.   I’ll use the cloudfoundry-samples fork, where I’ve added the mysql2 gem to enki’s Gemfile.  Now I’m ready to push to Cloud Foundry:

mycomp:enki$ vmc push mynewblog
Would you like to deploy from the current directory? [Yn]:
Application Deployed URL [mynewblog.cloudfoundry.com]:
Detected a Rails Application, is this correct? [Yn]:
Memory Reservation ("64M", "128M", "256M", "512M", "1G") ["256M"]:
Creating Application: OK
Create services to bind to 'mynewblog'? [yN]: y
1: mongodb
2: mysql
3: postgresql
4: rabbitmq
5: redis
What kind of service?: 2
Specify the name of the service [mysql-8f1d2]:
Creating Application: OK
Creating Service [mysql-8f1d2]: OK
Binding Service [mysql-8f1d2]: OK
Uploading Application:
  Checking for available resources: OK
  Processing resources: OK
  Packing application: OK
  Uploading (20M): OK
Push Status: OK
Staging Application: OK
Starting Application: OK

Looks like the app deployed successfully.  I saved the manifest generated from this vmc output, so you can do just a simple “vmc push” with the application name and URL after cloning the sample.

I’ll make my first blog post about this new Rails Console support and wait for the comments to start rolling in…

Unfortunately, it looks like I have a nasty comment from Joe that needs deleting.  Let’s fire up the Rails Console through vmc to easily get rid of that comment.  The vmc rails-console command can be run from any directory.

mycomp:enki$ vmc rails-console mynewblog
Deploying tunnel application 'caldecott'.
Uploading Application:
  Checking for available resources: OK
  Packing application: OK
  Uploading (1K): OK
Push Status: OK
Staging Application 'caldecott': OK
Starting Application 'caldecott': OK
Connecting to 'mynewblog' console: OK

irb():001:0>

Since this is the first time I’ve run Rails Console, vmc will deploy the caldecott application for me, which helps tunnel communication to my remote Rails application.  After that, it fires up the console and waits for input.

irb():001:0> @comment = Comment.find_by_author("Joe")
#<Comment id: 7, post_id: 1, author: "Joe", body: "I have something real...">
irb():002:0> @comment.delete
#<Comment id: 7, post_id: 1, author: "Joe", body: "I have something real...">
irb():003:0> @comment = Comment.find_by_author("Joe")
nil
irb():004:0> exit

In this console session, I use Comment.find_by_author to find Joe’s comment, delete the comment, and then check to ensure that Joe’s comment has been removed.   Let’s refresh the web page and see that it’s gone:

This is just one small example of what can be done with Rails Console.  I could use it for all sorts of diagnostics as well, such as perusing the database, running ruby commands to inspect state, or interacting with controller methods to check responsiveness.  For more information, check out the Rails Console documentation or this handy screencast from RailsCasts.

Enabling the Rails Console

The console will automatically be available when you push any Rails application.  If you already have a Rails application deployed to Cloud Foundry, you will need to update or redeploy your app to stage the console support.
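
If your Rails app was already deployed before this feature, picking up the console support is typically just an in-place update, along the lines of:

mycomp:enki$ vmc update mynewblog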

What are you waiting for? Go try it out!

Ready to start interacting with your Rails app on Cloud Foundry? Make sure to update to the latest vmc gem, and you should be ready to give it a whirl.  Please feel free to direct any suggestions or feedback to the support forums!

- Jennifer Hickey
The Cloud Foundry Team

Don’t have a Cloud Foundry account yet?  Sign up for free today

 
