This guide documents all of the cartridges that are distributed with OpenShift Origin. You can learn about creating your own cartridges by referring to the Cartridge Developers Guide.
Warning: Cartridges and Persistent Storage. Every time you push, everything in your remote repo directory is recreated. Store long-term items (like an SQLite database) in the OpenShift data directory, which persists between pushes of your repo. The OpenShift data directory can be found via the OPENSHIFT_DATA_DIR environment variable.
[[10gen-mms-agent]]
== 10gen MMS Agent

This cartridge provides the 10gen MMS agent on OpenShift.
To embed the 10gen-mms-agent into your application, a few steps are necessary:

- Register at https://mms.mongodb.com
- Go to Settings → Monitoring Agent → Other Linux
- Download the monitoring agent tarball (e.g. with curl) from the link there
- Copy the tarball into the .openshift/mms/ folder and git push it
- Set an environment variable called OPENSHIFT_MMS_API_KEY to the API key found under Settings → API Settings:

      rhc set-env OPENSHIFT_MMS_API_KEY=your_mms_api_key -a app

- Embed the 10gen cartridge.
- Go to https://mms.mongodb.com and add your host by entering the MongoDB host, port, and credentials into the form.
== Cron

This cartridge adds periodic job execution functionality to your OpenShift application.

To add this cartridge to your application, you can either add it when you create your application:

    rhc app create <APP> ruby-1.9 cron

or add it to your existing application:

    rhc cartridge add cron -a <APP>
The jobs are organized in the .openshift/cron directory of your application's source. Depending on how often you would like to execute a job, you place it in one of the subdirectories minutely, hourly, daily, weekly, or monthly.
The jobs are executed directly. If a job is a script, use a "shebang" line to specify the interpreter:

    #!/bin/bash
    date > $OPENSHIFT_RUBY_LOG_DIR/last_date_cron_ran
Note: The jobs need to be executable:

    chmod +x .openshift/cron/minutely/awesome_job
Once you have created the job, add it to your application repository, commit, and push:

    git add .openshift/cron/minutely/awesome_job
    git commit -m 'Execute bit set for cron job'
    git push
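Putting the steps together, a job can be created and made executable like this; the job name and the log location are assumptions for illustration only:

```shell
# Create a hypothetical minutely job that appends a timestamp to a log file.
# OPENSHIFT_DATA_DIR is the persistent data directory on a gear; /tmp is a
# fallback so the script also runs outside a gear.
mkdir -p .openshift/cron/minutely
cat > .openshift/cron/minutely/awesome_job <<'EOF'
#!/bin/bash
date >> "${OPENSHIFT_DATA_DIR:-/tmp}/cron.log"
EOF
chmod +x .openshift/cron/minutely/awesome_job
```

After creating the file, add, commit, and push it as shown above so the cron cartridge picks it up.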
The jobs are run by the node's cron at the specified frequency; however, the exact timing is not guaranteed. If this unpredictability is not desirable, you can instrument your job to inspect the date and/or time before doing any work. For example, the following minutely job does useful work only at 12 minutes after the hour:

    #!/bin/bash
    minute=$(date '+%M')
    if [ "$minute" != 12 ]; then
        exit
    fi
    # rest of the script
== DIY

The diy cartridge provides a minimal, free-form scaffolding which leaves all details of the cartridge to the application developer.

- Add the framework of your choice to your repo.
- Modify .openshift/action_hooks/start to start your application. The application is required to bind to $OPENSHIFT_DIY_IP:$OPENSHIFT_DIY_PORT.
- Modify .openshift/action_hooks/stop to stop your application.
- Commit and push your changes.
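As a sketch, a start hook for a trivial app might look like the following; the server command (python3 -m http.server) is purely an assumption standing in for your framework's own start command:

```shell
#!/bin/bash
# Hypothetical .openshift/action_hooks/start: launch the app in the background
# and remember its PID so the stop hook can kill it later. OPENSHIFT_DIY_IP
# and OPENSHIFT_DIY_PORT are set by the platform; the fallbacks let this run
# locally for testing.
nohup python3 -m http.server "${OPENSHIFT_DIY_PORT:-8080}" \
    --bind "${OPENSHIFT_DIY_IP:-127.0.0.1}" \
    > "${OPENSHIFT_DIY_LOG_DIR:-/tmp}/server.log" 2>&1 &
echo $! > "${OPENSHIFT_DIY_LOG_DIR:-/tmp}/server.pid"
```

A matching stop hook would read server.pid and kill that process.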
    static/            Externally exposed static content goes here
    .openshift/
        action_hooks/  See the Action Hooks documentation (1)
            start      Custom action hook used to start your application
            stop       Custom action hook used to stop your application

(1) Action Hooks documentation
Note: Please leave the static directory in place (alter but do not delete), but feel free to create additional directories if needed.
The diy cartridge provides the following environment variables to reference for ease of use:

- OPENSHIFT_DIY_IP: The IP address assigned to the application
- OPENSHIFT_DIY_PORT: The port assigned to the application

For more information about environment variables, consult the Users Guide.
== JBossAS

Provides the JBossAS application server on OpenShift.

    deployments/       Location for built WARs (details below)
    src/               Example Maven source structure
    pom.xml            Example Maven build file
    .openshift/        Location for OpenShift-specific files
        config/        Location for configuration files such as standalone.xml
        action_hooks/  See the Action Hooks documentation (1)
        markers/       See the Markers section below

(1) Action Hooks documentation
There are two options for deploying content to the JBoss Application Server within OpenShift. Both options can be used together (i.e. build one archive from source and deploy others pre-built).
Note: Under most circumstances, the .dodeploy file markers should not be added to the deployments directory. These lifecycle files will be created in the runtime deployments directory (visible by SSHing into the application), but should not be added to the git repo.
Method 1 (Preferred)

You can upload your content in a Maven src structure, as in this sample project, and on git push have the application built and deployed. For this to work you'll need your pom.xml at the root of your repository and a maven-war-plugin, as in this sample, to move the output from the build to the deployments directory. By default the warName is ROOT within pom.xml. This will cause the webapp contents to be rendered at http://app_name-namespace.rhcloud.com/. If you change the warName in pom.xml to app_name, your base URL would then become http://app_name-namespace.rhcloud.com/app_name.
Note: If you are building locally, you'll also want to add any output wars/ears under deployments from the build to your .gitignore file.
Note: If you are running scaled AS7, then you need an application deployed to the root context (i.e. http://app_name-namespace.rhcloud.com/) for the HAProxy load-balancer to recognize that the AS7 instance is active.
Method 2

You can git push pre-built wars into deployments/. To do this with the default repo, you'll want to first run:

    git rm -r src/ pom.xml

from the root of your repo.
Basic workflows for deploying pre-built content (each operation will require associated git add/commit/push operations to take effect):

- Add new zipped content and deploy it:

      cp target/example.war deployments/

- Add new unzipped/exploded content and deploy it:

  1. Copy the content into place:

         cp -r target/example.war/ deployments/

  2. Edit .openshift/config/standalone.xml and replace

         <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000" deployment-timeout="300"/>

     with

         <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000" deployment-timeout="300" auto-deploy-exploded="true"/>

- Undeploy currently deployed content:

      git rm deployments/example.war

- Replace currently deployed zipped content with a new version and deploy it:

      cp target/example.war deployments/

- Replace currently deployed unzipped content with a new version and deploy it:

  1. Remove the old content:

         git rm -rf deployments/example.war/

  2. Copy in the new content:

         cp -r target/example.war/ deployments/
Note: You can get the information in the URI above by running 'rhc domain show'.
If you have already committed large files to your git repo, you can rewrite or reset the history of those files in git to an earlier point in time and then 'git push --force' to apply those changes on the remote OpenShift server. A git gc on the remote OpenShift repo can be forced with the following command (note: tidy also performs other cleanup, including clearing log files and tmp dirs):

    rhc app tidy -a appname
Whether you choose Method 1 or Method 2, the end result will be the application deployed into the deployments directory. The deployments directory in the JBoss Application Server distribution is the location where end users can place their deployment content (e.g. war, ear, jar, sar files) to have it automatically deployed into the server runtime.
The jbossas cartridge provides several environment variables to reference for ease of use:

| Variable | Description |
|---|---|
| OPENSHIFT_JBOSSAS_IP | The IP address used to bind JBossAS |
| OPENSHIFT_JBOSSAS_HTTP_PORT | The JBossAS listening port |
| OPENSHIFT_JBOSSAS_CLUSTER_PORT | TODO |
| OPENSHIFT_JBOSSAS_MESSAGING_PORT | TODO |
| OPENSHIFT_JBOSSAS_MESSAGING_THROUGHPUT_PORT | TODO |
| OPENSHIFT_JBOSSAS_REMOTING_PORT | TODO |
| JAVA_OPTS_EXT | Appended to JAVA_OPTS prior to invoking the Java VM |

For more information about environment variables, consult the Users Guide.
Adding marker files to .openshift/markers will have the following effects:

| Marker | Effect |
|---|---|
| enable_jpda | Enables the JPDA socket-based transport on the Java virtual machine running the JBoss AS 7 application server. This enables you to remotely debug code running inside the JBoss AS 7 application server. |
| skip_maven_build | The Maven build step will be skipped. |
| force_clean_build | Starts the build process by removing all non-essential Maven dependencies. Any dependencies specified in your pom.xml file will then be re-downloaded. |
| hot_deploy | Prevents a JBoss container restart during build/deployment. Newly built archives will be re-deployed automatically by the JBoss HDScanner component. |
| java7 | Runs JBossAS with Java 7 if present. If no marker is present, the baseline Java version will be used (currently Java 6). |
The jbossas cartridge provides an OpenShift-compatible wrapper of the JBoss CLI tool on the gear PATH, located at $OPENSHIFT_JBOSSAS_DIR/tools/jboss-cli.sh. Use the following command to connect to the JBoss instance with the CLI tool:

    jboss-cli.sh -c --controller=${OPENSHIFT_JBOSSAS_IP}:${OPENSHIFT_JBOSSAS_MANAGEMENT_NATIVE_PORT}
== JBossEAP

Provides the JBossEAP application server on OpenShift.

    deployments/       Location for built WARs (details below)
    src/               Example Maven source structure
    pom.xml            Example Maven build file
    .openshift/        Location for OpenShift-specific files
        config/        Location for configuration files such as standalone.xml
        action_hooks/  See the Action Hooks documentation (1)
        markers/       See the Markers section below

(1) Action Hooks documentation
There are two options for deploying content to the JBoss Application Server within OpenShift. Both options can be used together (i.e. build one archive from source and deploy others pre-built).
Note: Under most circumstances, the .dodeploy file markers should not be added to the deployments directory. These lifecycle files will be created in the runtime deployments directory (visible by SSHing into the application), but should not be added to the git repo.
Method 1 (Preferred)

You can upload your content in a Maven src structure, as in this sample project, and on git push have the application built and deployed. For this to work you'll need your pom.xml at the root of your repository and a maven-war-plugin, as in this sample, to move the output from the build to the deployments directory. By default the warName is ROOT within pom.xml. This will cause the webapp contents to be rendered at http://app_name-namespace.rhcloud.com/. If you change the warName in pom.xml to app_name, your base URL would then become http://app_name-namespace.rhcloud.com/app_name.
Note: If you are building locally, you'll also want to add any output wars/ears under deployments from the build to your .gitignore file.
Note: If you are running scaled EAP6.0, then you need an application deployed to the root context (i.e. http://app_name-namespace.rhcloud.com/) for the HAProxy load-balancer to recognize that the EAP6.0 instance is active.
Method 2

You can git push pre-built wars into deployments/. To do this with the default repo, you'll want to first run:

    git rm -r src/ pom.xml

from the root of your repo.
Basic workflows for deploying pre-built content (each operation will require associated git add/commit/push operations to take effect):

- Add new zipped content and deploy it:

      cp target/example.war deployments/

- Add new unzipped/exploded content and deploy it:

  1. Copy the content into place:

         cp -r target/example.war/ deployments/

  2. Edit .openshift/config/standalone.xml and replace

         <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000" deployment-timeout="300"/>

     with

         <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000" deployment-timeout="300" auto-deploy-exploded="true"/>

- Undeploy currently deployed content:

      git rm deployments/example.war

- Replace currently deployed zipped content with a new version and deploy it:

      cp target/example.war deployments/

- Replace currently deployed unzipped content with a new version and deploy it:

  1. Remove the old content:

         git rm -rf deployments/example.war/

  2. Copy in the new content:

         cp -r target/example.war/ deployments/
Note: You can get the information in the URI above by running 'rhc domain show'.
If you have already committed large files to your git repo, you can rewrite or reset the history of those files in git to an earlier point in time and then 'git push --force' to apply those changes on the remote OpenShift server. A git gc on the remote OpenShift repo can be forced with the following command (note: tidy also performs other cleanup, including clearing log files and tmp dirs):

    rhc app tidy -a appname
Whether you choose Method 1 or Method 2, the end result will be the application deployed into the deployments directory. The deployments directory in the JBoss Application Server distribution is the location where end users can place their deployment content (e.g. war, ear, jar, sar files) to have it automatically deployed into the server runtime.
The jbosseap cartridge provides several environment variables to reference for ease of use:

| Variable | Description |
|---|---|
| OPENSHIFT_JBOSSEAP_IP | The IP address used to bind JBossEAP |
| OPENSHIFT_JBOSSEAP_HTTP_PORT | The JBossEAP listening port |
| OPENSHIFT_JBOSSEAP_CLUSTER_PORT | TODO |
| OPENSHIFT_JBOSSEAP_MESSAGING_PORT | TODO |
| OPENSHIFT_JBOSSEAP_MESSAGING_THROUGHPUT_PORT | TODO |
| OPENSHIFT_JBOSSEAP_REMOTING_PORT | TODO |
| JAVA_OPTS_EXT | Appended to JAVA_OPTS prior to invoking the Java VM |

For more information about environment variables, consult the Users Guide.
Adding marker files to .openshift/markers will have the following effects:

| Marker | Effect |
|---|---|
| enable_jpda | Enables the JPDA socket-based transport on the Java virtual machine running the JBoss EAP application server. This enables you to remotely debug code running inside the JBoss EAP application server. |
| skip_maven_build | The Maven build step will be skipped. |
| force_clean_build | Starts the build process by removing all non-essential Maven dependencies. Any dependencies specified in your pom.xml file will then be re-downloaded. |
| hot_deploy | Prevents a JBoss container restart during build/deployment. Newly built archives will be re-deployed automatically by the JBoss HDScanner component. |
| java7 | Runs JBossEAP with Java 7 if present. If no marker is present, the baseline Java version will be used (currently Java 6). |
The jbosseap cartridge provides an OpenShift-compatible wrapper of the JBoss CLI tool on the gear PATH, located at $OPENSHIFT_JBOSSEAP_DIR/tools/jboss-cli.sh. Use the following command to connect to the JBoss instance with the CLI tool:

    jboss-cli.sh -c --controller=${OPENSHIFT_JBOSSEAP_IP}:${OPENSHIFT_JBOSSEAP_MANAGEMENT_NATIVE_PORT}
== JBossEWS

The jbossews cartridge provides Tomcat on OpenShift via the JBoss EWS package. This cartridge has special functionality to enable integration with OpenShift and with other cartridges. See the Cartridge Integrations and Environment Variable Replacement Support sections for details.

    webapps/           Location for built WARs (details below)
    src/               Example Maven source structure
    pom.xml            Example Maven build file
    .openshift/        Location for OpenShift-specific files
        config/        Location for configuration files such as server.xml
        action_hooks/  See the Action Hooks documentation (1)
        markers/       See the Markers section below

(1) Action Hooks documentation
There are two options for deploying content to the Tomcat server within OpenShift. Both options can be used together (i.e. build one archive from source and deploy others pre-built).
Method 1 (Preferred)

You can upload your content in a Maven src structure, as in this sample project, and on git push have the application built and deployed. For this to work you'll need your pom.xml at the root of your repository and a maven-war-plugin, as in this sample, to move the output from the build to the webapps directory. By default the warName is ROOT within pom.xml. This will cause the webapp contents to be rendered at http://app_name-namespace.rhcloud.com/. If you change the warName in pom.xml to app_name, your base URL would then become http://app_name-namespace.rhcloud.com/app_name.
Note: If you are building locally, you'll also want to add any output wars under webapps from the build to your .gitignore file.
Note: If you are running scaled EWS, then you need an application deployed to the root context (i.e. http://app_name-namespace.rhcloud.com/) for the HAProxy load-balancer to recognize that the EWS instance is active.
Method 2

You can commit pre-built wars into webapps. To do this with the default repo, first run:

    git rm -r src/ pom.xml

from the root of your repo.
Basic workflows for deploying pre-built content (each operation will require associated git add/commit/push operations to take effect):

- Add new zipped content and deploy it:

      cp target/example.war webapps/

- Undeploy currently deployed content:

      git rm webapps/example.war

- Replace currently deployed zipped content with a new version and deploy it:

      cp target/example.war webapps/
Note: You can get the information in the URI above by running rhc domain show.
If you have already committed large files to your Git repo, you can rewrite or reset the history of those files in Git to an earlier point in time and then git push --force to apply those changes on the remote OpenShift server. A git gc on the remote OpenShift repo can be forced with the following command (note: tidy also performs other cleanup, including clearing log files and tmp dirs):

    rhc app tidy -a appname
Whether you choose Method 1 or Method 2, the end result will be the application deployed into the webapps directory. The webapps directory in the Tomcat distribution is the location where end users can place their deployment content (e.g. war, ear, jar, sar files) to have it automatically deployed into the server runtime.
The Tomcat cartridge provides several environment variables to reference for ease of use:

- OPENSHIFT_JBOSSEWS_IP: The IP address used to bind EWS
- OPENSHIFT_JBOSSEWS_HTTP_PORT: The EWS listening port
- OPENSHIFT_JBOSSEWS_JPDA_PORT: The EWS JPDA listening port
- JAVA_OPTS_EXT: Appended to JAVA_OPTS prior to invoking the Java VM

For more information about environment variables, consult the Users Guide.
The jbossews cart provides special environment variable replacement functionality for the following Tomcat configuration files:

- .openshift/config/server.xml
- .openshift/config/context.xml

Ant-style environment replacements are supported for all OPENSHIFT_-prefixed environment variables in the application. For example, the following replacement is valid in server.xml:

    <Connector address="${OPENSHIFT_JBOSSEWS_IP}"
               port="${OPENSHIFT_JBOSSEWS_HTTP_PORT}"
               protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />

During server startup, the configuration files in the source repository are processed to replace OPENSHIFT_* values, and the resulting processed files are copied to the live Tomcat configuration directory.
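Conceptually, this startup-time processing behaves like a plain textual substitution. The sed sketch below only mimics the observable result and is an assumption, not the cartridge's actual implementation:

```shell
# Substitute one OPENSHIFT_ variable the way the cartridge does for its
# Ant-style ${...} placeholders (a single variable is shown for brevity).
export OPENSHIFT_JBOSSEWS_HTTP_PORT=8080
echo '<Connector port="${OPENSHIFT_JBOSSEWS_HTTP_PORT}" protocol="HTTP/1.1"/>' |
  sed "s/\${OPENSHIFT_JBOSSEWS_HTTP_PORT}/${OPENSHIFT_JBOSSEWS_HTTP_PORT}/"
# prints: <Connector port="8080" protocol="HTTP/1.1"/>
```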
The jbossews cart has out-of-the-box integration support with the Red Hat postgresql and mysql cartridges. The default context.xml contains two basic JDBC Resource definitions, jdbc/MySQLDS and jdbc/PostgreSQLDS, which will be automatically configured to work with their respective cartridges if those are installed into your application.
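For reference, a JDBC Resource of roughly this shape could appear in context.xml; the attribute values and variable names below are assumptions for illustration, and the cartridge's shipped context.xml is authoritative:

```xml
<Resource name="jdbc/MySQLDS"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://${OPENSHIFT_MYSQL_DB_HOST}:${OPENSHIFT_MYSQL_DB_PORT}/${OPENSHIFT_APP_NAME}"
          username="${OPENSHIFT_MYSQL_DB_USERNAME}"
          password="${OPENSHIFT_MYSQL_DB_PASSWORD}"/>
```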
Adding marker files to .openshift/markers will have the following effects:

| Marker | Effect |
|---|---|
| enable_jpda | Enables the JPDA socket-based transport on the Java virtual machine running the Tomcat server. This enables you to remotely debug code running inside Tomcat. |
| skip_maven_build | The Maven build step will be skipped. |
| force_clean_build | Starts the build process by removing all non-essential Maven dependencies. Any dependencies specified in your pom.xml file will then be re-downloaded. |
| hot_deploy | Prevents a container restart during build/deployment. Newly built archives will be re-deployed automatically by the JBoss HDScanner component. |
| java7 | Runs Tomcat with Java 7 if present. If no marker is present, the baseline Java version will be used (currently Java 6). |
== Jenkins

The jenkins cartridge provides the Jenkins continuous integration server on OpenShift.

    .openshift/        Location for OpenShift-specific files
        action_hooks/  See the Action Hooks documentation (1)
        markers/       See the Markers section below

(1) Action Hooks documentation
Jenkins integrates with other OpenShift applications. To start building against Jenkins, embed the jenkins-client cartridge into an existing application. The example below causes app myapp to start building against Jenkins:

    $ rhc cartridge add -a myapp -c jenkins-client-1

From then on, running a git push will cause the build process to happen inside a Jenkins builder instead of inside your normal application compute space.
Benefits:

- Archived build information
- No application downtime during the build process
- Failed builds do not get deployed (leaving the previous working version in place)
- Jenkins builders have additional resources like memory and storage
- A large community of Jenkins plugins
Building with Jenkins uses dedicated application space that can be larger than the application runtime space. Because the build happens in its own dedicated jail, the running application is not shut down or changed in any way until after the build succeeds. If the build fails, the currently active application continues to run. However, a failure in the deploy process may still leave the app partially deployed or inaccessible. During a build, the following steps take place:
1. The user issues a git push.
2. Jenkins is notified that a new push is ready.
3. A dedicated Jenkins slave (builder) is created. It can be seen by using the rhc domain show command. The app name will be the same as the originating app with "bldr" tagged onto the end. (Note: this requires the first 28 characters of the app name to be unique, or builders will be shared, which can cause issues.)
4. Jenkins runs the build.
5. Content from the originating app is downloaded to the builder app through git and rsync (git for source code, rsync for existing libraries).
6. The cartridge-specific build shell task is executed.
7. Jenkins archives build artifacts for later reference.
8. After 15 minutes of idle time, the builder app is deleted and no longer shows up with the rhc domain show command. The build artifacts, however, still exist in Jenkins and can be viewed there.
Users can look at the build job by clicking on it in the Jenkins interface and going to "configure". It is the Jenkins build job's responsibility to stop, sync, and start the application once a build is complete.
For a detailed overview of the OpenShift build/deploy process, consult the OpenShift Builds documentation.
Adding marker files to .openshift/markers will have the following effects:

| Marker | Effect |
|---|---|
| enable_debugging | See 'Debugging Jenkins' below |
The Jenkins server can be configured to accept remote debugger connections. To enable debugging, create a file .openshift/markers/enable_debugging in the Jenkins app Git repository and restart Jenkins. The debug server will listen on port 7600 for connections.
Use SSH port forwarding to start a remote debugging session on the server. The rhc command is helpful for this. For example, in a sample Jenkins application named jenkins containing the enable_debugging marker, the following command will automatically enable SSH port forwarding:
    $ rhc port-forward -a jenkins
    Checking available ports...
    Forwarding ports
      Service  Connect to           Forward to
      =======  ================     ================
      java     127.0.251.1:7600  => 127.0.251.1:7600
      java     127.0.251.1:8080  => 127.0.251.1:8080
    Press CTRL-C to terminate port forwarding

The local debugger can now be attached to 127.0.251.1:7600.
== Jenkins Client

The jenkins-client cartridge works with the Jenkins cartridge to provide Jenkins integration for OpenShift applications. Consult the Jenkins cartridge documentation for more information.
== MariaDB

The mariadb cartridge provides MariaDB on OpenShift.
== MongoDB

The mongodb cartridge provides MongoDB on OpenShift.

The mongodb cartridge provides several environment variables to reference for ease of use:
- OPENSHIFT_MONGODB_DB_HOST: The MongoDB IP address
- OPENSHIFT_MONGODB_DB_PORT: The MongoDB port
- OPENSHIFT_MONGODB_DB_USERNAME: The MongoDB username
- OPENSHIFT_MONGODB_DB_PASSWORD: The MongoDB password
- OPENSHIFT_MONGODB_DB_URL: The MongoDB connection URL (e.g. mongodb://<username>:<password>@<hostname>:<port>/)
- OPENSHIFT_MONGODB_DB_LOG_DIR: The path to the MongoDB log directory
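The URL variable is simply the other connection variables assembled into a connection string. This sketch uses made-up sample values to show the composition:

```shell
# Sample values (not real credentials) illustrating how
# OPENSHIFT_MONGODB_DB_URL relates to the individual variables.
OPENSHIFT_MONGODB_DB_USERNAME=admin
OPENSHIFT_MONGODB_DB_PASSWORD=secret
OPENSHIFT_MONGODB_DB_HOST=127.0.251.1
OPENSHIFT_MONGODB_DB_PORT=27017
url="mongodb://${OPENSHIFT_MONGODB_DB_USERNAME}:${OPENSHIFT_MONGODB_DB_PASSWORD}@${OPENSHIFT_MONGODB_DB_HOST}:${OPENSHIFT_MONGODB_DB_PORT}/"
echo "$url"
# prints: mongodb://admin:secret@127.0.251.1:27017/
```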
Note: When changing the MongoDB password manually using …
== MySQL

The mysql cartridge provides [MySQL](http://www.mysql.com/) on OpenShift.

The mysql cartridge provides several environment variables to reference for ease of use:
- OPENSHIFT_MYSQLDB_DB_HOST: The MySQL IP address
- OPENSHIFT_MYSQLDB_DB_PORT: The MySQL port
- OPENSHIFT_MYSQLDB_DB_LOG_DIR: The path to the MySQL log directory
- OPENSHIFT_MYSQL_VERSION: The version of the MySQL server
- OPENSHIFT_MYSQL_TIMEZONE: The MySQL server timezone
- OPENSHIFT_MYSQL_LOWER_CASE_TABLE_NAMES: Sets how the table names are stored and compared
- OPENSHIFT_MYSQL_DEFAULT_STORAGE_ENGINE: The default storage engine (table type)
- OPENSHIFT_MYSQL_MAX_CONNECTIONS: The maximum permitted number of simultaneous client connections
- OPENSHIFT_MYSQL_FT_MIN_WORD_LEN: The minimum length of a word to be included in a FULLTEXT index
- OPENSHIFT_MYSQL_FT_MAX_WORD_LEN: The maximum length of a word to be included in a FULLTEXT index
- OPENSHIFT_MYSQL_AIO: Controls the 'innodb_use_native_aio' setting value in case the native AIO is broken. See http://help.directadmin.com/item.php?id=529
== Node.js

The nodejs cartridge provides Node.js on OpenShift.

The cartridge provides a short list of Node.js modules by default. The list is available in $OPENSHIFT_NODEJS_DIR/versions/0.6/configuration/npm_global_module_list. You can also see the file versions/0.6/configuration/npm_global_module_list under this directory.
    node_modules/  Any Node modules packaged with the app (1)
    deplist.txt    Deprecated
    package.json   npm package descriptor
    .openshift/    Location for OpenShift-specific files
        action_hooks/  See the Action Hooks documentation (2)
        markers/       See the Markers section below

(1) See node_modules
(2) Action Hooks documentation
Please leave the node_modules and .openshift directories in place, but feel free to create additional directories if needed.
The node_modules directory allows you to package any Node module on which your application depends along with your application. If you just wish to install modules from the npm registry (npmjs.org), you can specify the module names and versions in your application's package.json file.
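For example, a minimal package.json declaring one dependency might look like this; the module name and version are assumptions for illustration:

```json
{
  "name": "myapp",
  "version": "1.0.0",
  "dependencies": {
    "express": "~4.0.0"
  }
}
```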
The deplist.txt functionality has been deprecated and will soon go away. package.json is the preferred method to add dependencies.
The Node.js cartridge provides several environment variables to reference for ease of use:

- OPENSHIFT_NODEJS_IP: The IP address used to bind Node.js
- OPENSHIFT_NODEJS_PORT: The Node.js listening port
- OPENSHIFT_NODEJS_POLL_INTERVAL: May be set as a user environment variable to change the default of 1s
Adding marker files to .openshift/markers will have the following effects:

| Marker | Effect |
|---|---|
| hot_deploy | Disables app restarting during git pushes (see 'Development Mode') |
| use_npm | Forces your application to run using npm instead of 'supervisor' |
When you push your code changes to OpenShift, if you want dynamic reloading of your JavaScript files in "development" mode, you can either use the hot_deploy marker or add the following to package.json:

    "scripts": {
        "start": "supervisor <relative-path-from-repo-to>/server.js"
    },

This will run Node.js with supervisor.
You can also develop and test your Node application locally on your machine (workstation). In order to do this, you will need to perform some basic setup: install Node plus the npm modules that OpenShift has globally installed.

1. Collect some information about the environment on OpenShift.

   a. Get Node.js version information:

          $ ssh $uuid@$appdns node -v

   b. Get the list of globally installed npm modules:

          $ ssh $uuid@$appdns npm list -g

2. Ensure that an appropriate version of Node is installed locally. This depends on your application. Using the same version is preferable in most cases, but your mileage may vary with newer versions.

3. Install the versions of the Node modules you got in step 1.b. Use -g if you want to install them globally; the better alternative, though, is to install them in the home directory of the currently logged-in user on your local machine/workstation:

       pushd ~
       npm install [-g] $module_name@$version
       popd
Once you have completed the above setup, you can run your application locally using any one of these commands:

    node server.js
    npm start -d
    supervisor server.js

Then iterate on developing and testing your application.
== Perl

The perl cartridge provides Perl on OpenShift.

    index.pl
    .openshift/        Location for OpenShift-specific files
        action_hooks/  See the Action Hooks documentation (1)
        markers/       See the Markers section below
        cpan.txt       List of modules to install

(1) Action Hooks documentation
Due to changes in the Perl cartridge template layout, the application root is now stored in $OPENSHIFT_REPO_DIR, but the cartridge is also backward compatible with the deprecated perl/ directory.
Modules are installed from cpan.txt, located in the .openshift/ directory. In addition, application dependencies can be installed using a cpanfile or Makefile.PL placed in the .openshift/ folder. deplist.txt is deprecated in favor of .openshift/cpan.txt.
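A cpan.txt file lists one module name per line; the modules below are examples only:

```
YAML
JSON::PP
DBI
```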
    run/         Various run configs (like httpd pid)
    usr/         Perl example application template
    env/         Environment variables
    logs/        Log data (like httpd access/error logs)
    lib/         Various libraries
    bin/setup    The script to set up the cartridge
    bin/build    Default build script
    bin/teardown Called at cartridge destruction
    bin/control  Init script to start/stop httpd
    versions/    Version data to support multiple Perl versions (copied into place by setup)
Adding marker files to .openshift/markers will have the following effects:

| Marker | Effect |
|---|---|
| force_clean_build | Removes all previous Perl deps and starts installing required deps from scratch |
| enable_cpan_tests | Installs all the cpan packages and runs their tests |
| hot_deploy | Prevents the Apache process from being restarted during build/deployment |
| disable_auto_scaling | Prevents scalable applications from scaling up or down according to application load |
| enable_public_server_status | Enables the server-status application path to be publicly available |
== PHP

The php cartridge provides PHP on OpenShift.

    index.php          Template PHP index page
    .openshift/        Location for OpenShift-specific files
        action_hooks/  See the Action Hooks documentation (1)
        markers/       See the Markers section below
        pear.txt       List of pears to install (2)

(1) Action Hooks documentation
(2) A list of pears to install, one per line, on the server. This happens when the user does a git push.
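The pear.txt file lists one PEAR package name per line; the names below are examples only:

```
Mail
Net_Socket
```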
The Apache DocumentRoot, the directory that forms the main document tree visible from the web, is selected based on the existence of a common directory in the repository code, in the following order:

    1. php/         # for backward compatibility with OpenShift Origin v1/v2
    2. public/      # Zend Framework v1/v2, Laravel, FuelPHP, Surebert, etc.
    3. public_html/ # Apache per-user web directories, Slim Framework, etc.
    4. web/         # Symfony, etc.
    5. www/         # Nette, etc.
    6. ./           # Drupal, WordPress, CakePHP, CodeIgniter, Joomla, Kohana, PIP, etc.
The following application directories, if they exist in the repository code, are added to the PHP include_path and thus automatically searched when calling require(), include(), and other file I/O functions:

    lib/
    libs/
    libraries/
    src/
    misc/
    vendor/
    vendors/
Adding marker files to .openshift/markers will have the following effects:

| Marker | Effect |
|---|---|
| force_clean_build | Removes all previous deps and starts installing required deps from scratch |
| hot_deploy | Prevents the Apache process from being restarted and skips checking for Pear and Composer dependencies during build/deployment |
| disable_auto_scaling | Prevents scalable applications from scaling up or down according to application load |
| use_composer | Enables running Composer … |
| enable_public_server_status | Enables the server-status application path to be publicly available |
The `php` cartridge provides several environment variables to change the default PHP configuration:

Variable name | Description
---|---
`APPLICATION_ENV` | Application mode.
`OPENSHIFT_PHP_APC_ENABLED` | Whether the APC op-code cache PECL is enabled.
`OPENSHIFT_PHP_APC_SHM_SIZE` | The APC shared memory size.
`OPENSHIFT_PHP_XDEBUG_ENABLED` | Whether the Xdebug PECL is enabled.
`OPENSHIFT_PHP_<MODULE>_ENABLED` | Whether the given PHP module is enabled.

Note: You must restart the `php` cartridge to pick up the updated values.
The `phpmyadmin` cartridge provides phpMyAdmin on OpenShift. In order to add this cartridge to an application, the MySQL cartridge must already be present. Once installed, phpMyAdmin can be used by navigating to http://app-domain.rhcloud.com/phpmyadmin with the MySQL login credentials.
The `postgresql` cartridge provides PostgreSQL on OpenShift.
sql/ SQL data or scripts.
Note
|
Please leave the sql and data directories in place, but feel free to create additional directories if needed.
|
The `postgresql` cartridge provides several environment variables to reference for ease of use:

Variable | Description
---|---
`OPENSHIFT_POSTGRESQL_DB_HOST` | Numeric host address
`OPENSHIFT_POSTGRESQL_DB_PORT` | Port
`OPENSHIFT_POSTGRESQL_DB_USERNAME` | DB username
`OPENSHIFT_POSTGRESQL_DB_PASSWORD` | DB password
`OPENSHIFT_POSTGRESQL_DB_LOG_DIR` | Directory for log files
`OPENSHIFT_POSTGRESQL_DB_PID` | PID of the current Postgres server
`OPENSHIFT_POSTGRESQL_DB_SOCKET_DIR` | Postgres socket location
`OPENSHIFT_POSTGRESQL_DB_URL` | Full server URL of the form "postgresql://user:password@host:port"
`OPENSHIFT_POSTGRESQL_VERSION` | PostgreSQL version
You can fine-tune the PostgreSQL server performance by using the rhc tool and changing the default values for these variables:
Variable | Description
---|---
`OPENSHIFT_POSTGRESQL_SHARED_BUFFERS` | The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL for caching data.
`OPENSHIFT_POSTGRESQL_MAX_CONNECTIONS` | max_connections sets exactly that: the maximum number of client connections allowed.
Users can change the authentication settings and restrict the list of remote connection IPs by modifying the pg_hba.conf file, but doing so might be dangerous and could lead to a broken application. Make sure you snapshot the application before making such changes.
For more details, please refer to the PostgreSQL wiki page.
The `python` cartridge provides Python on OpenShift.
wsgi.py           Default WSGI entry-point (1)
setup.py          Standard Setup Script (2)
requirements.txt  Standard pip requirements file (3)
.openshift/       Location for OpenShift specific files
  action_hooks/   See the Action Hooks documentation (4)
  markers/        See the Markers section below
1. For backward compatibility, the `wsgi/application` path is selected as the default WSGI entry-point with higher priority. You can customize the path using the `OPENSHIFT_PYTHON_WSGI_APPLICATION` environment variable. See the Environment variables section below.
2. Adding dependencies to the `install_requires` section of the `setup.py` file will cause the cartridge to install those dependencies at git push time.
3. Adding dependencies to this file will cause the cartridge to run the `pip install -r requirements.txt` command at git push time. You can customize the path using the `OPENSHIFT_PYTHON_REQUIREMENTS_PATH` variable. See the Environment variables section below.
4. Action Hooks documentation
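For illustration, a minimal `requirements.txt` might look like this (the package names and version pins are only examples):

```
Django==1.6.1
Markdown>=2.3
requests
```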
run/          Various run configs (like httpd pid)
env/          Environment variables
logs/         Log data (like httpd access/error logs)
lib/          Various libraries
bin/setup     The script to setup the cartridge
bin/build     Default build script
bin/teardown  Called at cartridge destruction
bin/control   Init script to start/stop httpd
versions/     Version data to support multiple python versions (copied into place by setup)
Adding marker files to `.openshift/markers` will have the following effects:

Marker | Effect
---|---
`force_clean_build` | Will cause the virtualenv to be recreated during builds.
`hot_deploy` | Will prevent shutdown and startup of the application during builds.
`enable_public_server_status` | Will enable the server-status application path to be publicly available.
The `python` cartridge supports the following environment variables:

- `OPENSHIFT_PYTHON_WSGI_APPLICATION`: Set a custom path to the WSGI entry-point, e.g. using the `rhc env set OPENSHIFT_PYTHON_WSGI_APPLICATION=app/alternative-wsgi.py` command.
- `OPENSHIFT_PYTHON_REQUIREMENTS_PATH`: Set a custom path to the pip requirements file, e.g. using the `rhc env set OPENSHIFT_PYTHON_REQUIREMENTS_PATH=requirements/production.txt` command.
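As a sketch of what a WSGI entry-point can look like, the following minimal `wsgi.py` defines the `application` callable that a WSGI server expects to find (the response body is just an example):

```python
# Minimal WSGI entry-point sketch: the server looks for a callable
# named `application` in this file (per the WSGI specification).
def application(environ, start_response):
    body = b'Hello from OpenShift'
    # Send the status line and response headers back to the server
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(body))),
    ])
    # The return value is an iterable of byte strings
    return [body]
```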
For some frameworks (such as Django) it is possible to set the DEBUG user environment variable using the `rhc env set DEBUG=True` command. In that case, Django will run in 'debug' mode, with more verbose logging and nice error reporting of HTTP 500 errors.
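One common way to honor such a variable in a Django `settings.py` is to read it from the process environment. This pattern is a sketch, not something mandated by the cartridge:

```python
import os

# Treat the DEBUG user environment variable (set via `rhc env set
# DEBUG=True`) as a boolean; any value other than the exact string
# "True" leaves debug mode off.
DEBUG = os.environ.get('DEBUG', 'False') == 'True'
```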
tmp/         Temporary storage
public/      Content (images, css, etc. available to the public)
config.ru    This file is used by Rack-based servers to start the application.
.openshift/  Location for OpenShift specific files
  action_hooks/  See the Action Hooks documentation (1)
  markers/       See the Markers section below

1. Action Hooks documentation
OpenShift mirrors rubygems.org at http://mirror.ops.rhcloud.com/mirror/ruby/. This mirror is on the same network as your application, so your gem downloads should be faster.
To use the OpenShift mirror:
1. Edit your Gemfile and replace `source 'https://rubygems.org'` with `source 'https://mirror.openshift.com/'`
2. Edit your Gemfile.lock and replace `remote: https://rubygems.org/` with `remote: https://mirror.openshift.com/`
There are several ways to speed up deployment of your Rails application to OpenShift.
There are two options for deploying a Rails application to OpenShift.
Method 1 (Recommended)

`git push` your application Gemfile/Gemfile.lock. This will cause the remote OpenShift node to run `bundle install --deployment` to download and install your dependencies. Each subsequent git push will use the previously downloaded dependencies as a starting point, so additional downloads will be a delta.

Method 2

`git add` your `.bundle` and `vendor/bundle` directories after running `bundle install --deployment` locally. Be sure to exclude any gems that have native code, or ensure they can run on RHEL x86_64.
To prevent a long and unnecessary compilation of assets on the application's initial deployment and re-deployments, two steps are required.

Step 1

Install the sprockets gem by adding the line `gem 'turbo-sprockets-rails3'` to your Gemfile and running `bundle install`.
Step 2

After the sprockets gem is installed, precompile all your assets locally with `rake assets:precompile`, which compiles them into `public/assets`. When compiling the assets, the sprockets gem creates a file called `sources_manifest.yml`, also located in `public/assets`. This manifest contains the names of all asset files together with their hash values, and ensures that only changed assets are recompiled on re-deployment.
If your Rails application contains a large number of migrations, it is good practice to use `db:schema:load` on the initial deploy and `db:migrate` on re-deployments. You can do this by looking into the database and checking whether one of the DB tables exists.
This example checks, in the deploy hook, whether the `spree_activators` table is present in the database, so that the schema is loaded only on the initial deploy:

# Run db:schema:load when the table is missing (initial deploy),
# db:migrate otherwise (re-deployment)
if ! echo "use $OPENSHIFT_APP_NAME; show tables" | mysql | grep -q spree_activators
then
    bundle exec rake db:schema:load RAILS_ENV="production"
else
    bundle exec rake db:migrate RAILS_ENV="production"
fi
The `ruby` cartridge provides several environment variables to reference for ease of use:

- `OPENSHIFT_RUBY_LOG_DIR`: Log files go here.
- `OPENSHIFT_RUBY_VERSION`: The Ruby language version. The valid values are `1.8` and `1.9`.
- `BUNDLE_WITHOUT`: Prevents Bundler from installing certain groups specified in the Gemfile.
In OpenShift you can use the Rails development environment just as you do when developing the Rails application locally. To instruct OpenShift to deploy your application in development mode, set this user environment variable:

- `RAILS_ENV` (e.g. `rhc env set RAILS_ENV=development`)
When the Rails application runs under the development environment, OpenShift will:

- Skip the automatic static asset (re)compilation
- Disable the `bundle` command unless you modify the application Gemfile
- Set the web server to run your application in 'development' mode
- Skip the full restart of Apache, as the code is reloaded automatically

The development mode can speed up the development phase of your application in OpenShift, but it is not recommended for production.
OpenShift’s CLI tool, `rhc`, has a `threaddump` subcommand. Applications created by this cartridge respond to this command by looking for the appropriate Rack process and sending an ABRT signal to it. As explained in the Passenger User Guide, this signal dumps the current thread backtraces but also terminates the process.
Note
|
|
Adding marker files to `.openshift/markers` will have the following effects:

Marker | Effect
---|---
`force_clean_build` | Will trigger a clean re-bundle during the build cycle.
`hot_deploy` | Will prevent shutdown and startup of the application during builds.
`disable_asset_compilation` | Will prevent assets from being compiled upon application deployment. This marker should be used when deploying an application whose assets are already compiled.
`enable_public_server_status` | Will enable the server-status application path to be publicly available.