Continuum Flow Tutorial - A First Pipeline

Overview

This tutorial walks you through the process of building your first pipeline in Continuum Flow.

Initial Login to Continuum

Log in to the Continuum user interface on port 8080 with your web browser. The user id will be administrator and the initial password will be password. You will be required to change the password.

Configure Command Line Tools

Once the initial password has been set, ssh to the Continuum server using the Continuum application user id.

Create a file in the user's home directory called .ctmclient.conf. Put the following JSON in the file, changing the secret_key to the Continuum administrator password. Change the user name (access_key) and url if applicable. The url setting in this config file points to the Continuum web server, which defaults to port 8080.

{
    "url": "http://localhost:8080",
    "access_key": "administrator",
    "secret_key": "thepassword"
}

Test command line authentication by running the following command:

ctm-list-pipelines

At present this list should come back empty, but it should not error. All of the command line commands are located in $CONTINUUM_HOME/common/bin. This directory should have been added to the PATH during the install process.
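As a quick recap, the steps above can be scripted from a shell on the Continuum server. The following is a minimal sketch; substitute your actual administrator password, and only add to the PATH if the installer did not already do so.

# Create the command line client config in the Continuum user's home directory
cat > ~/.ctmclient.conf <<'EOF'
{
    "url": "http://localhost:8080",
    "access_key": "administrator",
    "secret_key": "thepassword"
}
EOF

# Only needed if the installer did not already add this to the PATH
export PATH="$PATH:$CONTINUUM_HOME/common/bin"

# Should return an empty list, not an error
ctm-list-pipelines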

Continuum Flow Purpose

Continuum's purpose is to marshal Change through the process of building, testing, bundling and packaging. Typically, a Change is synonymous with a commit in a source code repository.

Presently, Changes are accepted into Continuum from three sources: a GitHub webhook push, a GitLab webhook push or the Continuum Subversion WebDAV poller. This tutorial describes how to perform the initial setup of a source project to receive Changes from these sources.

Setup a Project to Receive Changes

  1. In the Continuum application, on the right hand menu choose Flow, Manage Projects.

  2. Click Add New (on the left) and name it something appropriate, such as the name of the repository you will be associating with the project. Give it a description if desired. Click Create.

  3. On the project edit page, on the Details tab, select a type of Source. Selecting a value in this drop-down saves automatically and causes the Source tab to appear.

  4. Next, click on the Version tab. Add a version number that corresponds to the code in the repository you will be linking to, or just enter "1" to get started. Click Update Version.

  5. Now click on the Source tab. For Changes From, select the type from the list (specifics on these selections to follow). Next to the Group selection, make sure Branch is selected.

Configure Change Sources

Now that we have a Project defined that can receive changes, we need to configure the source of those changes to send them in to the project. At present there are three possibilities: a GitHub webhook push, a GitLab webhook push and the Continuum Repo Poller pulling from a Subversion repository. Subversion is addressed first below; GitHub and GitLab are addressed together after that.

Setup SVN Poller

If your repository is in Subversion, you will need to set up the Continuum Repo Poller to pull changes in from your Subversion server. This requires the Subversion Apache WebDAV extension to be running (this is typical). See http://svnbook.red-bean.com/en/1.7/s...dav.basic.html for more information on SVN WebDAV.

Here is the information you will need to set up the Repo Poller for Subversion: the http address, a user id and password with read permissions, and the repository name.

Now get to the command line on the Continuum Linux machine as the Continuum user account. Edit the file /etc/continuum/flow.yaml with an editor (vi, nano, etc.).

This file is in the YAML format. For more information on the YAML specification see here: http://www.yaml.org/spec/1.2/spec.html

Under the repo_poller_servers: section, add the Subversion server http address, user and password. This server can be named just about anything meaningful (represented by "acme_svn" in the example).

NOTE: The address needs to be an http(s) URL and should not contain the repository name in the path portion. If Subversion is served using a prefix path, then that prefix should be included, but not the repo name.

Example:

repo_poller_servers:
    acme_svn:
        address: http://svn.internal.acme.com
        user: svnuser
        password: svnpassword

Notice the indentation. The name of the server must be indented below repo_poller_servers, and address, user and password must be indented below the server name. This is standard YAML; indentation is usually 2, 3 or 4 spaces.

The next thing to do is define the repository on that server under the repo_poller_repos: section. Use the following example. The first line under the section header is the name of the Subversion repository on the server. The poll setting turns the poller on and off for this repository. The server_name is the name of the server set up in the previous section. The type must equal svn_webdav; there will be other types for polling git, etc. in the future. Lastly, the project setting should match the Continuum Project set up in the first part of this tutorial.

Example:

repo_poller_repos:
    petclinic:
        poll: true
        server_name: acme_svn
        type: svn_webdav
        project: petclinic

Save and exit the flow.yaml file.
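Optionally, you can sanity-check the WebDAV endpoint from the Continuum server before starting the poller. This is just a quick curl sketch using the placeholder host, credentials and repository name from the examples above; substitute your own values.

# A 2xx response with a listing of the repository indicates the address,
# credentials and repository name are usable; 401, 404 or 500 responses
# correspond to the error cases shown in the log examples below.
curl -i -u svnuser:svnpassword http://svn.internal.acme.com/petclinic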

The last thing to do is turn on the repo poller. This service is not enabled by default. Edit the /etc/continuum/continuum.conf file and uncomment the following line by removing the # and the space following it.

# service.flow ctm-repopoller

Should become...

service.flow ctm-repopoller

Then start the service by running the following command. This will only start services that are not currently running.

ctm-start-services

The repo poller will start up, read the flow.yaml file and immediately attempt to connect to the repository and determine the last change id (aka revision number). The poller will store this number so that next time it polls the repository it will only retrieve the commit logs of revision numbers higher than the last one found.

The repo poller will check for new changes every 15 seconds. This will be configurable in a future release.

To determine whether the repo poller is successfully connecting to the repository, view the logfile /var/continuum/log/ctm-repopoller.log.
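For example, follow the log while the poller makes its first connection attempt:

tail -f /var/continuum/log/ctm-repopoller.log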

A successful connection to the Subversion repository will look something like this:

2014-12-03 10:19:00,138 - cskrepopoller.cskrepopoller - INFO :: Initial attempt to connect to svn repo petclinic

2014-12-03 10:19:00,360 - cskrepopoller.cskrepopoller - INFO :: Svn repo petclinic last change id is 25, saving change id for next time

A failure to connect to the Subversion server may have the following in the log (check address, protocol, etc.):

Connection error attemtping to communicate with SVN server. Check http or https, server address and port
HTTPConnectionPool(host='54.84.72.197', port=80): Max retries exceeded with url: /svn/petclinic (Caused by : [Errno 60] Operation timed out)

or

HTTP error, connection established but the SVN server responded with an http error code. Possibly wrong builder or build number
404 Client Error: Not Found

Authorization issue (user id or password):

HTTP error, connection established but the SVN server responded with an http error code. Possibly wrong builder or build number
401 Client Error: Authorization Required

Wrong repository name:

HTTP error, connection established but the SVN server responded with an http error code. Possibly wrong builder or build number
500 Server Error: Internal Server Error

Setup Github / Gitlab Webhook

GitHub and GitLab have the capability to post json formatted data on commit pushes. These are called webhooks. This section describes how to setup these webhooks.

First, in the Continuum application you must get an authentication token for the webhook to use. Log in to Continuum and on the right hand menu select Administration. On the upper menu select Users. You can either create a new user account specifically for webhook purposes or use an existing user. Once the user is selected, on the user edit page click on the Token tab. Copy the token string for later use.

Next, log in to either the GitHub or GitLab user interface and find the webhook setting for the repository you wish to connect to Continuum.

GitHub doc: https://developer.github.com/webhooks/creating/

For GitHub, make sure the Content type is application/json and Just the push event is selected.

The GitLab documentation is a bit sparse on where to access the webhook setting for a repository. Go to the Project page for the repository, then select Settings. On the left hand menu you will find Web Hooks.

For both GitHub and GitLab, the URL form is as follows:

http://address:8080/api/submit_change?project=petclinic&token=tokenstring

Where address is the Continuum server address. The /api url is the Continuum REST API webservice. Change the project in the URL to the Continuum project created previously. Finally, paste in the API token from the Continuum user page.
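If you want to verify that the Continuum endpoint is reachable before wiring up the webhook (for example, to rule out a firewall issue), you can POST a sample payload to the same URL with curl. This is only a connectivity sketch; the payload file, host and token below are placeholders, and the real webhook body will be the JSON that GitHub or GitLab generates.

# push.json is a sample push payload (e.g. copied from a GitHub webhook delivery)
curl -i -X POST \
    -H "Content-Type: application/json" \
    -d @push.json \
    "http://address:8080/api/submit_change?project=petclinic&token=tokenstring"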

Test a Change

Now it is time to test that changes are getting pushed or pulled into Continuum. In your local copy of the repository, make a change (add, change or delete a file) and push the commit to the central repository server (see the example below). The change should arrive in Continuum shortly. To view the change in Continuum, in the upper menu select Projects. On the Activity screen you should see your new project highlighted in yellow. The yellow means that this project has changes that have not yet run through a pipeline.
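For reference, a trivial test commit in a Git working copy might look like the following (for Subversion, the equivalent svn add and svn commit apply); the file name and branch are just placeholders.

# Make a harmless change and push it to the central repository
echo "continuum test" >> continuum-test.txt
git add continuum-test.txt
git commit -m "Test change for Continuum"
git push origin master   # adjust the remote and branch to match your repository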

Click on the project name, then on the Version tab on the left. Expand the Changes section. If everything went fine, you should see the change that came in listed here. Clicking on that row will show additional details of the change including who made the commit, the log and a list of the files involved.

If everything went ok, then skip to the next section. Otherwise let's troubleshoot.

Troubleshooting

If using the Repo Poller service, the first thing to do is check the /var/continuum/log/ctm-repopoller.log file and look for errors. Refer to the instructions in the repo poller section above.

If using webhooks, the place to look is the REST API service log, located in /var/continuum/log/ctm-restapi.log. Look for any errors that might have occurred.

Another place to look is the webhook page for the GitHub repository. Each webhook submission will produce a status as well as a message. With GitHub you can resubmit webhook submissions until you get it right.

Most problems with webhook submission will be related to wrong project names, a blocked port through a firewall or a wrong token string.

Pipelines

Hopefully by this point you have changes coming into Continuum and getting associated with the Project. However, to start moving Change toward deliverable software we need to get Pipelines started.

A Pipeline is an ordered path of automated and possibly manual tasks designed to turn software changes into a deliverable product. Pipeline structures or definitions are defined in the Continuum user interface and can be fed or triggered by multiple projects. Pipelines can then feed change downstream and can bundle changes from other projects into an Integration Pipeline.

Pipelines are made up of Phases, which run serially. Within a Phase are Stages, which run in parallel with each other. Within Stages are Steps, which again run serially. Steps represent automation, such as triggering a CI tool, evaluating results, deploying artifacts to servers, triggering Continuum Tasks, performing testing or updating a third party issue tracking system. A Step can even be a manual work event.

A simple way to picture a Pipeline: Changes, Work Items and possibly Artifacts (more on Changes and Work Items later) come into the Pipeline as inputs. Phases run one at a time until the Pipeline is complete. Those same Changes and Work Items flow out of the Pipeline, possibly to another Pipeline. One difference is that the output Artifacts may differ from the input Artifacts in the case of software bundling.

Automation will be separated into different pipelines for a number of reasons. First may be the case where different sources of change (e.g. repositories) feed into Pipelines that do builds and unit testing. Other pipelines will represent integration of those sources and may also have manual decision points between them.

Each pipeline will run on its own cadence or timing. For instance, the build pipelines will run much more frequently than the integration pipelines, which will be more frequent than the QA pipeline, and so on. Of course this is only a rule of thumb; one of the build pipelines may run much less frequently than the integration pipeline simply because change does not occur very often for that repository.

Changes accumulate as pipelines are waiting to be run. Therefore each time an upstream pipeline runs, it passes the record of the Change down to the next pipeline where the Changes stack up. When the downstream pipeline runs and completes successfully, all that stacked up Change moves to the next Pipeline where it stacks up again.

A very simple example of this is multiple source Pipelines integrating and then moving Change to later Pipelines.

Create a Sample Pipeline Definition

Back to the tutorial, let's create a simple pipeline so that we can associate change and trigger it. Pipeline Definitions are used to model the workflow for a particular Pipeline.

In the Continuum user interface, select Flow -> Manage Pipelines from the menu in the upper right. This is the Pipeline Editor.

Under Stage Library select Add New. Name the Stage "Example Stage" and click Create. The Stage editor will appear. Now click the Plugins tab on the upper left. Briefly click through the available Plugins. The list of functions immediately to the right will change as you click through. More on these later.

NOTE: the Pipeline Editor automatically saves on each change. Be extra careful when working on Pipelines that are in use.

When ready, click Flow. Select Utility - Log and drag it over to the right where it says Drop Here. You have just added a Step to the Stage. Step names are optional, simply a way to distinguish one Step from another. Add some text to the Message box. "Hello World" will work.

Return to the main Pipeline Editor page (hint: the blue button in the upper right). Select Add New under the Phase Library. Give it a name such as "Example Phase" and click Create. Click the Stage Library tab on the left. Drag the previously created Stage over to the right and drop it. Return to the main Pipeline Editor screen.

Create a new Definition and name it something like "Example Pipeline". Add the previously created Phase to the Pipeline. Remember what you named the Pipeline; we'll need it later.

Link the Project to the Pipeline Definition

Return to Manage Projects (upper right menu, Flow...) and select the Project created in previous steps. On the Source tab add a new Directive. Choose Initiate Pipeline.

Leave the When selection set to Always. This means that when Changes come in to the project, we always want this Pipeline to be assigned the Changes and to be triggered.

Set Definition to "Example Pipeline" and Group to "[$ branch $]" (without the quotes).

Notice the "[$ ... $]" syntax. This denotes variable substitution. More on this later.

What we have done is set up a Directive that states that when any change comes in associated with this Project, start (initiate) a Pipeline using the Pipeline Definition "Example Pipeline" and group the Pipeline Instances by branch name. The Pipeline Instances will also get associated with the current Project, though there is a way to trigger a Pipeline on another Project, which is out of scope for this tutorial.

NOTE: when making changes to data in Continuum, you must exit the field either by tabbing out or clicking somewhere else on the screen. This will trigger the auto-save. If the cursor stays in the field, the change may not save.

Triggering a Pipeline

We should be ready to go if all the steps in this tutorial have been successfully completed. Return to your local repository, make another change and commit / push it.

In Continuum, view this latest change by returning to the Project reporting page: select Projects on the top menu, then select your project name from the list, then Version, then Changes. Click on the change to pop up its details. Above the Commit Details there should be a Flow section with a Project and Group. This is the Pipeline that ran when the Change was committed. Clicking on that bubble will take you to the Pipeline Instance page.

Viewing the Pipeline Instance

On the Pipeline Instance page, you will find a graphical representation of the Pipeline Definition from Phases to Stages to individual Steps. Click on the "Example Phase" box to expand it to show the status of each Stage and Step. In this case we only have one each so it's not much to look at. However if this was a more complex Pipeline Definition there would be several Phases and Stages each with their own status.

Next check out the Progress Log in the upper right. Here is where you should see your "hello world" log message.

Take a look at the Manifest (left side). This is where you see all the Changes, Work Items and Artifacts (in and out) that are associated with this Pipeline Instance run. We don't yet have Work Items or Artifacts so you will only see a single Change.

The Data tab displays all the raw data that is associated with the Pipeline Instance. This is an under-the-covers view of any data that is reportable for this instance. The more plugins and interfaces with third party systems, the more interesting this data is. The data document can also be used as a reference for populating variables to pass into other systems (e.g. passing the branch and commit id to Jenkins).
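For example, the same [$ ... $] substitution syntax used earlier for the Group setting can reference values from this data document. A hypothetical sketch, assuming a branch value is available in the instance data, would pass it as a parameter to a Jenkins job using the parameter JSON format covered later in this tutorial:

{"params": {"BRANCH": "[$ branch $]"}}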

Associating Change with Work Items

Work Items are representations of issues or projects in an external tracking system. They could be bug tickets, user stories, enhancements, etc. When Changes come into Continuum through the interfaces mentioned above, they can be automatically associated with Work Items, and the Work Items can be pulled into Continuum and tracked as they move through dev, test and into production.

Currently the two tracking systems that Continuum supports are VersionOne and Jira. For this tutorial we will focus on integrating with Jira.

NOTE: if your organization does not use Jira, please contact support@versionone.com and let us know what tracking solution you use. Please include the edition (if applicable) and version as well. Feel free to skip the Jira section.

Jira Plugin Setup

For Continuum to communicate with the Jira web service we must first setup the connectivity. Return to the command line of the Continuum server and edit the /etc/continuum/flow.yaml file.

Underneath the plugins: section there should be a subsection called jiraplugin. The Jira plugin needs url, user and password to be filled out (token / OAuth authentication not supported at this time). The user will need the proper permissions to be able to query the API and the projects that are necessary. The url will need to include the protocol and port if not port 80 or 443 (https).

Example:

plugins:
    jiraplugin:
        user: username
        password: userspassword
        url: https://jira.acme.com:8082

Save and exit the file.

Setup the Project to look for Work Items in Commit Messages

A best development practice is to have all source repository commits associated with a ticket in an issue or planning management system. This is usually done by putting the ticket or issue identifier in the commit message with some sort of flag or pattern that sets it apart from the rest of the log message. For Jira this is usually a three or four letter abbreviation of the project name followed by a dash and an auto-incrementing integer. In Continuum we can set up the project to parse the commit log messages as they come in and find the issue ids based on regular expressions. This will create a Work Item in Continuum and associate the Changes with it.

Return to Project Management page (right hand menu, Flow, Manage Projects) and select your Project. Go to the Source tab.

Add a new Directive. This will add it below any others in the list. We want it to be first in the list, so grab the Directive in the upper left and drag it to the top of the list.

Set the Action to Plugin Function and When to Always. In the Plugin box put jiraplugin.issue and in the Method box put identify_issues. For Args, put the following JSON string.

{"expression": "[A-Z]{4}-[0-9]{1,}","fields": ["message"]}

The expression value is the regular expression that will be used to determine if a Work Item can be found. This particular expression looks for 4 upper case letters followed by a dash and at least 1 digit. This regular expression is very flexible in that if a commit message references issues in other Jira projects it will pull those in. However, if this is not necessary in your environment, you could replace the first part of the expression with the actual Jira project abbreviation and leave the rest.
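Before relying on the expression, you can test it against a sample commit message from the command line. A quick sketch using grep (the issue key and message are made up):

# Should print ACME-123
echo "ACME-123 fix the login redirect" | grep -oE '[A-Z]{4}-[0-9]{1,}'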

Make a New Commit

Now return to your local source and commit another change. Return to the Version tab on the Project Detail page; you should now see a Work Item. Expand and click on the Work Item.

Adding in Continuous Integration

The tracking of Changes and Work Items into Projects and Pipelines is interesting but there are many other things that need to occur during a software delivery pipeline. One of the very first things that may need to be done is interacting with a continuous integration application for performing software builds.

In this example, we will setup Continuum to interact with Jenkins, trigger a job and report on the results.

Jenkins Plugin Setup

Return to the Continuum server command line and edit the /etc/continuum/flow.yaml file. Add the jenkins subsection under the existing plugins: section, as follows:

plugins:
    jenkins:
        url: http://jenkins.yourcompany.com:8080
        user: bobthebuilder
        password: password

Enter the url of the Jenkins server where you would normally log in. Sometimes the url has a /jenkins or some other path on the end. The port should be included unless it is port 80. Make sure http or https is set properly.

If the Jenkins server is set up without authentication, don't include the user and password lines in the config.
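To confirm the url and credentials before wiring the plugin into a Pipeline, you can query the Jenkins JSON API from the Continuum server. A quick curl sketch using the placeholder values from the example above (note that some Jenkins setups require an API token rather than the login password for this kind of call):

# A 200 response with JSON describing the Jenkins instance means the
# url and credentials are usable; 401 or 403 points to an authentication issue.
curl -i -u bobthebuilder:password http://jenkins.yourcompany.com:8080/api/json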

Create a Test Job

For purposes of testing and running through the tutorial without impacting a real Jenkins job, it is recommended that you create a Jenkins job that simply has an Execute Shell build step that echoes "hello world", or something along those lines.

Add Jenkins Build Job to Pipeline

Enter the Pipeline Definition editor and select the previously created Stage. Under Plugins, select Jenkins, Build. Drag it under the Log step. Enter the Jenkins job name in the Job box.

Perform another commit. You should see the Jenkins job get kicked off. With the Wait option set to true, the pipeline will wait until the Jenkins job completes.

Add Parameters to the Job

Edit the Jenkins job and add a parameter to it called "PARAM1". In the Execute Shell step on the job, change "hello world" to "hello $PARAM1".
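The Execute Shell build step on the Jenkins job would then contain something like:

echo "hello $PARAM1"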

On the Jenkins Build step in the Pipeline Editor, click Add next to Additional Parameters and put the following Json in the box:

{"params": {"PARAM1": " world"}}

Perform another commit. You should see the Jenkins job kick off and the parameter get passed to the job.

Wrapping Up

Up to this point we have a very simple, single pipeline that is interfaced with GitHub, GitLab or Subversion, Jira and Jenkins. This is only the beginning of a true continuous delivery pipeline. However, this should give you an understanding of the basics of configuring and using Continuum Flow.

Subsequent tutorials and instructions will focus on mining Pipeline Instance data for use in variables, passing commit and branch information to Jenkins, identifying artifacts and deploying to testing servers.
