
Continuum Glossary of Terms

This feature is available in Ultimate edition only.



Before diving into the glossary, a primer is helpful. In Continuum, everything centers on a single objective: work item centric tracking of changes on their journey from inception to publication.

Every implementation of Continuous Delivery is different, but all have the same general principles and (usually) terminology. In some cases, terms are very clear. In others, terms can be ambiguous at best, and totally dependent on context.

So, first and foremost, since tracking Work Items and their associated Changes is our goal, we must identify and record the lifecycle of both Work Items and Changes.

Continuum Automate Terms


Asset

An Asset can be a virtual machine in a cloud, a physical machine in a datacenter, or a software instance or service on one of these. For example, a Linux server, an instance of an Oracle Database, and a Web server are all examples of Assets. An Asset represents an entity on which Tasks can perform work.


Cloud

Clouds are definitions of the details of a virtual environment. Different providers may use different terminology to describe this concept: in Amazon AWS, clouds are "Regions", and cannot be changed by users. In private clouds such as Eucalyptus, Clouds are the different Eucalyptus environments you may have set up.

In a nutshell, a Cloud is the "endpoint" to which Automate or other tools will connect and interact.

Cloud Account

A set of credentials necessary to interact with a cloud provider API. For example, in AWS this is the Access Key and Secret Key.
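A Cloud Account can be modeled as a simple credential pair scoped to a provider. The sketch below is illustrative only; the class and field names are assumptions, not Continuum's actual schema.

```python
from dataclasses import dataclass

# Illustrative model only -- names are assumptions, not Continuum's schema.
@dataclass(frozen=True)
class CloudAccount:
    """Credentials needed to call a cloud provider API
    (e.g. an AWS Access Key + Secret Key pair)."""
    provider: str
    access_key: str
    secret_key: str

    def masked(self) -> str:
        """Render the account without exposing the secret."""
        return f"{self.provider}:{self.access_key} (secret hidden)"

aws = CloudAccount("aws", "AKIAEXAMPLE", "s3cr3t")
print(aws.masked())
```

Keeping the secret out of any displayed form, as `masked()` does, is the usual practice when credentials are shared across tooling.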


Codeblock

A Codeblock is a collection of Steps. In procedural programming terms, a Codeblock can be thought of as a function or method.


Command

A Command is directly tied to a Task Step. A Command defines the details of what a particular Step will do: issue a cloud API call, execute a command line statement, or perform a database query.


Messenger

The Messenger is a notification component, responsible for managing the Automate message queue.


Parameters

Parameters are variable data that can optionally be made available to a Task, and can be changed even after a Task has been approved.


Poller

The Poller is responsible for managing Task Instances: starting, monitoring, and canceling them.


REST API

The REST API component answers HTTP requests, allowing interaction with Automate via API calls or command line tools.


Scheduler

The Scheduler is responsible for reading all schedules and maintaining the proper timing and queues for each Task.


Services

Automate includes several services/processes/daemons.

Shared Credential

A set of credentials necessary to authenticate and connect to an Asset. Shared Credentials are used by more than one Asset.


Step

A Task workflow is built of one or more Steps. A Step represents a single unit of work in the flow of a Task.


Task

A Task is an automation routine that Automate executes to actually perform work on resources in the cloud or on the local network. Tasks can interact with clouds and can leverage the full APIs offered by the providers. Within the cloud or on the local network, Tasks can interact with virtually any software system: operating systems, databases, applications, etc.

Task Engine

The Task Engine is the process that performs the actual work of a Task.

Task Instance

When a Task is executed, it creates a Task Instance. This instance is the record of when a Task ran, who started it, and exactly what it did.
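The record described above can be pictured as a small data structure capturing who ran the Task, when, and what it did. This is a hypothetical shape for illustration; Continuum's actual Task Instance fields may differ.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record shape -- Continuum's actual fields may differ.
@dataclass
class TaskInstance:
    task_name: str
    started_by: str
    started_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    log: list = field(default_factory=list)  # record of exactly what the Task did

run = TaskInstance("nightly-backup", "alice")
run.log.append("connected to asset db-01")
run.log.append("dump completed")
```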


UI

The UI is the web server component that serves the User Interface.

Continuum Flow Terms


Approval

In Continuum, the journey through a pipeline is mostly automated work, but is often also subject to manual attention.

To support this, certain Plugins implement manual interaction functions. These are commonly referred to as Approvals or Manual Interactions.

The purpose of an Interaction is simple - when encountered, the Pipeline Instance will sit in a pending state, waiting for a user to either Approve or Deny work progressing to the next Stage.
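The pending-until-decided behavior can be sketched as a small state machine. All names here are illustrative assumptions, not Continuum's API.

```python
from enum import Enum

class Decision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    DENIED = "denied"

class ManualInteraction:
    """Holds a Pipeline Instance in a pending state until a user decides.
    Hypothetical sketch -- not Continuum's actual implementation."""
    def __init__(self):
        self.state = Decision.PENDING
        self.decided_by = None

    def approve(self, user: str) -> None:
        self.state = Decision.APPROVED
        self.decided_by = user

    def deny(self, user: str) -> None:
        self.state = Decision.DENIED
        self.decided_by = user

    def may_proceed(self) -> bool:
        # Work only progresses to the next Stage once approved.
        return self.state is Decision.APPROVED

gate = ManualInteraction()
assert not gate.may_proceed()   # instance waits while pending
gate.approve("release-manager")
assert gate.may_proceed()
```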


Initiate

When new code is ready to be evaluated, it is said to Initiate a new Pipeline Instance into that pipeline. There are various mechanisms for doing so.

In most cases, a new Pipeline Instance is initiated via a trigger from source control, such as a GitHub 'web hook'. Initiate can also happen from an API call, or via the command line tools.


Manifest

In Continuum, a Manifest can appear attached to many different records. The Manifest is simply the collection of Work Items and Changes affiliated with the object being viewed. So, when looking at an Instance for example, its Manifest will be the payload of changes and work items fed into that Instance.


Metrics

Pipeline Instances generate a wealth of useful Metrics. Many of these metrics are presented on the built-in dashboards and pages, and many more are in the database and available for custom reporting.

Like Plugins, Metrics are defined using an open architecture, and can be added at any time. Useful metrics contributed back to Continuum will become officially supported, maintained, and included in future versions.


Project

A Project is a grouping and reporting mechanism in Continuum. History, metrics, trending details, and logging information are all associated with and can be viewed by Project.

While Project names can be arbitrary, in most cases, customers create a Project for each source code repository containing code that will be managed.

Regarding 'Pipelines'

Pipeline is a term that can be interpreted in many different ways depending on the context.

Generally speaking, a pipeline can mean many things: a conduit through which a product (oil, gas, software, etc) flows from a source to a destination, a workflow (a path to achieve an end), or a backlog (as in a sales pipeline).

In order to help better separate these definitions and add clarity, in Continuum we commonly refer to these items more specifically as follows:

Pipeline Definition

A Pipeline Definition is the definition of a series of delivery Phases through which changes will pass. A Pipeline Definition is essentially an automation workflow.

Pipeline Group

A Pipeline Group (often simply called a Group) is a unique parent record comprised of a Project + Definition + Group identifier. A Pipeline Group is exactly that - a grouping of Instances.

When a new Pipeline Instance is created, the initiate command requires three arguments: Definition, Project and Group. In the majority of cases:

  • Definition = The Pipeline Definition
  • Project = The repository containing the code being evaluated
  • Group = A label to apply as a grouping mechanism, most often a repository branch

In most cases, our customers intuitively use the repository name as the Project, and the branch name as the Group.

Pipeline Groups are not created or managed manually - they are created when a new combination of Definition + Project + Group identifier is initiated.
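The conventional mapping above (repository name to Project, branch name to Group) can be sketched as a small helper that turns a GitHub push-style payload into the three initiate arguments. The function name is hypothetical; Continuum's real trigger handling may differ.

```python
def initiate_args_from_push(payload: dict, definition: str) -> dict:
    """Map a GitHub push-style payload to the three initiate arguments.

    Hypothetical helper, following the common convention described above:
    repository name -> Project, branch name -> Group.
    """
    project = payload["repository"]["name"]
    group = payload["ref"].removeprefix("refs/heads/")  # "refs/heads/main" -> "main"
    return {"definition": definition, "project": project, "group": group}

args = initiate_args_from_push(
    {"repository": {"name": "billing-service"}, "ref": "refs/heads/main"},
    definition="standard-delivery",
)
# The first time this Definition + Project + Group combination is
# initiated, the corresponding Pipeline Group comes into existence.
```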

Pipeline Instance

A Pipeline Instance is a record in Continuum that represents work performed on a distinct change set to a managed code base, and motivated by one or more work items. Less generically - a Pipeline Instance very often represents a single commit to a source control repository. (Whether that commit represents a single change, or is a merge containing many changes, is defined by your unique circumstances and configuration.)

A new Pipeline Instance is created when a pipeline is initiated. Rules for initiating a pipeline are again dependent on your specific configuration and workflow design.


Phase

A Phase is a collection of work a change will be subjected to as it flows through a pipeline.

A Pipeline is simply a collection of Phases. Phases are synchronous: an exception or error in any Phase stops the pipeline.

Phases in and of themselves have few properties beyond their name and association with certain metrics. Their primary purpose is to contain one or more Stages.


Plugin

In Continuum, a Plugin is where the heavy lifting occurs - it performs the actual work in a Stage.

Plugin is a commonly used term, often applied when a software provider hands something off and hopes the community will pick up the slack. Our approach is different: while we deliberately made the architecture open to encourage community contributions, all delivered plugins are maintained, supported, and included in each release of Continuum. Included plugins won't break with a new release, won't go stale, and don't need to be updated separately.

Useful plugins contributed back to Continuum will become officially supported, maintained, and included in future versions.


Stage

A Stage is a smaller unit of work organization. Where Phases are a single-threaded list of steps in a Pipeline, many Stages can be added to a single Phase and will execute in parallel.

This provides a powerful mechanism for optimization of work. For example, a 'provision' phase that provisions three separate servers can do all three in parallel, significantly cutting down on overall pipeline time.
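The sequential-Phase / parallel-Stage execution model can be sketched with a thread pool: phases run in order, the stages inside each phase run concurrently, and any stage failure halts the run. This is a minimal illustration, not Continuum's engine.

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(phases):
    """Run phases in order; within each phase, run its stages in parallel.

    Illustrative sketch only. Any stage raising an exception stops the
    pipeline, mirroring the synchronous-phase rule described above.
    """
    for stages in phases:
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(stage) for stage in stages]
            for f in futures:
                f.result()  # re-raises the first stage failure, halting the run

results = []
provision = [lambda n=n: results.append(f"provisioned server-{n}") for n in range(3)]
run_pipeline([
    [lambda: results.append("checked out source")],  # single-stage phase
    provision,                                       # three stages in parallel
])
```

Here the three 'provision' stages complete within one phase wall-clock slot rather than three, which is the optimization the text describes.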


Step

Further refining the design of a pipeline, a Stage is composed of one or more synchronous Steps. Steps have a descriptive label and conditionality clauses for execution. The actual work performed in a Step is done via a Plugin.
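A Step's three parts (label, condition, plugin call) can be sketched as follows. The representation is a hypothetical illustration, not Continuum's actual step format.

```python
# Hypothetical step representation: a label, a condition gating execution,
# and the plugin function that performs the actual work.
def run_stage(steps, context):
    """Run each step's plugin in order, skipping steps whose condition fails."""
    for label, condition, plugin in steps:
        if condition(context):
            plugin(context)

log = []
steps = [
    ("build",  lambda ctx: True,                    lambda ctx: log.append("built")),
    ("deploy", lambda ctx: ctx["branch"] == "main", lambda ctx: log.append("deployed")),
]
run_stage(steps, {"branch": "feature-x"})
# Only the unconditional "build" step ran; "deploy" was skipped off main.
```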


Submission

Whether pushed via a web hook or polled, when changes are detected in a repository, they are introduced to Continuum as a Submission. The Submission is the first look Continuum gets at a Change. Submissions are made to a specific Project, and then additional rules are evaluated to decide how to proceed with the change payload.