A blueprint models an application stack in a specific configuration. Blueprints typically include applications and cloud services. Each blueprint is defined in a dedicated YAML file that resides in the /blueprints folder within the GitHub/BitBucket repository that is associated with the space. For details, see Setting up a Blueprints Repository.
In this article:
- Example of a blueprint YAML
- Basic Attributes
- Metadata
- Clouds
- Applications
- Services
- Artifacts
- Debugging
- Infrastructure
- Ingress
- Inputs
Example of a blueprint YAML
spec_version: 1
kind: blueprint
environmentType: sandbox
metadata:
  description: >
    A Java Spring website deployed on a Tomcat server and MySQL Aurora serverless cluster
clouds:
- AWS: eu-west-1
artifacts:
- java-spring-website: latest/colony-java-spring-sample-1.0.0-BUILD-SNAPSHOT.war
inputs:
- DB_USER: root
- DB_PASS:
    display_style: masked
    description: >
      please set the root database password
    default_value: Colony!123
- DB_NAME: demo_db
applications:
- java-spring-website:
    instances: 1
    input_values:
    - DB_USER: $DB_USER
    - DB_PASS: $DB_PASS
    - DB_NAME: $DB_NAME
    - DB_HOSTNAME: $colony.applications.rds-mysql-aurora-cluster.hostname
    depends_on:
    - rds-mysql-aurora-cluster
services:
- rds-mysql-aurora-cluster:
    input_values:
    - DB_NAME: $DB_NAME
    - DB_USER: $DB_USER
    - DB_PASS: $DB_PASS
    - CLUSTER_MIN_CAPACITY: 2
    - CLUSTER_MAX_CAPACITY: 4
    - SANDBOX_ID: $colony.environment.id
    - VPC_ID: $colony.environment.virtual_network_id
debugging:
  availability: on
The blueprint YAML consists of multiple sections that are detailed below. In order to be functional, the blueprint YAML must include the following:
- Basic YAML attributes
- Cloud definition
- At least one application defined
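As an illustration only (this is not part of Colony), the three requirements above can be expressed as a minimal structural check over a parsed blueprint, treated here as a Python dictionary:

```python
# Hypothetical helper: checks a parsed blueprint (as a Python dict) for the
# minimal sections a functional blueprint YAML must include.

def validate_blueprint(bp: dict) -> list:
    """Return a list of human-readable problems; an empty list means the
    basic structure is present."""
    problems = []
    if bp.get("spec_version") != 1:
        problems.append("spec_version must be 1")
    if bp.get("kind") != "blueprint":
        problems.append("kind must be 'blueprint'")
    if not bp.get("clouds"):
        problems.append("a cloud definition is required")
    if not bp.get("applications"):
        problems.append("at least one application must be defined")
    return problems

# Example: a minimal structurally valid blueprint.
minimal = {
    "spec_version": 1,
    "kind": "blueprint",
    "clouds": [{"aws1": "eu-west-1"}],
    "applications": [{"java-spring-website": {"instances": 1}}],
}
print(validate_blueprint(minimal))  # → []
```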
Basic Attributes
kind: blueprint
spec_version: 1
Field | Value | Description | Supports inputs? |
spec_version | 1 | (Mandatory) YAML spec version. Currently, version 1 is the latest available. | No |
kind | blueprint | (Mandatory) Options are “application”, “blueprint” or “service”. In this case, specify “blueprint”. | No |
Metadata
Here you can define the blueprint's description. The description is visible to users when they launch a sandbox and select the blueprint from the catalog.
metadata:
description: {description}
Field | Value | Description | Supports inputs? |
description | string | (Optional) This description is displayed to users when they launch a sandbox and select the blueprint from the catalog. | No |
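For example (the description text is illustrative):

```yaml
metadata:
  description: >
    A Java Spring website deployed on a Tomcat server and
    MySQL Aurora serverless cluster
```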
Clouds
In this section you define the cloud account or Kubernetes compute service in which the environment will be deployed. For the blueprint to be valid, all the applications that the blueprint uses should have a source defined for the blueprint’s cloud provider type, in the specified region.
NOTE: Currently, it is possible to specify only one cloud account or Kubernetes compute service in the blueprint. Multiple regions per cloud account are not supported.
Format:
For cloud accounts AWS and Azure:
clouds:
- {cloud_account_name}: {region_name}
For Kubernetes compute services:
clouds:
- {cloud_account_name}: {kubernetes-compute-service-name}
Examples:
clouds:
- aws1: eu-west-1
clouds:
- azure-staging: westeurope
clouds:
- aws1: kubernetes-testing
Applications
In this section of the blueprint YAML, you declare the applications that are part of this blueprint, provide information about the instances they will be deployed on, define dependencies between the applications and services, and pass input values to the applications. To learn how to specify the dependencies between services and/or applications and the dependency hierarchy, see Specifying deployment order and dependencies.
The applications are defined in their own YAML files. The YAMLs should reside in a dedicated /applications folder located in the same repository as the blueprint YAML. For more information see The Application YAML File.
applications:
- {app_name1}:
instances: {instances_number}
target: {target_instance}
input_values:
- {input_name}: {input_value}
depends_on:
- {app_or_service_name}
NOTE: It is possible to deploy an application either on its own on one or more instances (“instances”), or in a shared instance together with other applications (“target”). To learn how to map your applications to the required compute instances, see Mapping Applications to Compute Instances.
Field | Value | Description | Supports inputs? |
app name | string | (Mandatory) The name of the application. An application folder and YAML with this name should reside in the /applications folder in the blueprint YAML’s repository. For example: /applications/acme_web_server/acme_web_server.yaml | No |
instances | Numeric, >=1 | (Optional) If you want the application to be deployed on its own, specify the number of instances (one or more). | Yes |
target | string | (Optional) Provide an alias for the instance on which you want the application to be deployed. Use the same alias for other applications that will share the same instance. | No |
input_values | string | (Optional) The name and value of the application input. The value you wish to pass to the application input can be (a) a hardcoded value, (b) a value that comes from a blueprint input in the format ${input_name}, or (c) an output of another application or service, in the format $colony.applications.app_name.outputs.output_name or $colony.services.service_name.outputs.output_name. | input_name: No; input_value: Yes |
depends_on | string | (Optional) The name of the application(s) and/or service(s) this application depends on. This means that the deployment of this application will start only after all its dependencies have completed deployment and their healthcheck script has successfully run. | No |
Example:
applications:
- java-spring-website:
instances: 1
input_values:
- DB_USER: $DB_USER
- DB_PASS: $DB_PASS
- DB_NAME: $DB_NAME
- DB_HOSTNAME: $colony.applications.rds-mysql-aurora-cluster.hostname
depends_on:
- rds-mysql-aurora-cluster
Redirecting traffic to certain applications using Ingress rules
Advanced scenarios may call for a more flexible configuration of your sandbox environment’s application load balancer. For example, your sandbox environment may include multiple applications, and you may need to route traffic to specific applications based on the URL path.
Such flexibility can be added to your sandbox environment by adding Ingress rules to your blueprint. For details, see Ingress.
Services
In this section of the blueprint YAML, you declare the services that are part of this blueprint, define application and service dependencies, and pass input values to the services. Each service is defined in a dedicated service YAML file that should reside in a /services folder that is located in the same repository as the blueprint YAML. To learn how to specify the dependencies between services and/or applications and the dependency hierarchy, see Specifying deployment order and dependencies.
NOTE: Colony currently supports services based on Terraform. For more information see The Service YAML File (Modeling Cloud Services with Terraform).
services:
- {service_name}:
input_values:
- {input_name}: {input_value}
Field | Value | Description | Supports inputs? |
service name | string | (Mandatory) The name of the service. A service folder and YAML with this name should reside in the /services folder in the blueprint YAML’s repository. For example, for the blueprint to be valid, the space repository must contain at least the following structure: /services/rds_db/rds_db_service.yaml | No |
input_values | string | (Optional) The name and value of the service input. The value you wish to pass to the service input can be (a) a hardcoded value, (b) a value that comes from a blueprint input in the format ${input_name}, or (c) an output of another application or service in the format $colony.services.service_name.outputs.output_name or $colony.applications.app_name.outputs.output_name. | input_name: No; input_value: Yes |
depends_on | string | (Optional) The name of the application(s) and/or service(s) this service depends on. This means that the deployment of this service will start only after all its dependencies have completed deployment and their healthcheck script has successfully run. | No |
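Example (an excerpt from the blueprint YAML shown at the top of this article):

```yaml
services:
- rds-mysql-aurora-cluster:
    input_values:
    - DB_NAME: $DB_NAME
    - DB_USER: $DB_USER
    - DB_PASS: $DB_PASS
    - SANDBOX_ID: $colony.environment.id
    - VPC_ID: $colony.environment.virtual_network_id
```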
Artifacts
Deploying an application might require certain artifacts. For example, you might deploy a new build of your software as an artifact when testing a new version of your code.
Colony integrates with common artifact repository providers such as Azure Storage, AWS S3 and JFrog Artifactory.
Artifacts are defined in the blueprint YAML and exposed as inputs to the user who launches the blueprint. You can optionally provide a default value for the artifact’s path or keep it empty making the input mandatory for the end user to fill in. For details about defining artifacts in Colony, see Adding Artifacts to your Blueprint.
artifacts:
- {application_name}: '{artifact_path}'
Colony pulls these artifacts from your storage provider into your artifacts folder on the application’s compute instance(s).
Field | Value | Description | Supports inputs? |
application name | string | (Optional) The name of the application that requires an artifact. | No |
artifact path | string | (Mandatory) The artifact path, relative to the root of the artifact repository in the space. The artifact path is exposed as an input to the user who launches the blueprint. NOTE: To use multiple artifacts in a single application, you must compress the artifacts into a single file on your storage provider. | Yes |
Example:
artifacts:
- demoapp-server: 'demoapp-server/production/demoapp-server.tar.gz'
Exposing artifacts to the application's initialization scripts
Make sure to properly define a parameter to hold the path to the artifacts folder on your compute instance. To deploy the artifacts, access the artifacts folder in your application's initialization script using the environment variable that holds the folder path.
The following is a simple script that extracts the artifact file demoapp-server.tar.gz from the artifacts folder located at $ARTIFACTS_PATH.
cd $ARTIFACTS_PATH;
tar -xvf demoapp-server.tar.gz;
To learn more about defining and working with parameters in blueprints and applications, see Working with Parameters.
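The extraction step can also be tried locally. The following sketch simulates the artifacts folder with a temporary directory and a dummy artifact; the $ARTIFACTS_PATH value and file names here are illustrative, not values Colony provides:

```shell
#!/bin/sh
# Simulate the artifacts folder that Colony would normally populate.
# ARTIFACTS_PATH and the artifact name below are illustrative values.
ARTIFACTS_PATH="$(mktemp -d)"
STAGE="$(mktemp -d)"
echo "app binary" > "$STAGE/server.bin"
tar -czf "$ARTIFACTS_PATH/demoapp-server.tar.gz" -C "$STAGE" server.bin

# The initialization-script fragment from the article:
cd "$ARTIFACTS_PATH"
tar -xvf demoapp-server.tar.gz
```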
Debugging
To allow debugging of your application, Colony provides the option to connect to the VM(s) on which the application is deployed.
There are two connection methods:
- Bastion
- Direct access via RDP/SSH client file
Configuring this option is done in the blueprint and application YAMLs. The blueprint YAML defines the connection method to use and the application YAML is where you define the connection protocol.
NOTE: You can have both bastion and direct access defined in the environment, where some applications allow direct access and others are accessed using Bastion.
Bastion
Bastion is a component that is deployed in the sandbox cloud environment and provides a more secure connection. It is deployed on a dedicated virtual machine and therefore also incurs a cost, although it’s designed to minimize costs while maximizing your troubleshooting efficiency.
By default, the Bastion feature is enabled, and the Bastion instance is deployed and then powered off. You can turn it on and off via the Troubleshooting tab in your sandbox page. You can also control the default Bastion behavior in the blueprint YAML.
debugging:
bastion_availability: {availability_mode}
Field | Value | Description | Supports inputs? |
bastion_availability | enabled-on|enabled-off|disabled | (Mandatory) Bastion's default state in the sandbox: "enabled-on" – the Bastion instance is deployed and powered on; "enabled-off" – the Bastion instance is deployed but powered off (the default behavior); "disabled" – the Bastion instance is not deployed. | No |
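For example, to deploy the Bastion instance but keep it powered off until needed (matching the default behavior described above):

```yaml
debugging:
  bastion_availability: enabled-off
```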
Direct access
In the direct access method, the application’s virtual machine receives a public IP if the environment is internet-facing, and a private IP if the environment is internal. When the environment is internet-facing, the virtual machine is open to the internet over the appropriate port: 22 for SSH or 3389 for RDP.
To access the virtual machine with the direct access method, the sandbox end user opens a session using an SSH or RDP client, from either the command line or the UI.
debugging:
direct_access: on
Field | Values | Description | Supports inputs? |
direct_access | on|off | Enables direct access connections to the VM via an RDP or SSH client. | No |
Infrastructure
Colony supports two deployment modes for an environment – dedicated and shared. In dedicated mode, Colony creates all the cloud infrastructure such as virtual network (VPC or VNET), subnets, security groups etc. specifically for the sandbox, and when the sandbox duration ends all of this cloud infrastructure is deleted. In shared mode, the administrator specifies, at the space level, the details of an existing virtual network and subnets as well as the cloud region to use, and Colony will deploy the sandbox's infrastructure in accordance.
The above is true for sandbox (pre-production) environments. For production environments, Colony supports only the shared mode, in which all the infrastructure already exists in the cloud account and is provided to Colony in the production blueprint YAML.
The cloud infrastructure is defined in the production blueprint YAML as follows:
infrastructure:
green_host: green.lv-colony-prod.com
connectivity:
virtual_network:
id: {virtual network id}
subnets:
gateway:
- {subnet id}
management:
- {subnet id}
application:
- {subnet id}
- {subnet id}
Field | Value | Description | Supports inputs? |
virtual_network: id | string | (Mandatory) AWS – VPC ID. Azure – VNET name. | Yes |
gateway: subnet_id | string | ID of the subnet in which the application gateway is installed. | Yes |
management: subnet_id | string | ID of the subnet in which the sandbox's management infrastructure is installed. | Yes |
application: subnet_id | string | ID of the subnet(s) in which the sandbox's applications are deployed. | Yes |
In Azure, a VNET should be prepared in advance, containing 1 empty subnet for the application gateway, 1 subnet for management, and at least 1 subnet for applications. In AWS, a VPC should be prepared in advance, containing 1 subnet for management and at least 2 subnets for applications.
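For example, in AWS (all IDs below are hypothetical placeholders):

```yaml
infrastructure:
  connectivity:
    virtual_network:
      id: vpc-0a1b2c3d4e5f67890        # hypothetical VPC ID
      subnets:
        gateway:
        - subnet-0aaa111bbb222ccc3
        management:
        - subnet-0ddd444eee555fff6
        application:
        - subnet-0111aaa222bbb333c
        - subnet-0444ddd555eee666f
```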
Using an existing Application Load Balancer
For AWS deployments, it is possible to define an existing stack to be used for the environment infrastructure, which already contains an ALB. If such a stack is defined in the production blueprint YAML, the stack will not be automatically deleted when the production environment is terminated, and it can be re-used with the same ALB.
The stack is defined as follows:
infrastructure:
stack: '{stack name}'
Example:
infrastructure:
stack: 'demo-alb-internal'
Field | Value | Description | Supports inputs? |
stack | string | (Optional) Name of the existing infrastructure stack. | Yes |
Ingress
An Application Load Balancer (ALB in AWS) or Application Gateway (AG in Azure) is deployed, as part of an environment in Colony, to handle any external communication to the environment.
You can define listeners that will be configured on the ALB/AG. These listeners define the external ports of the sandbox and the rules that connect them to the sandbox's applications. A listener can be of type HTTP or HTTPS, followed by the port number to open. For each listener, you can define a list of rules that determine when the traffic is forwarded to different applications. The rules can be based on a path, a host header, or both, and are applied in the order they appear in the ingress section, from top to bottom.
This is done as follows:
ingress:
listeners:
- {listener type}: {listener port}
redirect_to_listener: {listener port}
certificate: {certificate}
rules:
- path: {path}
host: {host}
application: {application name}
port: {application port}
color: {environment color}
shortcut: {shortcut}
default: {default}
Example for sandbox environments:
ingress:
listeners:
- http: 80
redirect_to_listener: 443
- https: 443
certificate: $CERT_ARN
rules:
- path: /api/*
application: nectar-api
port: 3001
- default: true
application: nectar-web
port: 3000
Example for production environments:
ingress:
listeners:
- http: 80
redirect_to_listener: 443
- https: 443
certificate: $CERT_ARN
rules:
- path: /api/*
host: green.sometest.com
application: nectar-api
port: 3001
color: green
shortcut: green.sometest.com/api/index
- host: green.sometest.com
application: nectar-web
port: 3000
color: green
shortcut: green.sometest.com
- path: /api/*
application: nectar-api
port: 3001
color: blue
shortcut: sometest.com/api/index
- default: true
application: nectar-web
port: 3000
color: blue
shortcut: sometest.com
Field | Value | Description | Supports inputs? |
listener type | http|https | (Mandatory) When defining an https listener, you must also specify a certificate. | No |
listener port | numeric | (Mandatory) | Yes |
redirect_to_listener | Port of the target listener | (Optional) Enables redirecting inbound traffic to another listener. For example, redirecting an http request to secure port 443 is a standard pattern used by almost all websites. | No |
certificate | string | (Mandatory for https listeners) Must be present when the listener is of type HTTPS. | Yes |
path | string | (Optional) The path rule indicates that only requests for a URL that has the defined path will be routed to the specified application. A rule can include both a host and a path. | No |
host | string | (Optional) The host rule indicates that only requests with the defined host header will be routed to the specified application. A rule can include both a host and a path. | No |
application | string | (Optional) The name of the application that the traffic should be forwarded to. | No |
port | numeric | (Optional) The application port to which traffic should be forwarded. | Yes |
color | green|blue | (Applies to production environments) Determines which flavor of the application to access. Note that green rules will result in an error if the green instances are not powered on. | No |
shortcut | string | (Optional) Customizes the application link’s display text in the Summary page of the sandbox or production environment. If the blueprint does not have an ingress section, Colony creates a display link containing the sandbox’s public IP address and listener port for the application. Note that the application link is not displayed if the blueprint YAML has an ingress section but no shortcut. | Yes |
default | true|false | (Optional) If true, this rule is used as the default routing rule on the ALB when no other rule applies. If no default rule is defined and no other rule matches, an error is displayed. Do not use more than one default rule per blueprint. The default rule should appear at the bottom of the ingress section, after all other rules. | No |
Load balancers
You can also disable the creation of an ALB/AG as the entry point to the sandbox, to reduce deployment time (especially in private environments) and to reduce the cost of the environment. This is done by including:
ingress:
enabled: false
NOTE: Disabling the ALB or AG means that each application will get its own IP and open ports, and that the public health check will be disabled. The ALB/AG is disabled by default if no application in the environment exposes an external port.
Related Topics
- Disabling the Use of Load Balancers in Your Blueprint
- Using Existing Application Load Balancers (ALB) in Production Environments
- Redirecting Traffic to Certain Sandbox Applications
Inputs
The inputs section is where you declare your blueprint parameters. Parameters are defined in your blueprint and can then be passed on to applications. Any input parameter specified under the inputs section of the blueprint can be provided by the user, API or CI plugin when creating a sandbox from this blueprint.
Field | Value | Description | Supports inputs? |
input_name | string | Name of the input parameter. | No |
When declaring a parameter, you can optionally use any of the following properties:
display_style | string | To display the input value in plain text in the UI, do not assign a value. To hide the input value behind bullets (for example, a password), enter the value 'masked'. | No |
description | string | A description to be displayed to the user in the relevant UI field. | No |
default_value | string | When the sandbox is created, Colony automatically populates the default value. The end user can choose to edit the value or leave it as-is. | No |
optional | boolean | When optional is set to true, the user can leave the parameter empty. When optional is set to false, empty value(s) will result in validation error(s). | No |
The inputs are defined as follows:
spec_version: 1
kind: blueprint
inputs:
- input_name:
display_style: masked
description: please set the Apache server's port
default_value: 1234
optional: true
applications:
- apache:
instances: 1
input_values:
- port_number: $input_name
There are two ways to declare parameters: short form and long form.
Short form example:
inputs:
- apache_port: 1234
Long form example:
inputs:
- apache_port:
default_value: 1234
However, if you want to change more than one property of the parameter, you MUST use the long form (if you don't, the YAML file will not be valid). For example:
inputs:
- apache_port:
display_style: masked
default_value: 1234
NOTE: Input names must be unique, must start with an alphabetic character or an underscore ("_"), and may be followed by a string of alphanumeric characters or "_". To use the input in any of the supported fields, enter the $ sign followed by the input name.
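The naming rule above can be expressed as a regular expression. This is a sketch for illustration, not Colony's actual validator:

```python
import re

# Sketch of the input-name rule described above: an alphabetic character
# or underscore, followed by alphanumeric characters or underscores.
INPUT_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_input_name(name: str) -> bool:
    """Return True if the name satisfies the documented naming rule."""
    return bool(INPUT_NAME_RE.match(name))

print(is_valid_input_name("DB_USER"))    # → True
print(is_valid_input_name("_port"))      # → True
print(is_valid_input_name("1st_input"))  # → False
```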
The value for the input can be one of the following:
- Hardcoded value
- Value coming from a blueprint input in the format ${input_name}
- Output of another application ($colony.applications.app_name.outputs.output_name) or service ($colony.services.service_name.outputs.output_name)
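For example, all three forms can appear in one application's input_values (the application, service, and input names here are illustrative):

```yaml
applications:
- my-app:
    input_values:
    - LOG_LEVEL: debug                                  # hardcoded value
    - DB_PASS: $DB_PASS                                 # blueprint input
    - DB_HOST: $colony.services.my-db.outputs.hostname  # service output
```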
In addition, you can use the following reserved input values that are calculated by Colony during the sandbox deployment:
- $colony.applications.app_name.dns: DNS name of the application app_name in the sandbox (as decided by the cloud provider)
- $colony.environment.id: Sandbox Id of the given sandbox
- $colony.applications.app_name.outputs.output_name: Output of another application
- $colony.services.service_name.outputs.output_name: Output of another service
- $colony.environment.public_address: Public address of the sandbox (as decided by the cloud provider)
- $colony.environment.virtual_network_id: Existing virtual network id to be used by the sandbox being deployed (VPC ID in AWS or VNET name in Azure)
For example, if you want to pass an Application A's DNS name to Application B, you can set the following input parameter in Application B:
APP_A_DNS = $colony.applications.app_name.dns
And then you can access the DNS name in Application B's scripts using the input variable $APP_A_DNS.
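In blueprint YAML terms, this wiring could look like the following (application names are hypothetical):

```yaml
applications:
- app-a:
    instances: 1
- app-b:
    instances: 1
    input_values:
    - APP_A_DNS: $colony.applications.app-a.dns
    depends_on:
    - app-a
```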
NOTE: Once a parameter is defined, it can optionally be assigned a value. Assigning values to application inputs is done in the applications section of your blueprint.
To learn more about creating input parameters and assigning them with values, see Working with Parameters.