# Technology Adapter Setup
To support the reverse provisioning workflow, a technology adapter has to expose the following endpoints, which are called by the Provisioning Coordinator.

Always double-check the Technology Adapter API reference to stay up to date on technology adapter endpoints and their properties.
## Execution Endpoint

`POST /v1/reverse-provisioning`

Used to trigger a reverse provisioning operation.
| field | type | description |
|---|---|---|
| `useCaseTemplateId` | string | Component's use case template id |
| `environment` | string | Target environment |
| `params` | object (optional) | Reverse provisioning input params (coming from the Reverse Provisioning Wizard and defined in the Reverse Provisioning Template) |
| `catalogInfo` | object (optional) | Content of the current `catalog-info.yaml` of the component |
Example:

```json
{
  "useCaseTemplateId": "urn:dmb:utm:op-standard:0.0.0",
  "environment": "development",
  "params": {
    "field1": "value1",
    "field2": 2
  },
  "catalogInfo": {
    // ...
    "kind": "Component",
    "metadata": {
      "name": "my-component"
    },
    "spec": {
      // ...
    }
  }
}
```
### Responses

- `200 OK`: The response directly includes a Result object.
- `202 Accepted`: A reverse provisioning task token is returned. It can be used to poll for the task results.

Check the API reference for the complete list of expected response codes.

The token-based approach is recommended for long-running reverse provisioning operations.
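The two response modes can be handled with a simple branch. The sketch below is illustrative, not part of the API: `post_json` is a hypothetical HTTP helper returning `(status_code, body)`, and the function name is made up.

```python
def start_reverse_provisioning(post_json, request_body):
    """Trigger reverse provisioning and normalise the two response modes.

    Returns ("result", result_object) for a synchronous 200 reply, or
    ("token", token) for a 202 reply whose token must then be polled.
    `post_json` is a hypothetical HTTP helper returning (status_code, body).
    """
    status_code, body = post_json("/v1/reverse-provisioning", request_body)
    if status_code == 200:
        return ("result", body)   # body is already a Result object
    if status_code == 202:
        return ("token", body)    # body is the task token to poll with
    raise RuntimeError(f"unexpected response code {status_code}")
```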
## Status Endpoint

`POST /v1/reverse-provisioning/{token}/status`

Used to poll for the status and result of a reverse provisioning operation when the execution endpoint returns a token instead of the result itself.

Its `200` response includes a Result object. The Provisioning Coordinator will stop polling this endpoint when it receives a result object whose `status` field is different from `RUNNING`.
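The polling contract can be sketched as follows. This is an illustrative loop, not the Provisioning Coordinator's actual code; `fetch_status` is a hypothetical callable standing in for the HTTP call to the status endpoint, and the interval and attempt limit are made-up values.

```python
import time

def poll_reverse_provisioning(fetch_status, interval_seconds=2.0, max_attempts=100):
    """Poll the status endpoint until the operation leaves the RUNNING state.

    `fetch_status` is a stand-in for `POST /v1/reverse-provisioning/{token}/status`
    and must return the parsed Result object as a dict.
    """
    for _ in range(max_attempts):
        result = fetch_status()
        # Stop polling as soon as status differs from RUNNING
        # (i.e. it is COMPLETED or FAILED).
        if result.get("status") != "RUNNING":
            return result
        time.sleep(interval_seconds)
    raise TimeoutError("reverse provisioning did not finish in time")
```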
## Result Object

| field | type | description |
|---|---|---|
| `status` | enum (`RUNNING`, `COMPLETED`, `FAILED`) | Status of the reverse provisioning operation |
| `updates` | object (see section below) | Updates to be applied to the component's `catalog-info.yaml` |
| `logs` | list[Log] (optional) | Execution logs that will be displayed on the UI during the reverse provisioning operation. Each time a call is made to this endpoint, the number of returned logs can increase, but under no circumstances should these logs ever be removed (i.e. no delta logic). |
### Log object

| field | type | description |
|---|---|---|
| `timestamp` | string | ISO date (e.g. `2023-07-13T12:05:55.259Z`). Logs with a timestamp before the reverse provisioning operation start time or after the stop time will not be displayed on the UI |
| `level` | enum (`DEBUG`, `INFO`, `WARNING`, `ERROR`) | Severity level. The UI will display every kind of log (i.e. even `DEBUG` logs are delivered to the user) |
| `message` | string | Log message |
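For illustration, a possible `200` response from the status endpoint might look like the following (all field values are made up; only the shape follows the tables above):

```json
{
  "status": "COMPLETED",
  "updates": {
    "spec.mesh.description": "Updated value"
  },
  "logs": [
    {
      "timestamp": "2023-07-13T12:05:55.259Z",
      "level": "INFO",
      "message": "Reverse provisioning completed"
    }
  ]
}
```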
## Updates object

The `updates` object can express two different update cases, depending on the keys provided.

Let's consider the following sample `catalog-info.yaml` of a deployment unit component:
```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'Snowflake Output Port 2'
  annotations:
    backstage.io/techdocs-ref: dir:.
  links: []
  tags:
    - outputport
    - snowflake
spec:
  type: outputport
  lifecycle: experimental
  owner: 'user:antonio.dambrosio_agilelab.it'
  system: 'system:marketing.new-dp.0'
  domain: 'domain:marketing'
  mesh:
    name: 'Snowflake Output Port 2'
    fullyQualifiedName: 'Snowflake Output Port 2'
    description: 'Snowflake Output Port 2'
```
### Case 1. Upsert the value of an object's attribute

To update the value of `spec.mesh.description` without affecting its sibling fields in the same object, the updates object should be:

```json
{
  "spec.mesh.description": "Updated value"
}
```
The resulting `spec.mesh` property will be:

```yaml
# ...
spec:
  # ...
  mesh:
    name: 'Snowflake Output Port 2'
    fullyQualifiedName: 'Snowflake Output Port 2'
    description: 'Updated value' # updated value
```
### Case 2. Upsert an entire object

To overwrite the whole `spec.mesh` object, the updates object should be:

```json
{
  "spec.mesh": {
    "description": "Updated value"
  }
}
```
The resulting `spec.mesh` field will be:

```yaml
# ...
spec:
  # ...
  mesh: # updated object
    description: 'Updated value'
```

Note how the other fields inside `spec.mesh` have been removed.
The updates object can include multiple update directives and even mix both cases. For instance:

```json
{
  "metadata.description": "New Description",
  "spec.mesh": {
    "description": "Updated value"
  }
}
```

will result in the following `catalog-info.yaml`:

```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'New Description' # updated field
  annotations:
    backstage.io/techdocs-ref: dir:.
  links: []
  tags:
    - outputport
    - snowflake
spec:
  # ...
  mesh: # updated object
    description: 'Updated value'
```
When in doubt, keep in mind that each root-level key in the updates object is interpreted as follows: given key `field.sub-field.sub-sub-field` and value `new value`, head to the `catalog-info.yaml`, navigate to `field`, then to its subfield `sub-field`, then to its subfield `sub-sub-field`, and replace that field's value with `new value`.

On the left side of a dot (`.`) there is always an object; on the right side, one of its properties (which in turn can be an object itself and appear on the left side of another dot).
Updates are applied with upsert logic: if `field.sub-field.sub-sub-field` is not there, it will be created.

In case `field.sub-field` is not an object (for instance, it is a string), an update on `field.sub-field.sub-sub-field` will replace the previous value of `field.sub-field` with an object containing a single property `sub-sub-field`.

Arrays are treated as opaque values. Hence, an array field update will replace the whole content of the old array.
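The semantics above can be sketched in a few lines. This is an illustrative implementation of the documented behaviour, not the Provisioning Coordinator's actual code; it covers dot navigation, upsert of missing intermediates, replacement of non-object values, and opaque array/object assignment (backtick-quoted keys are ignored here for brevity).

```python
def apply_updates(catalog_info, updates):
    """Apply an updates object to a parsed catalog-info document (a dict).

    Illustrative sketch of the documented semantics, not the real implementation.
    """
    for key, value in updates.items():
        parts = key.split(".")  # dot-navigation happens on root-level keys only
        node = catalog_info
        for part in parts[:-1]:
            # Upsert logic: create intermediate objects as needed; if the
            # current value is not an object, it is replaced by one.
            if not isinstance(node.get(part), dict):
                node[part] = {}
            node = node[part]
        # Arrays and objects are assigned as opaque values: the old content
        # is replaced entirely, never merged.
        node[parts[-1]] = value
    return catalog_info
```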
Given `catalog-info.yaml`:

```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'New Description'
  links: []
  tags:
    - outputport
    - snowflake
spec:
  # ...
  mesh:
    description: 'Updated value'
```

and updates object:

```json
{
  "metadata.tags": ["new-tag-1", "new-tag-2", "new-tag-3"]
}
```

the resulting `catalog-info.yaml` will be:
```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'New Description'
  links: []
  tags: # tags array has been completely overridden
    - 'new-tag-1'
    - 'new-tag-2'
    - 'new-tag-3'
spec:
  # ...
  mesh:
    description: 'Updated value'
```
## Backtick notation

A property in a `catalog-info.yaml` might contain a dot inside its name. E.g.:

```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'Desc'
spec:
  # ...
  mesh:
    description: 'Description'
    property.with.dots: value # name with dots
```

If you need to selectively update the value of `property.with.dots` (without affecting other fields in the `mesh` object), you can reference the property by enclosing its name between backticks (`` ` ``). For instance:
```json
{
  "spec.mesh.`property.with.dots`": "New value"
}
```

will produce the following update:

```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'Desc'
spec:
  # ...
  mesh:
    description: 'Description'
    property.with.dots: New value # updated value
```
Dot-navigation happens only on the root-level keys of the updates object. For instance, if you are updating the entire `mesh` object, then `property.with.dots` doesn't have to be enclosed in backticks. E.g.:

```json
{
  "spec.mesh": { // root-level key
    "description": "Updated value",
    "property.with.dots": "New value 2", // no backticks (here it is not a root-level key)
    "anotherProp": "anotherVal"
  }
}
```
will produce:

```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'Desc'
spec:
  # ...
  mesh: # updated object
    description: Updated value
    property.with.dots: New value 2
    anotherProp: anotherVal
```
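Splitting a root-level key while honouring backtick quoting can be sketched as below. This is an illustrative tokenizer under the stated rules, not the actual parser used by the platform.

```python
import re

def split_update_key(key):
    """Split a root-level updates key on dots, honouring backtick quoting.

    Segments wrapped in backticks are treated as a single property name
    even if they contain dots.
    """
    parts = []
    # Match either a backtick-quoted segment or a run of plain characters;
    # the separating dots fall between matches and are discarded.
    for match in re.finditer(r"`([^`]*)`|([^.`]+)", key):
        parts.append(match.group(1) if match.group(1) is not None else match.group(2))
    return parts
```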
## Notes

- Updates sent by specific provisioners are NOT directly applied to the component's `catalog-info.yaml`. They will be presented to the user, who will decide whether to keep them or not.
- Key `spec.components` is considered a reserved key due to upcoming developments on the component's `catalog-info.yaml`. Updates on that key are not yet recommended as they may not behave as expected. The user will always be able to manually edit it.