Tech Adapter Setup
To support the Reverse Provisioning workflow, a tech adapter needs to expose the following endpoints, which are called by the Provisioning Coordinator.
Always double-check the Tech Adapter API reference to stay up to date on tech adapter endpoints and their properties.
Execution Endpoint
`POST /v1/reverse-provisioning`
Used to trigger a reverse provisioning operation.
Field | Type | Description |
---|---|---|
useCaseTemplateId | string | The use case template ID of the entity being reverse provisioned. |
environment | string | The target environment selected by the user (e.g., `dev`, `prod`) where the reverse provisioning operation should take place. |
params | object (optional) | Input parameters (see the Parameters section below). |
catalogInfo | object (optional) | The current contents of the entity's catalog-info.yaml file. For Skeleton Entities, this is the rendered (non-templated) version of the catalog-info file. |
Example:
```json
{
  "useCaseTemplateId": "urn:dmb:utm:op-standard:0.0.0",
  "environment": "development",
  "params": {
    "field1": "value1",
    "field2": 2
    // ...
  },
  "catalogInfo": {
    // ...
    "kind": "Component",
    "metadata": {
      "name": "my-component"
    },
    "spec": {
      // ...
    }
  }
}
```
Parameters
The `params` field in the request body includes different kinds of values.
User inputs
These are the form values provided by the user through the Reverse Provisioning Wizard. They are defined in the corresponding Reverse Provisioning Template under `spec.parameters`.
Environment-specific configuration
The `environmentSpecificConfig` key will contain the content of the `environments/<env>/configurations.yaml` file located in the entity's repository. Here, `<env>` matches the value of the `environment` field in the request body.
By default, the key name is `environmentSpecificConfig`, but this can be customized in `app-config.yaml` as follows:
```yaml
mesh:
  builder:
    scaffolder:
      reverseProvisioning:
        environmentSpecificConfigKey: customKeyName
```
Entity Descriptor
The `descriptor` key will contain the full descriptor of the data project to which the entity belongs.
By default, the key name is `descriptor`, but this too can be customized in `app-config.yaml` under `mesh.builder.scaffolder.reverseProvisioning.descriptorKey`.
Catalog Data
The `catalogData` field is included only for Skeleton Entities. If this field is not present, the entity is a Legacy Entity.
The key used for this block can be configured via `mesh.builder.scaffolder.reverseProvisioning.catalogDataKey`.
Its structure is as follows:
```json
"catalogData": {
  "type": "skeleton",
  "parameters": {
    "name": "entity-name",
    "param2": "..."
  }
}
```
- The `type` field is always set to `skeleton` to indicate the entity format.
- The `parameters` object represents the current contents of the `parameters:` section from the entity's `parameters.yaml` file.
- Skeleton Entities expect updates to be returned as changes to these parameters.
Responses
- `200 OK`: The response directly includes a Result object.
- `202 Accepted`: A reverse provisioning task token is returned. It can be used to poll for the task results.

Check the API reference for the complete list of expected response codes.
The token-based approach is recommended for long-running reverse provisioning operations.
Status Endpoint
`POST /v1/reverse-provisioning/{token}/status`
Used to poll for the status and result of a reverse provisioning operation when the execution endpoint returns a token instead of the reverse provisioning result itself.
Its 200 response includes a Result object. The Provisioning Coordinator will stop polling this endpoint when it receives a Result object whose status field is different from `RUNNING`.
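The two endpoints combine into the flow sketched below. This is an illustrative Python client, not part of the specification: the base URL, helper names, and polling interval are assumptions; only the endpoint paths and the `RUNNING` semantics come from this page.

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:8093"  # hypothetical tech adapter address


def post_json(path, payload=None):
    """POST a JSON payload to the tech adapter and decode the JSON response."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload or {}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status, json.loads(resp.read())


def poll_until_done(token, poll, interval_s=5.0):
    """Poll the status endpoint until the Result object's status leaves RUNNING.

    `poll` is a callable returning the decoded Result object for `token`,
    e.g. lambda t: post_json(f"/v1/reverse-provisioning/{t}/status")[1].
    """
    while True:
        result = poll(token)
        if result["status"] != "RUNNING":
            return result  # terminal: COMPLETED or FAILED
        time.sleep(interval_s)
```

A coordinator-style caller would first `post_json("/v1/reverse-provisioning", request_body)` and, on a 202 response, pass the returned token to `poll_until_done`.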
Result Object
Field | Type | Description |
---|---|---|
status | enum (RUNNING, COMPLETED, FAILED) | Status of the reverse provisioning operation. |
updates | object (see section below) | Updates to be applied to the entity's definition. |
logs | list[Log] (optional) | Execution logs displayed on the UI during the reverse provisioning operation. On each call to this endpoint, the number of returned logs can grow, but previously returned logs must never be removed (i.e., always return the full log history, not a delta). |
Log object
Field | Type | Description |
---|---|---|
timestamp | string | ISO date (e.g., 2023-07-13T12:05:55.259Z). Logs with a timestamp before the reverse provisioning operation's start time or after its stop time will not be displayed on the UI. |
level | enum (DEBUG, INFO, WARNING, ERROR) | Severity level. The UI displays every level (i.e., even DEBUG logs are delivered to the user). |
message | string | Log message |
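For illustration, a terminal Result object for a completed operation on a Skeleton Entity might look like the following. The fields come from the tables above; all values are hypothetical:

```json
{
  "status": "COMPLETED",
  "updates": {
    "parameters": {
      "description": "Refreshed from the target system"
    }
  },
  "logs": [
    {
      "timestamp": "2023-07-13T12:05:55.259Z",
      "level": "INFO",
      "message": "Reverse provisioning completed"
    }
  ]
}
```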
Updates object
The `updates` object defines the changes to be applied to the entity after a successful Reverse Provisioning operation. Its expected structure and behavior differ depending on whether the entity is a Skeleton or a Legacy entity.
Skeleton Entities
For Skeleton Entities, the updates object must follow this format:
```json
{
  "parameters": {
    "param1": "updated value",
    "param2": "another updated value"
  }
}
```
- The `parameters` block contains updates for the `parameters:` section of the entity's `parameters.yaml` file.
- Updates follow a merge strategy: existing keys are updated and new keys are added, while existing keys not mentioned in the update remain unchanged.
Example
Given the original `parameters.yaml`:
```yaml
parameters:
  email: original@example.com
  owner: user:admin
```
And this update from the Reverse Provisioning operation:
```json
{
  "parameters": {
    "email": "new@example.com"
  }
}
```
The resulting `parameters.yaml` would be:
```yaml
parameters:
  email: new@example.com # updated
  owner: user:admin # unchanged
```
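The merge behavior above can be sketched in a few lines of Python, assuming the YAML has already been parsed into a dict. The function name is illustrative, and nested-object behavior beyond this top-level merge is not specified on this page.

```python
def merge_skeleton_parameters(current, updates):
    """Apply a Skeleton Entity updates object to the current parameters.

    Keys present in updates["parameters"] are updated or added; all other
    existing keys are left unchanged (top-level merge, as in the example).
    """
    merged = dict(current)
    merged.update(updates.get("parameters", {}))
    return merged
```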
Legacy Entities
For Legacy Entities, updates are expressed as dot-path keys that modify fields directly in the `catalog-info.yaml` file.
The `updates` object allows you to express two different update cases, depending on the provided keys.
Let's consider the following sample `catalog-info.yaml` of a deployment unit component:
```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'Snowflake Output Port 2'
  annotations:
    backstage.io/techdocs-ref: dir:.
  links: []
  tags:
    - outputport
    - snowflake
spec:
  type: outputport
  lifecycle: experimental
  owner: 'user:antonio.dambrosio_agilelab.it'
  system: 'system:marketing.new-dp.0'
  domain: 'domain:marketing'
  mesh:
    name: 'Snowflake Output Port 2'
    fullyQualifiedName: 'Snowflake Output Port 2'
    description: 'Snowflake Output Port 2'
```
Case 1. Upsert the value of an object's attribute
To update the value of `spec.mesh.description` without affecting its sibling fields in the same object, the updates object should be:
```json
{
  "spec.mesh.description": "Updated value"
}
```
The resulting `spec.mesh` property will be:
```yaml
# ...
spec:
  # ...
  mesh:
    name: 'Snowflake Output Port 2'
    fullyQualifiedName: 'Snowflake Output Port 2'
    description: 'Updated value' # updated value
```
Case 2. Upsert an entire object
To overwrite the whole `spec.mesh` object, the updates object should be:
```json
{
  "spec.mesh": {
    "description": "Updated value"
  }
}
```
The resulting `spec.mesh` field will be:
```yaml
# ...
spec:
  # ...
  mesh: # updated object
    description: 'Updated value'
```
Note how the other fields inside `spec.mesh` have been removed.
The updates object can include multiple update directives and even mix both cases. For instance:
```json
{
  "metadata.description": "New Description",
  "spec.mesh": {
    "description": "Updated value"
  }
}
```
will result in the following `catalog-info.yaml`:
```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'New Description' # updated field
  annotations:
    backstage.io/techdocs-ref: dir:.
  links: []
  tags:
    - outputport
    - snowflake
spec:
  # ...
  mesh: # updated object
    description: 'Updated value'
```
When in doubt, keep in mind that each root-level key in the updates object is interpreted as follows:
Given key `field.sub-field.sub-sub-field` and value `new value`, head to the `catalog-info.yaml`, navigate to `field`, then to its subfield `sub-field`, then to its subfield `sub-sub-field`, and replace its value with `new value`.
On the left side of a dot (`.`) there is always an object; on the right side, one of its properties (which can in turn be an object itself and sit on the left side of another dot).
Updates are applied with upsert logic: if `field.sub-field.sub-sub-field` does not exist, it will be created.
If `field.sub-field` is not an object (for instance, it is a string), an update on `field.sub-field.sub-sub-field` will replace the previous value of `field.sub-field` with an object containing the single property `sub-sub-field`.
Arrays are treated as opaque values. Hence, an update on an array field will replace the whole content of the old array.
Given `catalog-info.yaml`:
```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'New Description'
  links: []
  tags:
    - outputport
    - snowflake
spec:
  # ...
  mesh:
    description: 'Updated value'
```
and updates object:
```json
{
  "metadata.tags": ["new-tag-1", "new-tag-2", "new-tag-3"]
}
```
the resulting `catalog-info.yaml` will be:
```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'New Description'
  links: []
  tags: # tags array has been completely overridden
    - 'new-tag-1'
    - 'new-tag-2'
    - 'new-tag-3'
spec:
  # ...
  mesh:
    description: 'Updated value'
```
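As a summary of both update cases plus the array rule, here is a minimal Python sketch of the upsert semantics, assuming the YAML has been parsed into nested dicts. It deliberately ignores backtick escaping (covered in the next section); the function name is illustrative.

```python
def apply_updates(doc, updates):
    """Apply root-level dot-path updates to a nested dict, in place."""
    for path, value in updates.items():
        parts = path.split(".")
        node = doc
        for part in parts[:-1]:
            # Upsert logic: create a missing intermediate object, or
            # replace a non-object value with an empty object.
            if not isinstance(node.get(part), dict):
                node[part] = {}
            node = node[part]
        # The leaf value is replaced wholesale; arrays are opaque values.
        node[parts[-1]] = value
    return doc
```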
Backtick notation
A property in a `catalog-info.yaml` might contain a dot inside its name, e.g.:
```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'Desc'
spec:
  # ...
  mesh:
    description: 'Description'
    property.with.dots: value # name with dots
```
If you need to selectively update the value of `property.with.dots` (without affecting other fields in the `mesh` object), you can reference the property by enclosing its name between backticks (`` ` ``). For instance:
```json
{
  "spec.mesh.`property.with.dots`": "New value"
}
```
will produce the following update:
```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'Desc'
spec:
  # ...
  mesh:
    description: 'Description'
    property.with.dots: New value # updated value
```
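One way to tokenize a root-level key while honoring the backtick escaping described above is a small regular expression. This helper is an illustrative sketch, not part of the specification:

```python
import re

# A segment is either `anything between backticks` or a run of
# characters containing neither dots nor backticks.
_SEGMENT = re.compile(r"`([^`]*)`|([^.`]+)")


def split_update_path(path):
    """Split a root-level updates key into its navigation segments."""
    return [escaped or plain for escaped, plain in _SEGMENT.findall(path)]
```

For example, `split_update_path("spec.mesh.`property.with.dots`")` yields the three segments `spec`, `mesh`, and `property.with.dots`, so the escaped name is navigated as a single property.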
Dot-navigation happens only on the root-level keys of the updates object. For instance, if you are updating the entire `mesh` object, then `property.with.dots` does not have to be enclosed in backticks. E.g.:
```json
{
  "spec.mesh": { // root-level key
    "description": "Updated value",
    "property.with.dots": "New value 2", // no backticks (here it is not a root-level key)
    "anotherProp": "anotherVal"
  }
}
```
will produce:
```yaml
# ...
metadata:
  name: marketing.new-dp.0.snowflake-output-port-2
  description: 'Desc'
spec:
  # ...
  mesh: # updated object
    description: Updated value
    property.with.dots: New value 2
    anotherProp: anotherVal
```
Compatibility
Some older tech adapters may not yet support updates targeting Skeleton Entities, which expect changes to be applied through their `parameters.yaml` file.
To maintain compatibility without modifying the tech adapter's logic, you can configure your Reverse Provisioning Template to translate updates targeting `catalog-info.yaml` paths into updates to the appropriate parameters. This allows the tech adapter to continue sending updates in the legacy format while still supporting Skeleton Entities.
Refer to the template documentation to learn how to properly configure the compatibility options.