What is APIOps?
APIOps is the practice of applying DevOps to the Azure API Management (APIM) service. It brings your APIs into version control, and most of the operations that were once done in the Azure Portal can instead be achieved through code using CI/CD pipelines.
This way of working enables more teams to become involved in the API lifecycle, increasing collaboration between them. In addition to version control for your API development and design workflows, APIOps provides audit history, a backup of your APIM instances (through the extraction pipeline and commit history), greater governance, and a consistent APIM deployment experience.
In this blog, I’ll share insights on implementing this for a multi-environment APIOps setup, lessons learnt, tips, and examples. This blog will focus on those who have a dev/test/prod style APIM setup, and how that may look when gearing up to adopt APIOps.
I hope this will provide some valuable insights for those organisations looking to adopt APIOps. Whilst this write-up is targeted at Azure DevOps, the concept is almost identical for GitHub as well. In summary, the key benefits are:
1. Increased collaboration between teams (Developers, Platform/Infrastructure/Management Stakeholders)
2. Version control, audit history
3. APIM Backup in code
4. Standardised deployments
5. Consistent deployments
Where to get started?
Firstly, before we delve into the technical deep dive, it’s worth pointing out some details about the project.
The Azure/apiops project is open-source and is not officially supported by Microsoft. It is a best-effort project, relying on contributions. However, it is largely led and organised by Wael Kdouh, a Microsoft FTE, along with guythetechie, who is heavily involved in the project. The project can be found here:
https://github.com/Azure/apiops
The APIOps documentation is thorough and well written. There is even a YouTube video that sets the scene for what to expect and how to get started at a basic level. The setup guide will walk you through a basic configuration in Azure DevOps with a development environment.
Check it out at https://azure.github.io/apiops to get familiar with APIOps, and in particular review the supported scenarios section: Supported Scenarios | APIOps – Documentation (azure.github.io)
Extraction
If you’ve followed the setup steps from the documentation, you will be in a position where you have run the ‘APIM Extractor’ pipeline to pull all your development APIM into Git as code. It’s important to note that the artefacts being stored in Git are supposed to be from your lowest environment (i.e., Dev APIM). You do not need to extract other environments unless you wish to have a ‘backup’ for that instance—more on that later.
The structure will look similar to this once extracted:
Readiness and standardisation
The major hurdle to adopting APIOps is that you must standardise your APIM properties. By this, I mean that all named values, APIs, loggers, diagnostics, backends, etc., must have the same names across all environments.
This is because, when using APIOps for a multi-environment dev/test/prod setup, you only override the values per environment, never the names. This is by design as part of the workflow, and it may be the biggest burden if you're adopting an existing setup, especially if some less-than-ideal practices have crept in (another win for APIOps once you are done 😎).
Now that you have APIM in code, you can begin to refactor where necessary.
API Policies
Check that your API policies have no hard-coded values such as secrets (I really hope not!), application IDs, CORS URLs, etc. If they do, they will need to be moved to a named value as part of the readiness for adoption. Microsoft has good documentation on this here. Additionally, here are some examples that demonstrate the shift from a typical current state towards what you'll need to refactor for APIOps adoption:
DO THIS ✅
<policies>
    <inbound>
        <base />
        <choose>
            <when condition="@(context.Variables.GetValueOrDefault("iss", "").Equals("https://sts.windows.net/someTenantId/"))">
                <validate-azure-ad-token tenant-id="00000000-0000-0000-0000-000000000000">
                    <client-application-ids>
                        <application-id>{{api-appId}}</application-id>
                    </client-application-ids>
                </validate-azure-ad-token>
            </when>
INSTEAD OF THIS ❌
<policies>
    <inbound>
        <base />
        <choose>
            <when condition="@(context.Variables.GetValueOrDefault("iss", "").Equals("https://sts.windows.net/someTenantId/"))">
                <validate-azure-ad-token tenant-id="00000000-0000-0000-0000-000000000000">
                    <client-application-ids>
                        <application-id>bccc8b07-147f-4502-9fcf-bf2125adc4e1</application-id>
                    </client-application-ids>
                </validate-azure-ad-token>
            </when>
CORS
With cross-origin resource sharing, you can also shift these to a variable that links to a named value.
DO THIS ✅
<allowed-origins>
    <origin>@{
        string[] allowedOrigins = "{{api-allowed-origins}}"
            .Replace(" ", string.Empty)
            .Split(',');
        string requestOrigin = context.Request.Headers.GetValueOrDefault("Origin", "");
        bool isAllowed = Array.Exists(allowedOrigins, origin => origin == requestOrigin);
        return isAllowed ? requestOrigin : string.Empty;
    }</origin>
</allowed-origins>
The named value can contain multiple URLs separated by commas: https://someurl, https://anotherurl, https://localhost
INSTEAD OF THIS ❌
<allowed-origins>
    <origin>http://localhost:7890</origin>
</allowed-origins>
Backends
All backend names need to be consistent, allowing you to override the values of these in the environment YAML files that APIOps adopts for multi-environment publishing. For example, if your API backends have names like “dev-someApi-backend,” you will want to change this to “someApi-backend.”
This way, when using the overrides, you will be able to amend the Azure resource ID and Runtime URLs for the higher APIM instances in test & production, for example.
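As a rough sketch of what that override could look like for a test instance (the resource group, function app, and backend names below are illustrative, mirroring the prod example later in this post):

# configuration.test.yaml (illustrative names)
backends:
  - name: someApi-backend
    properties:
      # point the shared backend name at the test function app
      resourceId: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-rios-apim-engineer-test/providers/Microsoft.Web/sites/func-rios-engineer-test
      url: https://func-rios-engineer-test.azurewebsites.net/api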
Named values
For all sensitive or secret values, it is strongly recommended to link these to a Key Vault to fetch from as best practice. For all other non-sensitive values, they can remain as plain values that can be referenced. The named values need to be standardised and should not be environment-specific. If your APIM named values have names such as ‘dev-api-secret’ or similar, you’ll want to standardise the name (e.g. to api-secret). This way, you can override the value in the config YAML file through APIOps when publishing in higher APIM environments.
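A minimal sketch of how a standardised named value could then be overridden per environment, here linking to a Key Vault secret (the Key Vault name and secret are illustrative):

# configuration.test.yaml (illustrative names)
namedValues:
  - name: api-secret
    properties:
      displayName: api-secret
      secret: true
      keyVault:
        # fetch the environment-specific value from the test Key Vault
        secretIdentifier: "https://kv-apiops-riosengineer-test.vault.azure.net/secrets/api-secret"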
Application Insights
You will need to ensure that the loggers and diagnostics have a static name, for example 'applicationInsights' and not 'prod-appInsights-apim'. If you need to correct this, create a new folder with the new desired name, containing a loggerInformation.json, for example:
{
  "properties": {
    "loggerType": "applicationInsights",
    "credentials": {
      "instrumentationKey": "{{applicationInsights-instrumentation-key}}" // named value that links to Key Vault
    },
    "isBuffered": true,
    "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-uks-dev-apim/providers/microsoft.insights/components/app-insights-dev"
  }
}
Higher APIM instances (test/prod) override these values on deployment in the Azure DevOps pipelines, using the configuration.<env>.yaml files to set the value required for that specific APIM instance. See the loggers and diagnostics sections in the override example below.
Overrides
For each environment, you’ll need an override YAML configuration file. These files live in the root of the repository; see the structure screenshot from earlier for reference.
These files are then used to override the values (not names) between APIM instances through APIOps. You'll only need to override the properties that change in higher environments (such as resource IDs, service URLs, backends, etc.), although you can also override many other properties, values, and metadata (like descriptions) if you wish.
The YAML file should accept any APIM REST API properties, which can be found here. There is also an example override file in the APIOps repository that you can reference here. Additionally, I've included a template below to give some insight into what an override file could look like.
# configuration.prod.yaml example
#########################
## MARK: APIM Instance ##
#########################
apimServiceName: apim-uks-prod-rios
#########################
## MARK: Named Values ##
#########################
namedValues:
  - name: api-appId
    properties:
      displayName: api-appId
      secret: false
      value: 00000000-0000-0000-0000-000000000000
  - name: api-allowed-origins
    properties:
      displayName: api-allowed-origins
      secret: false
      value: https://localhost:98672, https://localhost:47836
  - name: super-secret
    properties:
      displayName: super-secret
      keyVault:
        secretIdentifier: "https://kv-apiops-riosengineer-prod.vault.azure.net/secrets/super-secret"
      secret: true
###################
## MARK: Loggers ##
###################
loggers:
  - name: applicationInsights
    properties:
      loggerType: applicationInsights
      credentials:
        instrumentationKey: "{{applicationInsights}}"
      resourceId: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-rios-apim-engineer/providers/microsoft.insights/components/app-insights-apim-prod
#######################
## MARK: Diagnostics ##
#######################
diagnostics:
  - name: applicationInsights
    properties:
      verbosity: Information
      loggerId: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-rios-apim-engineer/providers/Microsoft.ApiManagement/service/apim-uks-prod-rios/loggers/applicationInsights"
####################
## MARK: Backends ##
####################
backends:
  - name: someApi-backend
    properties:
      resourceId: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-rios-apim-engineer/providers/Microsoft.Web/sites/func-rios-engineer-prod
      url: https://func-rios-engineer-prod.azurewebsites.net/api
################
## MARK: APIs ##
################
apis:
  - name: someApi
    properties:
      serviceUrl: https://apim-uks-prod-rios.azure-api.net/api/v1
      apiVersion: v1
      apiVersionSetId: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-rios-apim-engineer/providers/Microsoft.ApiManagement/service/apim-uks-prod-rios/apiVersionSets/673hd6ha701514sa825ha1
    diagnostics:
      - name: applicationInsights
        properties:
          verbosity: Information
          loggerId: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-rios-apim-engineer/providers/Microsoft.ApiManagement/service/apim-uks-prod-rios/loggers/applicationInsights"
Azure DevOps Environments
Prepare your Azure DevOps environments so that you can point each publish stage to these for approval gates and historical job logs.
Once created, navigate into each environment and add users or groups as approvers via the ‘Approvals and checks’ tab at the top.
Read more from the Microsoft Docs here: Create and target environments – Azure Pipelines | Microsoft Learn
Azure DevOps Connections
As part of preparing for a multi-stage deployment setup in APIOps, you will need to create a service connection (preferably using workload identity federation) for each environment. Microsoft Learn has this well covered once again, clicky. Be sure to add these to your Azure DevOps Library for later use.
Publisher Pipeline
Setting up the publisher pipeline to accommodate multiple deployment environments requires some slight modification to get started. You will need to locate the publisher in tools/pipelines/run-publisher.yaml
and add additional YAML stages for each environment you have, pointing each to its own YAML configuration override file. In addition, you will need to make sure the apim-automation group in the Azure DevOps Library has the additional per-environment variables added (see more: Configure APIM tools in Azure DevOps | APIOps – Documentation).
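As a hedged sketch of how that hangs together (the per-environment variable names below follow my own convention and are not mandated by APIOps), the pipeline consumes the variable group and the stages reference the environment-specific values:

# Referencing the apim-automation variable group in the publisher pipeline
variables:
  - group: apim-automation
# Example variables held in the group (illustrative names):
#   RESOURCE_GROUP_NAME / RESOURCE_GROUP_NAME_Test / RESOURCE_GROUP_NAME_Prod
#   SERVICE_CONNECTION_NAME / SERVICE_CONNECTION_NAME_TEST / SERVICE_CONNECTION_NAME_PROD
#   APIM_NAME (dev APIM instance name)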
For example, to add a 3-stage deployment (dev/test/prod), it would look like this:
stages:
  - stage: push_changes_to_Dev_APIM
    displayName: Push changes to Dev APIM
    jobs:
      - job: push_changes_to_Dev_APIM
        displayName: Push changes to Dev APIM
        pool:
          vmImage: ubuntu-latest
        steps:
          - template: run-publisher-with-env.yaml
            parameters:
              API_MANAGEMENT_SERVICE_OUTPUT_FOLDER_PATH: ${{ parameters.API_MANAGEMENT_SERVICE_OUTPUT_FOLDER_PATH }}
              RESOURCE_GROUP_NAME: $(RESOURCE_GROUP_NAME)
              API_MANAGEMENT_SERVICE_NAME: $(APIM_NAME)
              ENVIRONMENT: "Dev"
              COMMIT_ID: ${{ parameters.COMMIT_ID }}
              SERVICE_CONNECTION_NAME_ENV: $(SERVICE_CONNECTION_NAME)
  - stage: push_changes_to_Test_APIM
    displayName: Push changes to Test APIM
    jobs:
      - deployment: push_changes_to_Test_APIM
        displayName: Push changes to Test APIM
        variables:
          # setting the testSecretValue to the test resource group name as an example
          testSecretValue: $(RESOURCE_GROUP_NAME_Test)
        pool:
          vmImage: ubuntu-latest
        # creates an environment if it doesn't exist
        environment: 'Test'
        strategy:
          # default deployment strategy, more coming...
          runOnce:
            deploy:
              steps:
                - template: run-publisher-with-env.yaml
                  parameters:
                    API_MANAGEMENT_SERVICE_OUTPUT_FOLDER_PATH: ${{ parameters.API_MANAGEMENT_SERVICE_OUTPUT_FOLDER_PATH }}
                    RESOURCE_GROUP_NAME: $(RESOURCE_GROUP_NAME_Test)
                    CONFIGURATION_YAML_PATH: $(Build.SourcesDirectory)/configuration.test.yaml
                    ENVIRONMENT: "Test"
                    COMMIT_ID: ${{ parameters.COMMIT_ID }}
                    SERVICE_CONNECTION_NAME_ENV: $(SERVICE_CONNECTION_NAME_TEST)
  - stage: push_changes_to_Prod_APIM
    displayName: Push changes to Prod APIM
    jobs:
      - deployment: push_changes_to_Prod_APIM
        displayName: Push changes to Prod APIM
        variables:
          # setting the testSecretValue to the prod resource group name as an example
          testSecretValue: $(RESOURCE_GROUP_NAME_Prod)
        pool:
          vmImage: ubuntu-latest
        # creates an environment if it doesn't exist
        environment: 'Prod'
        strategy:
          # default deployment strategy, more coming...
          runOnce:
            deploy:
              steps:
                - template: run-publisher-with-env.yaml
                  parameters:
                    API_MANAGEMENT_SERVICE_OUTPUT_FOLDER_PATH: ${{ parameters.API_MANAGEMENT_SERVICE_OUTPUT_FOLDER_PATH }}
                    RESOURCE_GROUP_NAME: $(RESOURCE_GROUP_NAME_Prod)
                    CONFIGURATION_YAML_PATH: $(Build.SourcesDirectory)/configuration.prod.yaml
                    ENVIRONMENT: "Prod"
                    COMMIT_ID: ${{ parameters.COMMIT_ID }}
                    SERVICE_CONNECTION_NAME_ENV: $(SERVICE_CONNECTION_NAME_PROD)
End state, the publisher pipeline will look something like the above, with test and production asking for approval before their stages deploy.
Spectral analysis & API Center?
There is a fantastic open-source linting tool called Spectral by Stoplight, which can be used as part of your Pull Request policy pipeline. The documentation for Spectral is brilliant, including an installation guide which can be found here. The tool runs OpenAPI v2/v3 rulesets against your API specifications to help with governance and best practices.
In my opinion, you need to align to the OpenAPI v3 spec so you can enhance the build validation on PR. This way, you can spot validation errors in your organisation's API specifications before the publisher pipeline runs and potentially fails at the APIM REST API level. See the gotchas below on why this can be problematic.
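As a hedged sketch of how this could slot into a PR build validation pipeline (the artefact path and ruleset file name are assumptions based on the default extractor output folder, so adjust them to your repository layout):

steps:
  # Install the Spectral CLI and lint every extracted API specification.
  # Assumes the default 'apimartifacts' output folder, specs stored as
  # specification.yaml, and a .spectral.yaml ruleset in the repo root.
  - script: |
      npm install -g @stoplight/spectral-cli
      spectral lint "apimartifacts/apis/**/specification.yaml" --ruleset .spectral.yaml --fail-severity warn
    displayName: 'Spectral lint API specifications'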
In addition, Microsoft has recently announced API Center in preview, which aims to help track all your organisation's APIs under one central location for discovery, reuse, and governance. This may be useful for organisations with a sprawling API estate that they need to get visibility over. I look forward to seeing how this will evolve and integrate into APIM in the near future.
Gotchas
- If your ADO / GitHub service connection has access to multiple subscriptions, publishing may fail. Make sure to have one identity per environment with access only to the necessary subscription, or modify the pipeline to set the required subscription (see the sketch after this list)
- The publisher only deploys changes from the last commit. If a build fails, you can run the pipeline in full artifact mode, which will redeploy all artifacts
- Check your higher APIM instances for any discrepancies between dev and prod, as these will need to be aligned or removed if unnecessary. All environments must match in terms of the number of versions, revisions, etc.
- Anything not explicitly in the override configuration takes its values from the Git repository artifact files. If you don't include a description change in the override, it will deploy whatever you see in the Git files. Overrides are the only way to change these when promoting in APIOps
- If you're not aligning to the OpenAPI v3 specification, you may find it difficult to spot validation errors before the pipeline tries to publish to the APIM REST API. This means you can get into situations where a deployment fails, you commit a fix, but the publisher only picks up the last commit and ignores the changes from your previous commits. By incorporating OpenAPI Spectral analysis in the PR, you can spot these earlier and prevent this problem
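For the multiple-subscription gotcha above, one option is a small step before the publisher runs that pins the Azure CLI context to the target subscription. A minimal sketch, assuming a SUBSCRIPTION_ID variable exists in your apim-automation variable group (it is not part of the standard APIOps setup):

steps:
  # Pin the Azure CLI context to the target subscription before publishing
  - task: AzureCLI@2
    displayName: 'Set target subscription'
    inputs:
      azureSubscription: $(SERVICE_CONNECTION_NAME_TEST)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: az account set --subscription "$(SUBSCRIPTION_ID)"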
Conclusion
APIOps is a fantastic tool to help teams bring DevOps processes to API Management in Azure. By ensuring your APIM configuration has consistent settings, your organisation will be in a great position to begin adopting APIOps for a multi-environment setup.
A big thank you to Wael and guythetechie, who tirelessly run the project in their own time and help maintain APIOps!
I hope the deep dive and insights provide some value and save you some pain when trying to go down this path yourself, if you got this far in the blog that is 😆
Let me know in the comments: Are you using APIOps? How have you found it?