
Load Testing API Management Through to Logic Apps

Advanced · API Management · 2026-03-14

Overview

Load testing your API Management to Logic Apps integration chain is critical before going live. This guide covers an end-to-end approach — from test planning through execution and analysis — using enterprise-grade tooling.

Microsoft Reference: Azure Load Testing documentation

Architecture Under Test

A typical enterprise integration flow through APIM to Logic Apps follows this path:

Client → API Management → Logic App (Standard) → Backend Services
           ↓                    ↓                      ↓
    Rate Limiting          Orchestration           Database / API
    Authentication         Transformation          Service Bus
    Caching                Error Handling          Storage

What to Load Test

| Component | What to Measure | Why It Matters |
| API Management Gateway | Throughput, latency, capacity % | Gateway is a shared resource — saturation affects all APIs |
| APIM Policies | Policy execution time overhead | Complex policies (JWT validation, XSLT) add measurable latency |
| Logic App Triggers | Trigger queue depth, trigger latency | HTTP triggers have concurrency limits that can throttle intake |
| Logic App Workflow | Run duration, action success rate | Long-running workflows consume resources and may hit timeouts |
| Backend Services | Response time under load | Backends are often the true bottleneck in the chain |
| End-to-End | Total request-to-response time | The metric your consumers actually experience |
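Because the end-to-end figure is roughly the sum of each hop's contribution, it helps to budget each component against your overall latency target before testing. The sketch below is illustrative only; the component names and millisecond values are assumptions, not measured figures:

```javascript
// Hypothetical latency budget: end-to-end latency is approximately the sum
// of each component's contribution, so budget each hop against the p95 target.
function latencyBudget(components, targetMs) {
  const total = Object.values(components).reduce((a, b) => a + b, 0);
  return { totalMs: total, withinTarget: total <= targetMs };
}

// Illustrative numbers only; replace with measured values from a baseline run.
const budget = latencyBudget(
  { apimPolicies: 50, logicAppTrigger: 100, workflowActions: 800, backend: 600 },
  2000 // p95 target in milliseconds
);
console.log(budget); // { totalMs: 1550, withinTarget: true }
```

If the measured components already exceed the target at low load, no amount of scaling will fix the chain; the budget has to be renegotiated or a component redesigned.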

Test Planning

Define Test Scenarios

Before writing any scripts, define your scenarios based on production usage patterns:

| Scenario | Description | Target RPS | Duration |
| Baseline | Normal business hours traffic | 50 | 10 min |
| Peak Load | Expected peak (e.g. month-end batch) | 200 | 15 min |
| Stress Test | Beyond expected peak to find limits | 500+ (ramp) | 20 min |
| Soak Test | Sustained load to detect memory leaks | 100 | 2–4 hours |
| Spike Test | Sudden traffic burst | 0→300→0 | 5 min |
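A quick sizing pass over these scenarios (total requests = RPS × duration) tells you how much test data, quota headroom, and log storage a run will consume. A minimal sketch, using the table's numbers:

```javascript
// Rough sizing: total requests = RPS x duration in seconds. Useful for
// checking quota headroom and test-data volume before a run.
function totalRequests(rps, durationSeconds) {
  return rps * durationSeconds;
}

const scenarios = [
  { name: 'Baseline', rps: 50, seconds: 600 },
  { name: 'Peak Load', rps: 200, seconds: 900 },
  { name: 'Soak Test', rps: 100, seconds: 3 * 3600 }, // 3-hour soak
];
for (const s of scenarios) {
  console.log(`${s.name}: ~${totalRequests(s.rps, s.seconds)} requests`);
}
```

A three-hour soak at 100 RPS generates over a million requests, which is worth knowing before it lands in your Application Insights bill.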

Establish Acceptance Criteria

Define pass/fail criteria before testing — not after:

| Metric | Target | Failure Threshold |
| p95 Response Time | < 2 seconds | > 5 seconds |
| p99 Response Time | < 5 seconds | > 10 seconds |
| Error Rate | < 1% | > 5% |
| APIM Capacity | < 70% | > 85% |
| Logic App Success Rate | > 99% | < 95% |
| Throughput | Meets target RPS | < 80% of target |
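Criteria like these are easiest to enforce when encoded as an automated gate. The sketch below shows one possible shape: each metric gets a target predicate and a failure predicate, and anything between the two is a warning. Metric names and sample values are assumptions for illustration:

```javascript
// A minimal pass/warn/fail gate over the acceptance criteria above.
// Values between the target and the failure threshold classify as WARN.
const criteria = {
  p95Ms:        { target: (v) => v < 2000, fail: (v) => v > 5000 },
  p99Ms:        { target: (v) => v < 5000, fail: (v) => v > 10000 },
  errorRatePct: { target: (v) => v < 1,    fail: (v) => v > 5 },
  capacityPct:  { target: (v) => v < 70,   fail: (v) => v > 85 },
};

function evaluate(results) {
  return Object.entries(results).map(([metric, value]) => {
    const c = criteria[metric];
    const status = c.fail(value) ? 'FAIL' : c.target(value) ? 'PASS' : 'WARN';
    return { metric, value, status };
  });
}

// Illustrative results: p99 lands in the warning band, capacity fails outright.
const report = evaluate({ p95Ms: 1800, p99Ms: 6200, errorRatePct: 0.4, capacityPct: 90 });
console.log(report);
```

Wiring such a gate into the pipeline (see the CI/CD section later) turns "we reviewed the results" into an enforced release criterion.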

Azure Load Testing (Recommended)

Azure Load Testing is the recommended approach for enterprise scenarios as it integrates natively with Azure Monitor, supports JMeter scripts, and can run from multiple Azure regions.

Infrastructure Setup

@description('Azure Load Testing resource')
resource loadTest 'Microsoft.LoadTestService/loadTests@2022-12-01' = {
  name: 'alt-apim-loadtest-${environment}'
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {}
}

@description('Grant Load Testing access to read APIM metrics')
resource metricsReaderRole 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(loadTest.id, apim.id, 'monitoring-reader')
  scope: apim
  properties: {
    roleDefinitionId: subscriptionResourceId(
      'Microsoft.Authorization/roleDefinitions',
      '43d0d8ad-25c7-4714-9337-8ba259a9fe05' // Monitoring Reader
    )
    principalId: loadTest.identity.principalId
    principalType: 'ServicePrincipal'
  }
}

Server-Side Metrics Configuration

Configure Azure Load Testing to collect APIM and Logic App metrics during the test run:

| Resource | Metric | Aggregation |
| API Management | Requests | Total |
| API Management | Failed Requests | Total |
| API Management | Duration | Average, p95 |
| API Management | Capacity | Average |
| API Management | Backend Request Duration | Average, p95 |
| Logic App (Standard) | Workflow Runs Started | Total |
| Logic App (Standard) | Workflow Runs Completed | Total |
| Logic App (Standard) | Workflow Runs Failed | Total |
| Logic App (Standard) | Workflow Run Duration | Average |
| Logic App (Standard) | Action Latency | Average |
| Application Insights | Server Response Time | Average |
| Application Insights | Failed Requests | Total |

JMeter Test Plan

The following JMeter test plan targets an APIM endpoint that fronts a Logic App. Save this as apim-logic-apps-load-test.jmx:

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="5.0" jmeter="5.6.3">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="APIM to Logic Apps Load Test">
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments">
        <collectionProp name="Arguments.arguments">
          <elementProp name="APIM_HOST" elementType="Argument">
            <stringProp name="Argument.name">APIM_HOST</stringProp>
            <stringProp name="Argument.value">${__P(apim_host,your-apim.azure-api.net)}</stringProp>
          </elementProp>
          <elementProp name="SUBSCRIPTION_KEY" elementType="Argument">
            <stringProp name="Argument.name">SUBSCRIPTION_KEY</stringProp>
            <stringProp name="Argument.value">${__P(subscription_key,)}</stringProp>
          </elementProp>
          <elementProp name="API_PATH" elementType="Argument">
            <stringProp name="Argument.name">API_PATH</stringProp>
            <stringProp name="Argument.value">${__P(api_path,/api/v1/orders)}</stringProp>
          </elementProp>
        </collectionProp>
      </elementProp>
    </TestPlan>
    <hashTree>

      <!-- Baseline Scenario: Steady load -->
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup"
                   testname="Baseline - Steady State">
        <intProp name="ThreadGroup.num_threads">50</intProp>
        <intProp name="ThreadGroup.ramp_time">60</intProp>
        <boolProp name="ThreadGroup.same_user_on_next_iteration">false</boolProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
          <intProp name="LoopController.loops">-1</intProp>
        </elementProp>
        <boolProp name="ThreadGroup.scheduler">true</boolProp>
        <stringProp name="ThreadGroup.duration">600</stringProp>
      </ThreadGroup>
      <hashTree>

        <!-- CSV Data Set for test payloads -->
        <CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet"
                    testname="Order Test Data">
          <stringProp name="filename">test-data/orders.csv</stringProp>
          <stringProp name="variableNames">orderId,customerName,product,quantity,region</stringProp>
          <stringProp name="delimiter">,</stringProp>
          <boolProp name="recycle">true</boolProp>
          <boolProp name="stopThread">false</boolProp>
          <stringProp name="shareMode">shareMode.all</stringProp>
        </CSVDataSet>

        <!-- POST request to APIM -->
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                          testname="POST Order via APIM">
          <stringProp name="HTTPSampler.domain">${APIM_HOST}</stringProp>
          <intProp name="HTTPSampler.port">443</intProp>
          <stringProp name="HTTPSampler.protocol">https</stringProp>
          <stringProp name="HTTPSampler.path">${API_PATH}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.postBodyRaw">true</boolProp>
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments">
              <elementProp name="" elementType="HTTPArgument">
                <stringProp name="Argument.value">{
  "orderId": "${orderId}",
  "customerName": "${customerName}",
  "product": "${product}",
  "quantity": ${quantity},
  "region": "${region}",
  "timestamp": "${__time(yyyy-MM-dd'T'HH:mm:ss.SSS'Z',)}",
  "correlationId": "${__UUID()}"
}</stringProp>
              </elementProp>
            </collectionProp>
          </elementProp>
        </HTTPSamplerProxy>
        <hashTree>
          <!-- Request Headers -->
          <HeaderManager guiclass="HeaderPanel" testclass="HeaderManager"
                         testname="HTTP Headers">
            <collectionProp name="HeaderManager.headers">
              <elementProp name="Content-Type" elementType="Header">
                <stringProp name="Header.name">Content-Type</stringProp>
                <stringProp name="Header.value">application/json</stringProp>
              </elementProp>
              <elementProp name="Ocp-Apim-Subscription-Key" elementType="Header">
                <stringProp name="Header.name">Ocp-Apim-Subscription-Key</stringProp>
                <stringProp name="Header.value">${SUBSCRIPTION_KEY}</stringProp>
              </elementProp>
              <elementProp name="X-Correlation-Id" elementType="Header">
                <stringProp name="Header.name">X-Correlation-Id</stringProp>
                <stringProp name="Header.value">${__UUID()}</stringProp>
              </elementProp>
            </collectionProp>
          </HeaderManager>

          <!-- Response Assertions -->
          <ResponseAssertion guiclass="AssertionGui" testclass="ResponseAssertion"
                             testname="Assert 2xx Response">
            <collectionProp name="Asserion.test_strings">
              <stringProp name="0">2\d\d</stringProp>
            </collectionProp>
            <stringProp name="Assertion.test_field">Assertion.response_code</stringProp>
            <boolProp name="Assertion.assume_success">false</boolProp>
            <intProp name="Assertion.test_type">1</intProp>
          </ResponseAssertion>
        </hashTree>

        <!-- GET request to verify processing -->
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                          testname="GET Order Status via APIM">
          <stringProp name="HTTPSampler.domain">${APIM_HOST}</stringProp>
          <intProp name="HTTPSampler.port">443</intProp>
          <stringProp name="HTTPSampler.protocol">https</stringProp>
          <stringProp name="HTTPSampler.path">${API_PATH}/${orderId}/status</stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
        </HTTPSamplerProxy>
        <hashTree>
          <HeaderManager guiclass="HeaderPanel" testclass="HeaderManager"
                         testname="HTTP Headers">
            <collectionProp name="HeaderManager.headers">
              <elementProp name="Ocp-Apim-Subscription-Key" elementType="Header">
                <stringProp name="Header.name">Ocp-Apim-Subscription-Key</stringProp>
                <stringProp name="Header.value">${SUBSCRIPTION_KEY}</stringProp>
              </elementProp>
            </collectionProp>
          </HeaderManager>
        </hashTree>

        <!-- Think time between requests -->
        <ConstantTimer guiclass="ConstantTimerGui" testclass="ConstantTimer"
                       testname="Think Time">
          <stringProp name="ConstantTimer.delay">1000</stringProp>
        </ConstantTimer>

      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>

Test Data File

Create a CSV file at test-data/orders.csv with realistic test data:

orderId,customerName,product,quantity,region
ORD-10001,Contoso Ltd,Widget-A,25,UK-South
ORD-10002,Fabrikam Inc,Widget-B,10,UK-West
ORD-10003,Northwind Traders,Service-Plan-Pro,1,North-Europe
ORD-10004,Adventure Works,Widget-A,100,UK-South
ORD-10005,Woodgrove Bank,API-License-Enterprise,5,West-Europe
ORD-10006,Tailspin Toys,Widget-C,50,UK-South
ORD-10007,Alpine Ski House,Service-Plan-Basic,3,North-Europe
ORD-10008,Consolidated Messenger,Widget-B,75,UK-West
ORD-10009,Datum Corporation,Widget-A,30,West-Europe
ORD-10010,Litware Inc,API-License-Standard,2,UK-South
ORD-10011,Proseware Inc,Service-Plan-Pro,1,North-Europe
ORD-10012,VanArsdel Ltd,Widget-C,20,UK-West
ORD-10013,Trey Research,Widget-A,15,West-Europe
ORD-10014,Wide World Importers,Widget-B,200,UK-South
ORD-10015,Wingtip Toys,Service-Plan-Enterprise,1,North-Europe
ORD-10016,A Datum Corporation,Widget-A,45,UK-West
ORD-10017,Blue Yonder Airlines,API-License-Enterprise,10,West-Europe
ORD-10018,City Power and Light,Widget-C,60,UK-South
ORD-10019,Coho Vineyard,Widget-B,8,North-Europe
ORD-10020,Fourth Coffee,Service-Plan-Basic,2,UK-South

k6 Test Script

For teams who prefer a code-first approach, k6 provides excellent scripting flexibility. Save this as apim-logic-apps-load-test.js:

import http from 'k6/http';
import { check, sleep, group } from 'k6';
import { Rate, Trend, Counter } from 'k6/metrics';
import { SharedArray } from 'k6/data';

// Custom metrics
const errorRate = new Rate('error_rate');
const orderCreationDuration = new Trend('order_creation_duration', true);
const orderStatusDuration = new Trend('order_status_duration', true);
const endToEndDuration = new Trend('end_to_end_duration', true);
const successfulOrders = new Counter('successful_orders');

// Load test data from CSV
const orders = new SharedArray('orders', function () {
  return [
    { orderId: 'ORD-10001', customerName: 'Contoso Ltd', product: 'Widget-A', quantity: 25, region: 'UK-South' },
    { orderId: 'ORD-10002', customerName: 'Fabrikam Inc', product: 'Widget-B', quantity: 10, region: 'UK-West' },
    { orderId: 'ORD-10003', customerName: 'Northwind Traders', product: 'Service-Plan-Pro', quantity: 1, region: 'North-Europe' },
    { orderId: 'ORD-10004', customerName: 'Adventure Works', product: 'Widget-A', quantity: 100, region: 'UK-South' },
    { orderId: 'ORD-10005', customerName: 'Woodgrove Bank', product: 'API-License-Enterprise', quantity: 5, region: 'West-Europe' },
    { orderId: 'ORD-10006', customerName: 'Tailspin Toys', product: 'Widget-C', quantity: 50, region: 'UK-South' },
    { orderId: 'ORD-10007', customerName: 'Alpine Ski House', product: 'Service-Plan-Basic', quantity: 3, region: 'North-Europe' },
    { orderId: 'ORD-10008', customerName: 'Consolidated Messenger', product: 'Widget-B', quantity: 75, region: 'UK-West' },
    { orderId: 'ORD-10009', customerName: 'Datum Corporation', product: 'Widget-A', quantity: 30, region: 'West-Europe' },
    { orderId: 'ORD-10010', customerName: 'Litware Inc', product: 'API-License-Standard', quantity: 2, region: 'UK-South' },
  ];
});

// Configuration — override with environment variables
const APIM_HOST = __ENV.APIM_HOST || 'your-apim.azure-api.net';
const SUBSCRIPTION_KEY = __ENV.SUBSCRIPTION_KEY || '';
const API_PATH = __ENV.API_PATH || '/api/v1/orders';

// Test scenarios
export const options = {
  scenarios: {
    // Scenario 1: Baseline steady-state
    baseline: {
      executor: 'constant-arrival-rate',
      rate: 50,
      timeUnit: '1s',
      duration: '10m',
      preAllocatedVUs: 100,
      maxVUs: 200,
      startTime: '0s',
      tags: { scenario: 'baseline' },
    },
    // Scenario 2: Ramp to peak load
    peak_load: {
      executor: 'ramping-arrival-rate',
      startRate: 50,
      timeUnit: '1s',
      stages: [
        { duration: '2m', target: 50 },
        { duration: '5m', target: 200 },
        { duration: '5m', target: 200 },
        { duration: '3m', target: 50 },
      ],
      preAllocatedVUs: 300,
      maxVUs: 500,
      startTime: '12m',
      tags: { scenario: 'peak_load' },
    },
    // Scenario 3: Spike test
    spike: {
      executor: 'ramping-arrival-rate',
      startRate: 10,
      timeUnit: '1s',
      stages: [
        { duration: '30s', target: 10 },
        { duration: '10s', target: 300 },
        { duration: '1m', target: 300 },
        { duration: '10s', target: 10 },
        { duration: '1m', target: 10 },
      ],
      preAllocatedVUs: 400,
      maxVUs: 600,
      startTime: '30m',
      tags: { scenario: 'spike' },
    },
  },
  thresholds: {
    http_req_failed: ['rate<0.05'],              // < 5% errors
    http_req_duration: ['p(95)<5000'],            // p95 < 5s
    order_creation_duration: ['p(95)<3000'],      // p95 < 3s for order creation
    order_status_duration: ['p(95)<1000'],        // p95 < 1s for status check
    end_to_end_duration: ['p(95)<5000'],          // p95 < 5s end-to-end
    error_rate: ['rate<0.05'],                    // < 5% error rate
  },
};

export default function () {
  const order = orders[Math.floor(Math.random() * orders.length)];
  const correlationId = `lt-${Date.now()}-${Math.random().toString(36).substring(7)}`;
  const uniqueOrderId = `${order.orderId}-${Date.now()}`;

  const headers = {
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': SUBSCRIPTION_KEY,
    'X-Correlation-Id': correlationId,
  };

  const startTime = Date.now();

  group('Order Processing Flow', function () {
    // Step 1: POST order through APIM to Logic App
    const payload = JSON.stringify({
      orderId: uniqueOrderId,
      customerName: order.customerName,
      product: order.product,
      quantity: order.quantity,
      region: order.region,
      timestamp: new Date().toISOString(),
      correlationId: correlationId,
    });

    const createRes = http.post(
      `https://${APIM_HOST}${API_PATH}`,
      payload,
      { headers: headers, tags: { operation: 'create_order' } }
    );

    orderCreationDuration.add(createRes.timings.duration);

    const createSuccess = check(createRes, {
      'POST returns 200 or 202': (r) => r.status === 200 || r.status === 202,
      'POST response has body': (r) => r.body && r.body.length > 0,
      'POST response time < 3s': (r) => r.timings.duration < 3000,
    });

    errorRate.add(!createSuccess);
    if (createSuccess) {
      successfulOrders.add(1);
    }

    // Brief pause to allow Logic App processing
    sleep(1);

    // Step 2: GET order status to verify processing
    const statusRes = http.get(
      `https://${APIM_HOST}${API_PATH}/${uniqueOrderId}/status`,
      { headers: headers, tags: { operation: 'get_status' } }
    );

    orderStatusDuration.add(statusRes.timings.duration);

    check(statusRes, {
      'GET returns 200': (r) => r.status === 200,
      'GET response time < 1s': (r) => r.timings.duration < 1000,
    });
  });

  endToEndDuration.add(Date.now() - startTime);

  // Think time — simulate real user behaviour
  sleep(Math.random() * 2 + 0.5);
}

// Write the full metrics snapshot to disk for post-run analysis.
// Note: defining handleSummary replaces k6's default end-of-test console
// summary, so return a stdout entry if you still want terminal output.
export function handleSummary(data) {
  return {
    'load-test-results.json': JSON.stringify(data, null, 2),
    stdout: 'Load test complete: metrics written to load-test-results.json\n',
  };
}
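The load-test-results.json file written by handleSummary can then be post-processed outside k6, for example in a Node script that feeds a pipeline gate. The metric paths below assume k6's standard summary shape (`metrics -> <name> -> values`); the sample object is illustrative, not real output:

```javascript
// Extract the headline numbers from a k6 end-of-test summary JSON file.
// Assumes k6's documented summary structure: metrics -> values -> statistic.
function extractKeyMetrics(summary) {
  const v = (name, stat) => summary.metrics?.[name]?.values?.[stat];
  return {
    p95Ms: v('http_req_duration', 'p(95)'),
    errorRate: v('http_req_failed', 'rate'),
    successfulOrders: v('successful_orders', 'count'),
  };
}

// Minimal example object in the shape k6 emits (values are made up):
const sample = {
  metrics: {
    http_req_duration: { values: { 'p(95)': 1423.7 } },
    http_req_failed: { values: { rate: 0.008 } },
    successful_orders: { values: { count: 29710 } },
  },
};
console.log(extractKeyMetrics(sample));
```

In a real pipeline you would read the file with `fs.readFileSync` and `JSON.parse` it, then compare the extracted values against your acceptance criteria.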

Azure DevOps Pipeline Integration

Automate load testing in your CI/CD pipeline:

trigger: none

pool:
  vmImage: 'ubuntu-latest'

parameters:
  - name: environment
    displayName: 'Target Environment'
    type: string
    default: 'staging'
    values:
      - staging
      - uat
      - production
  - name: testScenario
    displayName: 'Test Scenario'
    type: string
    default: 'baseline'
    values:
      - baseline
      - peak_load
      - spike
      - soak

variables:
  - group: 'LoadTest-${{ parameters.environment }}'
  - name: loadTestResourceGroup
    value: 'rg-loadtesting-${{ parameters.environment }}'
  - name: loadTestResource
    value: 'alt-apim-loadtest-${{ parameters.environment }}'

stages:
  - stage: LoadTest
    displayName: 'Run Load Test'
    jobs:
      - job: RunLoadTest
        displayName: 'Execute Load Test'
        timeoutInMinutes: 120
        steps:
          - task: AzureLoadTest@1
            displayName: 'Run APIM Load Test'
            inputs:
              azureSubscription: 'Azure-ServiceConnection'
              loadTestConfigFile: 'load-tests/config.yaml'
              loadTestResource: $(loadTestResource)
              resourceGroup: $(loadTestResourceGroup)
              env: |
                [
                  { "name": "apim_host", "value": "$(APIM_HOST)" },
                  { "name": "subscription_key", "value": "$(APIM_SUBSCRIPTION_KEY)" },
                  { "name": "api_path", "value": "$(API_PATH)" }
                ]

          - task: PublishTestResults@2
            displayName: 'Publish Load Test Results'
            condition: always()
            inputs:
              testResultsFormat: 'JUnit'
              testResultsFiles: '**/results/*.xml'
              testRunTitle: 'Load Test - ${{ parameters.environment }} - ${{ parameters.testScenario }}'

          - script: |
              echo "## Load Test Summary" >> $(Build.SourcesDirectory)/test-summary.md
              echo "- Environment: ${{ parameters.environment }}" >> $(Build.SourcesDirectory)/test-summary.md
              echo "- Scenario: ${{ parameters.testScenario }}" >> $(Build.SourcesDirectory)/test-summary.md
              echo "- Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)" >> $(Build.SourcesDirectory)/test-summary.md
            displayName: 'Generate Test Summary'
            condition: always()

          - task: PublishBuildArtifacts@1
            displayName: 'Publish Test Artifacts'
            condition: always()
            inputs:
              PathtoPublish: '$(Build.SourcesDirectory)/load-tests/results'
              ArtifactName: 'LoadTestResults'

Azure Load Testing Configuration File

Save as load-tests/config.yaml:

version: v0.1
testId: apim-logic-apps-e2e
testName: APIM to Logic Apps End-to-End Load Test
testPlan: apim-logic-apps-load-test.jmx
description: Enterprise load test covering API Management gateway through to Logic App workflows
engineInstances: 5
configurationFiles:
  - test-data/orders.csv
failureCriteria:
  - avg(response_time_ms) > 3000
  - percentage(error) > 5
  - p95(response_time_ms) > 5000
autoStop:
  errorPercentage: 80
  timeWindow: 60

Logic App Concurrency and Throttling

Understanding Logic App Limits

Before load testing, understand the limits you are testing against:

| Limit | Consumption | Standard (WS1) | Standard (WS2) | Standard (WS3) |
| Concurrent trigger runs | 100 | Based on host config | Based on host config | Based on host config |
| Concurrent actions | 100 | 500 | 500 | 500 |
| Runs per 5 minutes | 100,000 | Unlimited | Unlimited | Unlimited |
| HTTP inbound requests/min | 6,000 | Based on App Service plan | Based on App Service plan | Based on App Service plan |
| HTTP outbound requests/min | 6,000 | Based on App Service plan | Based on App Service plan | Based on App Service plan |
| Run duration (max) | 90 days | 90 days | 90 days | 90 days |
| Action execution timeout | 120 sec | 120 sec | 120 sec | 120 sec |

Microsoft Reference: Logic Apps limits and configuration
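Little's law gives a quick way to check target load against these concurrency limits: the number of in-flight runs is approximately the arrival rate multiplied by the average run duration. A sketch with illustrative durations:

```javascript
// Little's law: concurrent runs ~ arrival rate (RPS) x average run duration (s).
// Use it to sanity-check a target RPS against the trigger concurrency limit.
function requiredConcurrency(targetRps, avgRunSeconds) {
  return Math.ceil(targetRps * avgRunSeconds);
}

// 200 RPS with workflows averaging 2s needs ~400 concurrent runs, which is
// well above the Consumption plan's 100 concurrent trigger runs.
console.log(requiredConcurrency(200, 2)); // 400

// 50 RPS with 1.5s workflows needs ~75 concurrent runs, which fits.
console.log(requiredConcurrency(50, 1.5)); // 75
```

If the required concurrency exceeds the limit, either the workflow must get faster, the trigger concurrency must be raised (Standard), or requests will queue and trigger latency will climb.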

Configure Concurrency for Load Testing

For Standard Logic Apps, configure the host.json to handle the target load:

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.FlowRunRetryableActionJobCallback.MaximumRetries": "3",
        "Runtime.Backend.FlowRunTimeout": "00:15:00"
      }
    }
  },
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}

For individual workflow triggers, set concurrency in the trigger definition:

{
  "triggers": {
    "When_a_HTTP_request_is_received": {
      "type": "Request",
      "kind": "Http",
      "operationOptions": "DisableAsyncPattern",
      "runtimeConfiguration": {
        "concurrency": {
          "runs": 100
        }
      }
    }
  }
}

APIM Rate Limiting for Load Tests

Temporary Policy Adjustment

During load testing, you may need to relax rate limits. Use a dedicated APIM product for load testing:

<!-- Load Testing Product Policy -->
<policies>
    <inbound>
        <base />
        <!-- Rate limits sized for load testing: calls must cover at least
             target RPS x renewal period, or APIM throttles the test itself -->
        <rate-limit-by-key
            calls="30000"
            renewal-period="60"
            counter-key="@(context.Subscription.Id)" />
        <quota-by-key
            calls="2000000"
            renewal-period="3600"
            counter-key="@(context.Subscription.Id)" />
        <!-- Tag requests for identification in logs -->
        <set-header name="X-Load-Test" exists-action="override">
            <value>true</value>
        </set-header>
        <set-header name="X-Test-Timestamp" exists-action="override">
            <value>@(DateTime.UtcNow.ToString("o"))</value>
        </set-header>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
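The rate-limit values need sizing against the scenarios you defined earlier: the calls allowance must be at least the target RPS multiplied by the renewal period, otherwise the test measures APIM throttling rather than the downstream chain. A quick check:

```javascript
// Minimum rate-limit-by-key "calls" value needed to sustain a target RPS
// for a given renewal period: calls >= targetRps x renewalPeriodSeconds.
function minCallsForTarget(targetRps, renewalPeriodSeconds) {
  return targetRps * renewalPeriodSeconds;
}

console.log(minCallsForTarget(200, 60)); // peak scenario: 12000 calls per 60s
console.log(minCallsForTarget(500, 60)); // stress scenario: 30000 calls per 60s
```

The same arithmetic applies to quota-by-key over its longer renewal period; a peak-load run of 200 RPS for 15 minutes alone consumes 180,000 calls.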

Identifying Load Test Traffic

Add a diagnostic policy to separate load test traffic in analytics:

<inbound>
    <choose>
        <when condition="@(context.Request.Headers.GetValueOrDefault(&quot;X-Load-Test&quot;, &quot;&quot;) == &quot;true&quot;)">
            <set-variable name="isLoadTest" value="true" />
            <trace source="load-test" severity="information">
                <message>Load test request received</message>
                <metadata name="correlationId"
                          value="@(context.Request.Headers.GetValueOrDefault(&quot;X-Correlation-Id&quot;, &quot;&quot;))" />
            </trace>
        </when>
    </choose>
</inbound>

Monitoring During Tests

KQL Queries for Real-Time Analysis

Run these queries in Log Analytics during load test execution:

Request Throughput (Live)

ApiManagementGatewayLogs
| where TimeGenerated > ago(15m)
| summarize RequestCount = count(),
            ErrorCount = countif(ResponseCode >= 400),
            AvgDuration = avg(TotalTime),
            P95Duration = percentile(TotalTime, 95)
  by bin(TimeGenerated, 1m)
| render timechart

Logic App Run Performance

// Monitor Logic App execution during load test
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
  or ResourceProvider == "MICROSOFT.WEB"
| where TimeGenerated > ago(15m)
| where OperationName has "workflow"
| summarize RunCount = count(),
            SuccessCount = countif(status_s == "Succeeded"),
            FailCount = countif(status_s == "Failed"),
            AvgDuration = avg(duration_d)
  by bin(TimeGenerated, 1m)
| extend SuccessRate = round(toreal(SuccessCount) / RunCount * 100, 2)
| render timechart

APIM Capacity During Load

AzureMetrics
| where ResourceProvider == "MICROSOFT.APIMANAGEMENT"
| where MetricName == "Capacity"
| where TimeGenerated > ago(30m)
| summarize AvgCapacity = avg(Average),
            MaxCapacity = max(Maximum)
  by bin(TimeGenerated, 1m)
| render timechart

Error Breakdown

ApiManagementGatewayLogs
| where TimeGenerated > ago(15m)
| where ResponseCode >= 400
| summarize Count = count() by ResponseCode, ApiId
| order by Count desc

Interpreting Results

Key Metrics to Analyse

| Metric | Healthy | Warning | Critical |
| APIM Capacity | < 60% | 60–80% | > 80% |
| p95 Latency | < 1s | 1–3s | > 3s |
| Error Rate | < 0.5% | 0.5–2% | > 2% |
| Logic App Throttling | None | Occasional 429s | Sustained 429s |
| Backend Timeouts | None | < 1% | > 1% |
| Logic App Queue Depth | < 100 | 100–500 | > 500 |
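Bands like these are straightforward to encode for automated reporting. A minimal sketch using the APIM capacity thresholds from the table (extend the same pattern to the other metrics):

```javascript
// Classify APIM capacity against the healthy/warning/critical bands above.
function classifyCapacity(pct) {
  if (pct > 80) return 'critical';
  if (pct >= 60) return 'warning';
  return 'healthy';
}

console.log(classifyCapacity(45)); // healthy
console.log(classifyCapacity(72)); // warning
console.log(classifyCapacity(88)); // critical
```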

Common Bottlenecks

| Bottleneck | Symptom | Resolution |
| APIM capacity saturation | Capacity > 80%, increased latency | Scale up APIM tier or add units |
| Logic App throttling | HTTP 429 responses | Increase trigger concurrency, scale App Service plan |
| Backend timeouts | HTTP 504 from APIM | Optimise backend, increase timeout policies |
| Connection pool exhaustion | Intermittent connection failures | Enable connection pooling in Logic App connectors |
| Policy overhead | High latency but low backend time | Simplify policies, cache token validation |
| Cold start (Consumption) | First requests slow after idle | Use Standard plan, implement warming |
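The "policy overhead" diagnosis above boils down to subtracting backend time from gateway total time per request. A sketch of that calculation (field names follow the TotalTime/BackendTime columns in ApiManagementGatewayLogs; the sample values are made up):

```javascript
// Estimate APIM policy/gateway overhead per request: total gateway time
// minus time spent waiting on the backend. Large overhead with a fast
// backend points at the policy pipeline, not the backend.
function policyOverheadMs(totalTimeMs, backendTimeMs) {
  return Math.max(0, totalTimeMs - backendTimeMs);
}

console.log(policyOverheadMs(950, 120)); // 830: high overhead, suspect policies
console.log(policyOverheadMs(950, 880)); // 70: backend is the bottleneck
```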

Best Practices

Test Environment

  1. Use a dedicated APIM product and subscription for load tests — do not share with functional testing
  2. Mirror production configuration — same APIM tier, same Logic App plan, same backend setup
  3. Isolate load test traffic with headers and dedicated subscriptions for clean analytics
  4. Never load test against production without explicit approval and a maintenance window

Test Execution

  1. Start with a baseline — establish normal performance before testing limits
  2. Ramp gradually — do not jump to peak load; ramp over 2–5 minutes minimum
  3. Monitor server-side metrics alongside client-side results for the full picture
  4. Run from multiple regions if your consumers are geographically distributed
  5. Include think time between requests to simulate realistic user behaviour
  6. Use realistic payloads — CSV data with varied sizes and content types

Analysis and Reporting

  1. Compare against acceptance criteria defined before testing, not after
  2. Look at percentiles (p95, p99) not averages — averages hide tail latency
  3. Correlate client and server metrics — high client latency with low server latency indicates network issues
  4. Save all results as pipeline artifacts for trend analysis across releases
  5. Document findings with specific recommendations and ticket numbers for follow-up

Continuous Testing

  1. Automate load tests in your CI/CD pipeline for every release to staging
  2. Set automated pass/fail gates based on your acceptance criteria
  3. Track performance trends across releases to catch regressions early
  4. Revalidate after infrastructure changes — tier changes, policy updates, new backends

Official Microsoft Resources