Log Analytics Workspace Setup and Management

Beginner · Application Insights & Monitoring · 2026-03-14

What Is a Log Analytics Workspace?

A Log Analytics workspace is the central data store for Azure Monitor logs. It collects, stores, and provides query access to log and performance data from Azure resources, on-premises servers, and third-party services. Every workspace has its own data repository, retention policy, and access control.

Official Documentation: Log Analytics workspace overview

Why Log Analytics Matters

Centralised logging: Single place to query logs from all Azure services
KQL queries: Powerful query language for analysis and troubleshooting
Alerts: Trigger alerts based on log query results
Dashboards: Pin query results to Azure dashboards
Workbooks: Build interactive reports and visualisations
Cross-resource queries: Query across multiple workspaces and Application Insights resources
Data export: Stream data to Storage, Event Hubs, or external systems
Sentinel integration: Foundation for Microsoft Sentinel (SIEM)
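
As a flavour of KQL, here is a minimal query sketch that surfaces the slowest failing requests over the last day, assuming a workspace-based Application Insights resource feeding the AppRequests table:

```kql
// Slowest failing requests in the last 24 hours
AppRequests
| where TimeGenerated > ago(1d)
| where Success == false
| summarize FailedCount = count(), AvgDurationMs = avg(DurationMs) by Name
| order by AvgDurationMs desc
| take 10
```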

Workspace Architecture

Single vs Multiple Workspaces

Single Workspace

One workspace for everything.
Pros: Simple cross-resource queries, easier management
Cons: Harder to separate access, single retention policy

Per-Environment

Separate workspace per environment.
Pros: Environment isolation, different retention and cost settings
Cons: Cross-environment queries require the workspace() function

Per-Team/Function

Separate workspace per team or function.
Pros: Granular access control
Cons: Complex to manage, cross-team queries harder
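
Cross-environment queries use the workspace() function noted above. A hedged sketch, assuming the prod and dev workspace names used later in this guide and workspace-based Application Insights data:

```kql
// Compare request failure rates across prod and dev workspaces
union
    (workspace('log-integration-prod').AppRequests | extend Environment = 'prod'),
    (workspace('log-integration-dev').AppRequests | extend Environment = 'dev')
| where TimeGenerated > ago(1h)
| summarize FailureRatePct = countif(Success == false) * 100.0 / count() by Environment
```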

Recommended Design

┌─────────────────────────────────────────────────┐
│  log-integration-prod (Production)              │
│  ├── Application Insights (appi-integration)    │
│  ├── Logic Apps (diagnostic settings)           │
│  ├── API Management (diagnostic settings)       │
│  ├── Azure Firewall (diagnostic settings)       │
│  ├── Application Gateway (diagnostic settings)  │
│  └── Azure Functions (diagnostic settings)      │
├─────────────────────────────────────────────────┤
│  log-integration-dev (Non-production)           │
│  ├── Dev + Test Application Insights            │
│  ├── Dev + Test Logic Apps                      │
│  └── Dev + Test APIM                            │
├─────────────────────────────────────────────────┤
│  log-security-prod (Security / Sentinel)        │
│  ├── Azure AD sign-in logs                      │
│  ├── Azure Firewall threat intel logs           │
│  └── Security Center alerts                     │
└─────────────────────────────────────────────────┘

Best practice: Use one workspace per environment (prod/non-prod) with a separate workspace for security/Sentinel.

Creating a Log Analytics Workspace

Bicep

param location string = resourceGroup().location
param environment string

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  name: 'log-integration-${environment}'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018'
    }
    retentionInDays: environment == 'prod' ? 90 : 30
    features: {
      enableLogAccessUsingOnlyResourcePermissions: true
    }
    workspaceCapping: {
      dailyQuotaGb: environment == 'prod' ? 10 : 1
    }
    publicNetworkAccessForIngestion: 'Enabled'
    publicNetworkAccessForQuery: 'Enabled'
  }
  tags: {
    environment: environment
    managedBy: 'bicep'
  }
}

output workspaceId string = logAnalyticsWorkspace.id
output workspaceName string = logAnalyticsWorkspace.name
output customerId string = logAnalyticsWorkspace.properties.customerId

Azure CLI

# Create workspace
az monitor log-analytics workspace create \
  --resource-group rg-monitoring-prod \
  --workspace-name log-integration-prod \
  --location uksouth \
  --retention-time 90 \
  --sku PerGB2018 \
  --quota 10

# Get workspace details
az monitor log-analytics workspace show \
  --resource-group rg-monitoring-prod \
  --workspace-name log-integration-prod \
  --query '{Id:id, CustomerId:customerId, Retention:retentionInDays, DailyQuota:workspaceCapping.dailyQuotaGb}'

Data Collection

Diagnostic Settings

Send logs from Azure resources to your workspace using diagnostic settings:

// Generic pattern for any Azure resource
resource diagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: '${resourceName}-diagnostics'
  scope: targetResource
  properties: {
    workspaceId: logAnalyticsWorkspace.id
    logs: [
      {
        categoryGroup: 'allLogs'
        enabled: true
        retentionPolicy: { enabled: true, days: 90 }
      }
    ]
    metrics: [
      {
        category: 'AllMetrics'
        enabled: true
        retentionPolicy: { enabled: true, days: 30 }
      }
    ]
  }
}

Common Resource Diagnostic Categories

Logic Apps: WorkflowRuntime
API Management: GatewayLogs, WebSocketConnectionLogs
Application Gateway: AccessLog, PerformanceLog, FirewallLog
Azure Firewall: AZFWNetworkRule, AZFWApplicationRule, AZFWThreatIntel
Key Vault: AuditEvent
Service Bus: OperationalLogs
Storage Account: StorageRead, StorageWrite, StorageDelete
Azure SQL: SQLSecurityAuditEvents, QueryStoreRuntimeStatistics
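
Resources that log to the shared AzureDiagnostics table can then be queried by provider and category. For example, a sketch that counts failed Logic App runs; the status_s and resource_workflowName_s columns are how WorkflowRuntime data typically surfaces, but verify the column names against your own table:

```kql
// Failed Logic App runs reported via diagnostic settings
AzureDiagnostics
| where TimeGenerated > ago(1d)
| where ResourceProvider == 'MICROSOFT.LOGIC' and Category == 'WorkflowRuntime'
| where status_s == 'Failed'
| summarize FailedRuns = count() by resource_workflowName_s
| order by FailedRuns desc
```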

Enable Diagnostics via CLI

# Enable diagnostics for a Logic App
az monitor diagnostic-settings create \
  --name la-diagnostics \
  --resource /subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.Logic/workflows/la-order-processing \
  --workspace log-integration-prod \
  --resource-group rg-monitoring-prod \
  --logs '[{"category":"WorkflowRuntime","enabled":true,"retentionPolicy":{"enabled":true,"days":90}}]' \
  --metrics '[{"category":"AllMetrics","enabled":true,"retentionPolicy":{"enabled":true,"days":30}}]'

# List diagnostic settings for a resource
az monitor diagnostic-settings list \
  --resource /subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.Logic/workflows/la-order-processing
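
Diagnostic data can take several minutes to start flowing after a setting is created. A quick sanity check that logs are arriving in the workspace:

```kql
// Confirm diagnostic logs are arriving, by source and category
AzureDiagnostics
| where TimeGenerated > ago(30m)
| summarize Events = count(), Latest = max(TimeGenerated) by ResourceProvider, Category
| order by Latest desc
```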

Data Collection Rules (DCR)

For more advanced scenarios, use Data Collection Rules (DCRs) to transform and filter data before ingestion. The example below assumes an existing data collection endpoint resource (dataCollectionEndpoint) is declared alongside the workspace:

resource dataCollectionRule 'Microsoft.Insights/dataCollectionRules@2022-06-01' = {
  name: 'dcr-custom-logs-${environment}'
  location: location
  properties: {
    dataFlows: [
      {
        streams: [ 'Custom-MyAppLogs_CL' ]
        destinations: [ 'logAnalytics' ]
        transformKql: 'source | where severity != "Debug"'   // Filter out debug logs
        outputStream: 'Custom-MyAppLogs_CL'
      }
    ]
    destinations: {
      logAnalytics: [
        {
          name: 'logAnalytics'
          workspaceResourceId: logAnalyticsWorkspace.id
        }
      ]
    }
    dataCollectionEndpointId: dataCollectionEndpoint.id
  }
}
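
A transformKql expression can be prototyped in the Log Analytics query editor before deploying the DCR by running it against inline sample rows. The datatable below is an invented stand-in for the incoming stream:

```kql
// Invented sample rows standing in for the Custom-MyAppLogs_CL stream
let source = datatable(TimeGenerated: datetime, severity: string, message: string)
[
    datetime(2026-03-14T10:00:00Z), 'Debug', 'verbose detail',
    datetime(2026-03-14T10:00:01Z), 'Error', 'payment service timeout',
    datetime(2026-03-14T10:00:02Z), 'Info', 'order accepted'
];
// The same filter used in transformKql above drops the Debug row
source
| where severity != 'Debug'
```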

Access Control

Access Modes

Workspace-context: Users can access all data in any workspace they have permissions on. This is the default mode.
Resource-context: Users can access logs only for the resources they have Azure RBAC access to. Enabled via enableLogAccessUsingOnlyResourcePermissions.

Recommendation: Use resource-context mode so that users can only query logs for resources they already have access to in Azure.

Built-in Roles

Log Analytics Reader: Read all monitoring data and run log queries
Log Analytics Contributor: Read monitoring data and configure monitoring settings
Monitoring Reader: Read monitoring data across Azure Monitor
Monitoring Contributor: Read and write monitoring settings

Assign Access

# Grant Log Analytics Reader to a group
az role assignment create \
  --assignee-object-id {group-id} \
  --role "Log Analytics Reader" \
  --scope /subscriptions/{sub}/resourceGroups/rg-monitoring-prod/providers/Microsoft.OperationalInsights/workspaces/log-integration-prod

# Grant resource-context access (users query logs for resources they have access to)
az role assignment create \
  --assignee-object-id {group-id} \
  --role "Monitoring Reader" \
  --scope /subscriptions/{sub}/resourceGroups/rg-integration-prod

Table-Level RBAC

Restrict access to specific tables within a workspace:

# Grant read access to only the AppRequests table
az role assignment create \
  --assignee-object-id {group-id} \
  --role "Log Analytics Reader" \
  --scope "/subscriptions/{sub}/resourceGroups/rg-monitoring-prod/providers/Microsoft.OperationalInsights/workspaces/log-integration-prod/tables/AppRequests"

Retention and Archiving

Retention Tiers

Interactive retention: Full query capability in hot storage. Included in the ingestion cost (the first 31 days are free).
Archive: Low-cost, long-term storage with limited query access via search jobs. Roughly 90% cheaper than interactive retention.
Total retention: The combined interactive + archive period, up to 12 years.

Configure Retention

// Workspace-level default retention
resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  // ...
  properties: {
    retentionInDays: 90   // Interactive retention (30–730 days)
  }
}

# Set table-level retention (override workspace default)
az monitor log-analytics workspace table update \
  --resource-group rg-monitoring-prod \
  --workspace-name log-integration-prod \
  --name AzureDiagnostics \
  --retention-time 90 \
  --total-retention-time 365   # archive for up to 1 year total

# Set archive retention for security logs
az monitor log-analytics workspace table update \
  --resource-group rg-monitoring-prod \
  --workspace-name log-integration-prod \
  --name SecurityEvent \
  --retention-time 90 \
  --total-retention-time 730   # archive for 2 years total

Search Jobs (Query Archived Data)

# Create a search job to query archived data (the results table name must end in _SRCH)
az monitor log-analytics workspace table search-job create \
  --resource-group rg-monitoring-prod \
  --workspace-name log-integration-prod \
  --name SecurityAudit_20260314_SRCH \
  --search-query "SecurityEvent | where EventID == 4625" \
  --start-search-time "2025-01-01T00:00:00Z" \
  --end-search-time "2025-12-31T23:59:59Z" \
  --limit 10000
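
Search job results are written to a new table in the workspace (search job table names must end in the _SRCH suffix) and can then be queried like any other table. A sketch assuming a results table named SecurityAudit_20260314_SRCH:

```kql
// Query the archived rows surfaced by the search job
SecurityAudit_20260314_SRCH
| summarize FailedLogons = count() by Computer
| order by FailedLogons desc
```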

Data Export

Continuous Export to Storage

resource dataExport 'Microsoft.OperationalInsights/workspaces/dataExports@2020-08-01' = {
  parent: logAnalyticsWorkspace
  name: 'export-to-storage'
  properties: {
    destination: {
      resourceId: storageAccount.id
    }
    tableNames: [
      'AzureDiagnostics'
      'AppRequests'
      'AppExceptions'
    ]
    enable: true
  }
}

Export to Event Hubs

resource dataExportEventHub 'Microsoft.OperationalInsights/workspaces/dataExports@2020-08-01' = {
  parent: logAnalyticsWorkspace
  name: 'export-to-eventhub'
  properties: {
    destination: {
      resourceId: eventHubNamespace.id
    }
    tableNames: [
      'AzureDiagnostics'
    ]
    enable: true
  }
}

Cost Management

Pricing Tiers

Pay-As-You-Go: ~£2.30 per GB ingested; best below 100 GB/day
Commitment Tier 100: 100 GB/day at a discounted rate; best for 100–200 GB/day
Commitment Tier 200: 200 GB/day at a further discount; best for 200–500 GB/day
Commitment Tier 500+: Higher tiers available; best above 500 GB/day
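
To sanity-check which tier fits, the last 30 days of billable ingestion can be projected into a rough monthly cost. The 2.30 figure mirrors the approximate £/GB Pay-As-You-Go rate above and should be swapped for your region's actual price:

```kql
// Rough monthly cost projection at an assumed PAYG rate of £2.30/GB
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize BillableGB = round(sum(Quantity) / 1024, 2)
| extend EstimatedMonthlyCostGBP = round(BillableGB * 2.30, 2)
```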

Monitor Ingestion Volume

// Daily ingestion by table
Usage
| where TimeGenerated > ago(30d)
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType, bin(TimeGenerated, 1d)
| order by IngestedGB desc

// Top tables by volume
Usage
| where TimeGenerated > ago(7d)
| summarize TotalGB = round(sum(Quantity) / 1024, 2) by DataType
| order by TotalGB desc
| take 10

Cost Reduction Strategies

Daily cap: Set a daily ingestion limit to prevent cost spikes
Table-level retention: Shorter retention for noisy, low-value tables
Sampling: Reduce Application Insights telemetry volume
Diagnostic filtering: Send only the log categories you need
Archive tier: Move old data to cheaper archive storage
Commitment tiers: Discounts at predictable daily volumes
Data Collection Rules: Transform and filter data before ingestion
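
A useful first step when applying these strategies is separating billable from free ingestion, since some data types are not charged; the Usage table exposes this via the IsBillable column:

```kql
// Billable vs free ingestion by table over the last week
Usage
| where TimeGenerated > ago(7d)
| summarize TotalGB = round(sum(Quantity) / 1024, 2) by DataType, IsBillable
| order by TotalGB desc
```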

Set Daily Cap

az monitor log-analytics workspace update \
  --resource-group rg-monitoring-prod \
  --workspace-name log-integration-prod \
  --quota 10

Alert on Approaching Daily Cap

resource dailyCapAlert 'Microsoft.Insights/scheduledQueryRules@2023-03-15-preview' = {
  name: 'alert-daily-cap-80pct'
  location: location
  properties: {
    displayName: 'Log Analytics daily cap at 80%'
    severity: 2
    enabled: true
    evaluationFrequency: 'PT1H'
    windowSize: 'PT1H'
    scopes: [ logAnalyticsWorkspace.id ]
    criteria: {
      allOf: [
        {
          query: '''
            Usage
            | where TimeGenerated > startofday(now())
            | summarize TodayGB = sum(Quantity) / 1024
            | where TodayGB > 8    // 80% of 10 GB daily cap
          '''
          timeAggregation: 'Count'
          operator: 'GreaterThan'
          threshold: 0
        }
      ]
    }
    actions: {
      actionGroups: [ actionGroup.id ]
    }
  }
}

Workspace Health Queries

Check Connected Resources

Heartbeat
| where TimeGenerated > ago(1h)
| summarize LastHeartbeat = max(TimeGenerated) by Computer, OSType, Category
| order by LastHeartbeat desc

Ingestion Latency

// Check how quickly data is being ingested
AppRequests
| where TimeGenerated > ago(1h)
| extend IngestionDelay = ingestion_time() - TimeGenerated
| summarize
    AvgDelay = avg(IngestionDelay),
    P95Delay = percentile(IngestionDelay, 95),
    MaxDelay = max(IngestionDelay)
  by bin(TimeGenerated, 5m)
| render timechart

Table Sizes

Usage
| where TimeGenerated > ago(1d)
| summarize SizeGB = round(sum(Quantity) / 1024, 2) by DataType
| order by SizeGB desc

Private Link (Network Isolation)

For secure, private connectivity, Azure Monitor uses an Azure Monitor Private Link Scope (AMPLS): the workspace is linked to the AMPLS, and the private endpoint targets the AMPLS rather than the workspace itself:

resource ampls 'microsoft.insights/privateLinkScopes@2021-07-01-preview' = {
  name: 'ampls-log-${environment}'
  location: 'global'
  properties: {
    accessModeSettings: { ingestionAccessMode: 'PrivateOnly', queryAccessMode: 'PrivateOnly' }
  }
}

resource amplsWorkspaceLink 'microsoft.insights/privateLinkScopes/scopedResources@2021-07-01-preview' = {
  parent: ampls
  name: 'scoped-log-${environment}'
  properties: {
    linkedResourceId: logAnalyticsWorkspace.id
  }
}

resource privateEndpoint 'Microsoft.Network/privateEndpoints@2023-09-01' = {
  name: 'pe-log-${environment}'
  location: location
  properties: {
    subnet: {
      id: privateEndpointSubnet.id
    }
    privateLinkServiceConnections: [
      {
        name: 'log-analytics-connection'
        properties: {
          privateLinkServiceId: ampls.id
          groupIds: [ 'azuremonitor' ]
        }
      }
    ]
  }
}

Naming Conventions

log-{workload}-{environment}       // Log Analytics Workspace
dcr-{purpose}-{environment}       // Data Collection Rule

Examples:

  • log-integration-prod / log-integration-dev
  • log-security-prod
  • dcr-custom-logs-prod

Best Practices

  1. Use resource-context access mode to leverage existing Azure RBAC
  2. One workspace per environment (prod/non-prod) plus a security workspace
  3. Set daily ingestion caps to prevent unexpected cost spikes
  4. Use table-level retention to optimise costs — shorter retention for high-volume, low-value data
  5. Enable diagnostic settings on all Azure resources from deployment (Bicep/Terraform)
  6. Archive old data instead of deleting it — archive tier is significantly cheaper
  7. Monitor ingestion volume with alerts at 80% of your daily cap
  8. Use Data Collection Rules to filter and transform before ingestion
  9. Use commitment tiers when daily ingestion is predictable and above 100 GB
  10. Share workspaces across related services for easy cross-resource queries
  11. Use Private Link for workspaces processing sensitive data
  12. Tag workspaces consistently for cost tracking and governance

Official Microsoft Resources