
You're building a Power Query solution that pulls sales data from multiple regional APIs, transforms customer demographics from CSV files with inconsistent formats, and merges everything with a legacy database that sometimes goes offline. When you run your query, you get a cryptic "DataSource.Error" that crashes your entire refresh. Sound familiar?
This is where most Power Query developers hit a wall. Basic M language skills get you far, but production data pipelines require bulletproof error handling. The difference between a fragile query that breaks every few weeks and a robust one that runs reliably for years often comes down to how well you anticipate and handle errors.
Error handling in M isn't just about wrapping operations in try statements. It's about understanding M's evaluation model, designing graceful degradation strategies, and building queries that can adapt to changing data conditions. By the end of this lesson, you'll have the tools to build enterprise-grade Power Query solutions that handle errors intelligently rather than just failing silently.
This lesson assumes you're comfortable with intermediate M language concepts including custom functions, list and record operations, and basic Power Query transformations. You should understand M's functional programming principles and have experience debugging queries in the Advanced Editor.
Before diving into try-otherwise patterns, you need to understand how errors work in M's lazy evaluation system. Unlike imperative languages where errors occur immediately when a line executes, M only evaluates expressions when their results are actually needed. This has profound implications for error handling strategy.
Consider this seemingly simple query:
let
Source = Web.Contents("https://api.nonexistent-domain.com/data"),
ParsedJson = Json.Document(Source),
FirstRecord = ParsedJson{0},
Result = FirstRecord[CustomerName]
in
Result
You might expect this to fail immediately at the Web.Contents step, but M doesn't work that way. The error won't surface until Power Query tries to materialize the final result. This means your error handling needs to account for where evaluation actually happens, not just where you think it might fail.
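You can see this deferral directly: wrapping only the final step in try still catches the upstream Web.Contents failure, because nothing upstream is evaluated until that result is demanded. A minimal sketch, reusing the placeholder domain from above:

```m
let
    Source = Web.Contents("https://api.nonexistent-domain.com/data"),
    ParsedJson = Json.Document(Source),
    // The try catches the Web.Contents failure even though it wraps only
    // the last step - evaluation of the whole chain happens here
    Guarded = try ParsedJson{0}[CustomerName] otherwise "unavailable"
in
    Guarded
```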
Let's build a more realistic example that demonstrates this behavior:
let
// This function appears to work fine
GetCustomerData = (baseUrl as text) as table =>
let
RawData = Web.Contents(baseUrl & "/customers"),
JsonData = Json.Document(RawData),
ConvertedTable = Table.FromRecords(JsonData),
TypedTable = Table.TransformColumnTypes(ConvertedTable, {
{"CustomerId", Int64.Type},
{"Revenue", Currency.Type},
{"LastOrderDate", DateTime.Type}
})
in
TypedTable,
// Multiple data sources
PrimarySource = GetCustomerData("https://api.primary-system.com"),
SecondarySource = GetCustomerData("https://backup.secondary-system.com"),
// This is where errors might actually surface
CombinedData = Table.Combine({PrimarySource, SecondarySource})
in
CombinedData
In this scenario, if the primary API is down, you won't know until Table.Combine tries to evaluate PrimarySource. The error propagates up the evaluation chain, but by then you've lost the context of which specific data source failed.
This is why effective M error handling requires thinking about evaluation boundaries and designing your error handling around where evaluation actually occurs, not just where operations are defined.
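One practical way to create such a boundary is to force each source to evaluate inside its own try, for example with Table.Buffer, so a failure is attributed to the right source. A sketch, assuming the GetCustomerData function from the example above is in scope:

```m
let
    // Table.Buffer forces evaluation, so errors surface inside this try
    // rather than later at Table.Combine
    SafeLoad = (url as text) as record => try Table.Buffer(GetCustomerData(url)),
    PrimaryResult = SafeLoad("https://api.primary-system.com"),
    SecondaryResult = SafeLoad("https://backup.secondary-system.com"),
    Available = List.Transform(
        List.Select({PrimaryResult, SecondaryResult}, each not _[HasError]),
        each _[Value]),
    CombinedData = if List.Count(Available) > 0
        then Table.Combine(Available)
        else error "Both sources failed"
in
    CombinedData
```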
The basic try expression in M returns a record with two fields: HasError (logical) and either Value or Error. Most developers learn the simple pattern:
try someOperation otherwise "default value"
But this barely scratches the surface. Let's build up the complete ecosystem of try-otherwise patterns you need for production systems.
First, understand what try actually returns:
let
SuccessfulOperation = try (1 + 1),
FailedOperation = try (1 / 0),
// Examine the structure
SuccessExample = SuccessfulOperation,
// Returns: [HasError = false, Value = 2]
ErrorExample = FailedOperation
// Returns: [HasError = true, Error = [Reason = "Expression.Error", Message = "...", Detail = ...]]
in
{SuccessExample, ErrorExample}
The Error record contains crucial debugging information:
- Reason: a categorized error type (such as "DataSource.Error" or "Expression.Error")
- Message: a human-readable description of what went wrong
- Detail: additional context, often including the original operation

Instead of generic fallbacks, you can handle different error types differently:
let
HandleApiCall = (url as text) as any =>
let
ApiResult = try Web.Contents(url),
Result = if ApiResult[HasError] then
// Check the specific error type
if ApiResult[Error][Reason] = "DataSource.Error" then
// Network/API issues - try backup source
try Web.Contents(Text.Replace(url, "api.", "backup-api."))
otherwise null
else if ApiResult[Error][Reason] = "WebContents.UnsupportedProtocol" then
// Protocol issues - might need authentication
error "Authentication required for: " & url
else
// Unknown error - propagate it
error ApiResult[Error]
else
ApiResult[Value]
in
Result
in
HandleApiCall
This pattern lets you implement sophisticated error recovery strategies. Network timeouts get retried with backup endpoints, authentication errors get re-thrown with better messages, and truly unexpected errors get properly propagated.
For complex queries processing multiple data sources, you need to collect and analyze errors rather than just handling them:
let
ProcessDataSources = (sourceUrls as list) as record =>
let
// Process each source and capture results/errors
ProcessedSources = List.Transform(sourceUrls, (url) =>
let
// Bind the name outside the record: record fields shadow outer
// variables, so [SourceName = SourceName] would be a cyclic reference
Name = Text.AfterDelimiter(url, "//"),
// Wrap the one expression whose value we need. A try around a record
// literal would not catch errors in its lazily evaluated fields, and
// calling Web.Contents once avoids duplicate requests
Result = try Json.Document(Web.Contents(url))
in
[
SourceName = Name,
Url = url,
Success = not Result[HasError],
Data = if Result[HasError] then null else Result[Value],
RecordCount = if Result[HasError] then 0 else List.Count(Result[Value]),
ErrorReason = if Result[HasError] then Result[Error][Reason] else null,
ErrorMessage = if Result[HasError] then Result[Error][Message] else null,
ProcessedAt = DateTime.LocalNow()
]
),
SuccessfulSources = List.Select(ProcessedSources, (source) => source[Success]),
FailedSources = List.Select(ProcessedSources, (source) => not source[Success]),
CombinedData = if List.Count(SuccessfulSources) > 0 then
// Each source's Data is a list of records, so flatten before building the table
Table.FromRecords(List.Combine(List.Transform(SuccessfulSources, (source) => source[Data])))
else
error "All data sources failed: " & Text.Combine(
List.Transform(FailedSources, (source) =>
source[SourceName] & " (" & source[ErrorReason] & ")"), ", ")
in
[
Data = CombinedData,
SuccessCount = List.Count(SuccessfulSources),
FailureCount = List.Count(FailedSources),
ErrorLog = Table.FromRecords(FailedSources),
ProcessingReport = [
TotalSources = List.Count(sourceUrls),
SuccessRate = List.Count(SuccessfulSources) / List.Count(sourceUrls),
CompletedAt = DateTime.LocalNow()
]
]
in
ProcessDataSources
This pattern gives you comprehensive error reporting while still producing usable results when possible. The error log becomes valuable for monitoring and improving your data pipeline over time.
Data source errors are the most common type you'll encounter in production Power Query solutions. These include network timeouts, authentication failures, API rate limiting, and service unavailability. Each requires different handling strategies.
APIs fail in predictable ways. Here's a robust pattern for handling common API error scenarios:
let
CallApiWithRetry = (baseUrl as text, endpoint as text, optional headers as record, optional maxRetries as number) as any =>
let
MaxAttempts = if maxRetries is null then 3 else maxRetries,
DefaultHeaders = [#"Content-Type" = "application/json"],
RequestHeaders = if headers is null then DefaultHeaders else DefaultHeaders & headers,
AttemptRequest = (attemptNumber as number) as any =>
let
Url = baseUrl & endpoint,
Request = try Web.Contents(Url, [Headers = RequestHeaders]),
Result = if Request[HasError] then
let
ErrorReason = Request[Error][Reason],
ErrorMessage = Request[Error][Message]
in
if ErrorReason = "DataSource.Error" and attemptNumber < MaxAttempts then
// Exponential backoff for retries
Function.InvokeAfter(() => AttemptRequest(attemptNumber + 1),
#duration(0, 0, 0, Number.Power(2, attemptNumber)))
else if Text.Contains(ErrorMessage, "429") or Text.Contains(ErrorMessage, "rate limit") then
// Rate limiting - longer delay
Function.InvokeAfter(() => AttemptRequest(attemptNumber + 1),
#duration(0, 0, 1, 0))
else if Text.Contains(ErrorMessage, "401") or Text.Contains(ErrorMessage, "403") then
error "Authentication failed for " & Url & ". Check API credentials."
else
error Request[Error]
else
Request[Value]
in
Result
in
AttemptRequest(1)
in
CallApiWithRetry
Performance Note: Function.InvokeAfter creates actual delays in your query execution. Use it judiciously and consider whether retry logic belongs in M or in your broader data architecture.
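As a quick illustration of the mechanics, Function.InvokeAfter takes a zero-argument function and a duration, and returns the function's result once the delay has elapsed:

```m
let
    // Produces 42, roughly two seconds after the value is first demanded
    Delayed = Function.InvokeAfter(() => 42, #duration(0, 0, 0, 2))
in
    Delayed
```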
Database connections fail for different reasons than APIs. Connection pooling, timeout settings, and transaction state all affect how you should handle errors:
let
ConnectToDatabase = (serverName as text, databaseName as text) as table =>
let
// Primary connection attempt
PrimaryConnection = try Sql.Database(serverName, databaseName, [
CommandTimeout = #duration(0, 0, 2, 0),
ConnectionTimeout = #duration(0, 0, 0, 30)
]),
Result = if PrimaryConnection[HasError] then
let
ErrorMessage = PrimaryConnection[Error][Message]
in
if Text.Contains(ErrorMessage, "timeout") then
// Retry with longer timeout
try Sql.Database(serverName, databaseName, [
CommandTimeout = #duration(0, 0, 10, 0),
ConnectionTimeout = #duration(0, 0, 2, 0)
])
otherwise error "Database " & databaseName & " unreachable after extended timeout"
else if Text.Contains(ErrorMessage, "login failed") or Text.Contains(ErrorMessage, "authentication") then
error "Database authentication failed. Check connection credentials for " & serverName
else if Text.Contains(ErrorMessage, "network") then
// Try read-only replica if available
try Sql.Database(serverName & "-replica", databaseName, [
CommandTimeout = #duration(0, 0, 2, 0),
ConnectionTimeout = #duration(0, 0, 0, 30)
])
otherwise error "Primary database and replica both unreachable: " & serverName
else
error PrimaryConnection[Error]
else
PrimaryConnection[Value]
in
Result
in
ConnectToDatabase
File-based data sources have their own error patterns, especially when dealing with SharePoint, network drives, or cloud storage:
let
LoadFileWithFallback = (primaryPath as text, optional backupPath as text) as table =>
let
GetFileExtension = (filePath as text) as text =>
Text.AfterDelimiter(filePath, ".", {0, RelativePosition.FromEnd}),
LoadByExtension = (filePath as text) as table =>
let
Extension = Text.Lower(GetFileExtension(filePath)),
LoadResult = try (
if Extension = "xlsx" then Excel.Workbook(File.Contents(filePath))
else if Extension = "csv" then Csv.Document(File.Contents(filePath))
else if Extension = "json" then Table.FromRecords(Json.Document(File.Contents(filePath)))
else error "Unsupported file type: " & Extension
)
in
if LoadResult[HasError] then error LoadResult[Error] else LoadResult[Value],
PrimaryAttempt = try LoadByExtension(primaryPath),
Result = if PrimaryAttempt[HasError] then
let
ErrorMessage = PrimaryAttempt[Error][Message],
ErrorReason = PrimaryAttempt[Error][Reason]
in
if ErrorReason = "DataSource.Error" and backupPath <> null then
// File access issues - try backup
try LoadByExtension(backupPath)
otherwise error "Both primary and backup files inaccessible: " & primaryPath
else if Text.Contains(ErrorMessage, "corrupted") or Text.Contains(ErrorMessage, "invalid format") then
error "File format error in " & primaryPath & ". File may be corrupted or incorrectly formatted."
else if Text.Contains(ErrorMessage, "permission") or Text.Contains(ErrorMessage, "access denied") then
error "Access denied to " & primaryPath & ". Check file permissions and authentication."
else
error PrimaryAttempt[Error]
else
PrimaryAttempt[Value]
in
Result
in
LoadFileWithFallback
Data transformation errors often occur deep in complex transformation chains. Unlike data source errors, these typically indicate data quality issues, schema changes, or assumptions that no longer hold. Your error handling strategy needs to balance data quality with pipeline reliability.
Type conversions fail when data doesn't match expected formats. Instead of failing the entire dataset, you often want to capture conversion errors and continue processing:
let
SafeTypeConversion = (inputTable as table, columnTransformations as list) as record =>
let
// Apply transformations with error tracking
TransformWithTracking = List.Accumulate(columnTransformations,
[Table = inputTable, ErrorLog = {}],
(state, transformation) =>
let
ColumnName = transformation[ColumnName],
TargetType = transformation[TargetType],
DefaultValue = transformation[DefaultValue]?,
// Create conversion function that tracks errors
ConversionFunction = (value) =>
let
ConversionResult = try (
if TargetType = Int64.Type then Int64.From(value)
else if TargetType = Number.Type then Number.From(value)
else if TargetType = DateTime.Type then DateTime.From(value)
else if TargetType = Date.Type then Date.From(value)
else if TargetType = Logical.Type then Logical.From(value)
else Text.From(value)
)
in
if ConversionResult[HasError] then
[Value = DefaultValue, HasError = true, OriginalValue = value,
ErrorReason = ConversionResult[Error][Reason]]
else
[Value = ConversionResult[Value], HasError = false, OriginalValue = value],
// Apply conversion to column
// Apply the conversion (TransformColumns takes a list of {column, function} pairs)
ConvertedTable = Table.TransformColumns(state[Table],
{{ColumnName, ConversionFunction}}),
// Extract error information
ErrorRows = Table.SelectRows(ConvertedTable,
each Record.Field(_, ColumnName)[HasError] = true),
NewErrors = Table.TransformColumns(ErrorRows, {{
ColumnName, (conversionResult) =>
let
// Hoisted so the record fields below can refer to them
// without cyclic references (fields shadow outer variables)
Column = ColumnName,
// M has no built-in type-to-text conversion, so map to a label
TypeLabel =
if TargetType = Int64.Type then "Int64.Type"
else if TargetType = Number.Type then "Number.Type"
else if TargetType = DateTime.Type then "DateTime.Type"
else if TargetType = Date.Type then "Date.Type"
else if TargetType = Logical.Type then "Logical.Type"
else "Text.Type"
in
[
RowIndex = null, // Would need to be calculated
ColumnName = Column,
OriginalValue = conversionResult[OriginalValue],
TargetType = TypeLabel,
ErrorReason = conversionResult[ErrorReason],
ProcessedAt = DateTime.LocalNow()
]
}}),
// Clean the table to contain only actual values
CleanTable = Table.TransformColumns(ConvertedTable, {{
ColumnName, (conversionResult) => conversionResult[Value]
}})
in
[
Table = CleanTable,
ErrorLog = state[ErrorLog] & Table.ToRecords(NewErrors)
]
),
FinalResult = TransformWithTracking[Table],
AllErrors = TransformWithTracking[ErrorLog],
QualityMetrics = [
TotalRows = Table.RowCount(inputTable),
ErrorCount = List.Count(AllErrors),
ErrorRate = List.Count(AllErrors) / Table.RowCount(inputTable),
ColumnsProcessed = List.Count(columnTransformations),
ProcessedAt = DateTime.LocalNow()
]
in
[
Data = FinalResult,
ErrorLog = Table.FromRecords(AllErrors),
QualityMetrics = QualityMetrics
]
in
SafeTypeConversion
This pattern gives you granular control over type conversion failures while maintaining data pipeline reliability. You can set quality thresholds and decide whether to continue processing or fail based on error rates.
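Such a threshold gate might look like this, assuming the SafeTypeConversion function above is in scope; SourceTable and Transformations are hypothetical values:

```m
let
    Converted = SafeTypeConversion(SourceTable, Transformations),
    // Fail the refresh when more than 5% of conversions errored;
    // otherwise keep the cleaned table and review the error log separately
    Gated = if Converted[QualityMetrics][ErrorRate] > 0.05 then
        error "Conversion error rate exceeded 5% - aborting refresh"
    else
        Converted[Data]
in
    Gated
```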
Real-world data sources change their schemas. Your transformations need to adapt gracefully:
let
AdaptiveColumnSelection = (inputTable as table, expectedColumns as list) as record =>
let
ActualColumns = Table.ColumnNames(inputTable),
ColumnAnalysis = List.Transform(expectedColumns, (expectedCol) =>
let
// Use names distinct from the record fields below: fields shadow
// outer variables, so [Required = Required] would be a cyclic reference
ColName = if expectedCol is text then expectedCol else expectedCol[Name],
IsRequired = if expectedCol is text then true else (expectedCol[Required]? ?? true),
Default = if expectedCol is text then null else (expectedCol[DefaultValue]?),
IsPresent = List.Contains(ActualColumns, ColName),
Result = [
Name = ColName,
Required = IsRequired,
Present = IsPresent,
DefaultValue = Default,
Status = if IsPresent then "Found"
else if IsRequired then "Missing_Required"
else "Missing_Optional"
]
in
Result
),
MissingRequired = List.Select(ColumnAnalysis, (col) => col[Status] = "Missing_Required"),
MissingOptional = List.Select(ColumnAnalysis, (col) => col[Status] = "Missing_Optional"),
PresentColumns = List.Select(ColumnAnalysis, (col) => col[Status] = "Found"),
AdaptedTable = if List.Count(MissingRequired) > 0 then
error "Required columns missing: " & Text.Combine(
List.Transform(MissingRequired, (col) => col[Name]), ", ")
else
let
// Add missing optional columns with defaults
TableWithDefaults = List.Accumulate(MissingOptional, inputTable,
(tbl, col) => Table.AddColumn(tbl, col[Name],
each col[DefaultValue], type nullable any)),
// Select and reorder columns to match expected schema
ExpectedColumnNames = List.Transform(expectedColumns,
(col) => if col is text then col else col[Name]),
ReorderedTable = Table.SelectColumns(TableWithDefaults, ExpectedColumnNames)
in
ReorderedTable,
// Field names differ from the local variables above to avoid
// cyclic references inside the record literal
SchemaReport = [
ExpectedColumnCount = List.Count(expectedColumns),
PresentColumnCount = List.Count(PresentColumns),
MissingRequiredCount = List.Count(MissingRequired),
MissingOptionalCount = List.Count(MissingOptional),
AdditionalColumnCount = List.Count(ActualColumns) - List.Count(PresentColumns),
SchemaCompatibility = if List.Count(MissingRequired) = 0 then "Compatible" else "Incompatible"
]
in
[
Data = AdaptedTable,
SchemaAnalysis = Table.FromRecords(ColumnAnalysis),
SchemaReport = SchemaReport
]
in
AdaptiveColumnSelection
Complex transformations involving joins and aggregations can fail in subtle ways. You need patterns that detect and handle these failures:
let
SafeTableJoin = (leftTable as table, rightTable as table, joinColumns as list, joinKind as text) as record =>
let
// Validate join columns exist
LeftColumns = Table.ColumnNames(leftTable),
RightColumns = Table.ColumnNames(rightTable),
JoinValidation = List.Transform(joinColumns, (colName) =>
[
ColumnName = colName,
InLeftTable = List.Contains(LeftColumns, colName),
InRightTable = List.Contains(RightColumns, colName),
Valid = List.Contains(LeftColumns, colName) and List.Contains(RightColumns, colName)
]
),
InvalidColumns = List.Select(JoinValidation, (col) => not col[Valid]),
JoinResult = if List.Count(InvalidColumns) > 0 then
error "Join columns not found: " & Text.Combine(
List.Transform(InvalidColumns, (col) => col[ColumnName]), ", ")
else
let
// Perform the join with error handling. Note: Table.Join raises a
// duplicate-column error if the tables share non-key column names,
// which this try also catches
JoinAttempt = try Table.Join(leftTable, joinColumns, rightTable, joinColumns,
if joinKind = "inner" then JoinKind.Inner
else if joinKind = "left" then JoinKind.LeftOuter
else if joinKind = "right" then JoinKind.RightOuter
else JoinKind.FullOuter),
Result = if JoinAttempt[HasError] then
error "Join failed: " & JoinAttempt[Error][Message]
else
JoinAttempt[Value]
in
Result,
// Calculate join statistics (guard the type test: JoinResult may
// hold a raised error, which a bare `is table` would propagate)
JoinStats = if (try (JoinResult is table) otherwise false) then
let
LeftRowCount = Table.RowCount(leftTable),
RightRowCount = Table.RowCount(rightTable),
ResultRowCount = Table.RowCount(JoinResult),
Stats = [
LeftTableRows = LeftRowCount,
RightTableRows = RightRowCount,
ResultRows = ResultRowCount,
JoinEfficiency = if LeftRowCount > 0 then ResultRowCount / LeftRowCount else 0,
JoinType = joinKind,
ProcessedAt = DateTime.LocalNow()
]
in
Stats
else
null
in
[
Data = JoinResult,
JoinValidation = Table.FromRecords(JoinValidation),
JoinStatistics = JoinStats
]
in
SafeTableJoin
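Calling the function then looks like this; Sales and Customers are hypothetical queries sharing a CustomerId column:

```m
let
    Outcome = SafeTableJoin(Sales, Customers, {"CustomerId"}, "left"),
    // The result record carries the joined table plus diagnostics
    Enriched = Outcome[Data],
    Efficiency = Outcome[JoinStatistics][JoinEfficiency]
in
    Enriched
```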
Production Power Query solutions need comprehensive error logging and monitoring. Since M doesn't have native logging capabilities, you need to build these systems using table operations and data exports.
Here's a complete framework for capturing, categorizing, and storing error information:
let
ErrorLogger = [
// Initialize error collection
// A fixed schema lets later appends align by column name
CreateErrorLog = () => #table(
{"ErrorId", "Timestamp", "Severity", "Category", "Source",
"Context", "Message", "Reason", "Detail"}, {}),
// Log a single error with context
LogError = (errorLog as table, errorInfo as record) as table =>
let
EnrichedError = errorInfo & [
ErrorId = Text.NewGuid(),
Timestamp = DateTime.LocalNow(),
Severity = errorInfo[Severity]? ?? "Error",
Category = errorInfo[Category]? ?? "Unknown",
Source = errorInfo[Source]? ?? "M Query",
Context = errorInfo[Context]? ?? "No context provided"
],
// Table.Combine aligns columns by name, so partial error records are safe
UpdatedLog = Table.Combine({errorLog, Table.FromRecords({EnrichedError})})
in
UpdatedLog,
// Log errors from try operations
LogTryResult = (errorLog as table, tryResult as record, context as text) as table =>
let
NewLog = if tryResult[HasError] then
ErrorLogger[LogError](errorLog, [
Message = tryResult[Error][Message],
Reason = tryResult[Error][Reason],
// Detail can be a record or list, which Text.From cannot render
Detail = try Text.From(tryResult[Error][Detail]) otherwise "",
Context = context,
Category = "TryOperation",
Severity = "Error"
])
else
errorLog
in
NewLog,
// Generate error summary report
GenerateErrorSummary = (errorLog as table) as record =>
let
// Local names are distinct from the record field names below to
// avoid cyclic references inside the record literal
TotalErrorCount = Table.RowCount(errorLog),
ErrorsByCategory = Table.Group(errorLog, {"Category"},
{{"Count", each Table.RowCount(_), Int64.Type}}),
ErrorsBySeverity = Table.Group(errorLog, {"Severity"},
{{"Count", each Table.RowCount(_), Int64.Type}}),
RecentErrorRows = Table.SelectRows(errorLog,
each [Timestamp] >= DateTime.LocalNow() - #duration(1, 0, 0, 0)),
Summary = [
TotalErrors = TotalErrorCount,
RecentErrors = Table.RowCount(RecentErrorRows),
ErrorCategories = Table.ToRecords(ErrorsByCategory),
SeverityBreakdown = Table.ToRecords(ErrorsBySeverity),
LastErrorTime = if TotalErrorCount > 0 then
List.Max(Table.Column(errorLog, "Timestamp")) else null,
GeneratedAt = DateTime.LocalNow()
]
in
Summary
]
in
ErrorLogger
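Because every logging function returns the updated log table, usage is a chain of calls that threads the log through each step. A sketch with a placeholder URL:

```m
let
    Log0 = ErrorLogger[CreateErrorLog](),
    Attempt = try Json.Document(Web.Contents("https://api.example.com/data")),
    // The try result is logged (if it failed) and the enriched log returned
    Log1 = ErrorLogger[LogTryResult](Log0, Attempt, "Load example feed"),
    Summary = ErrorLogger[GenerateErrorSummary](Log1)
in
    Summary
```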
Beyond just logging errors, you need to monitor the overall health of your queries:
let
QueryHealthMonitor = (queryName as text) as function =>
let
MonitorFunction = (queryOperation as function) as record =>
let
StartedAt = DateTime.LocalNow(),
// Assumes the ErrorLogger record from the previous section is saved
// as its own query so it can be referenced here
EmptyLog = ErrorLogger[CreateErrorLog](),
ExecutionResult = try queryOperation(),
ExecutionLog = ErrorLogger[LogTryResult](EmptyLog, ExecutionResult,
"Query execution: " & queryName),
EndedAt = DateTime.LocalNow(),
// Local names differ from the record fields below to avoid
// cyclic references inside the record literals
Metrics = [
QueryName = queryName,
StartTime = StartedAt,
EndTime = EndedAt,
Duration = EndedAt - StartedAt,
Success = not ExecutionResult[HasError],
ErrorCount = Table.RowCount(ExecutionLog),
MemoryUsed = null, // M doesn't expose memory metrics
RowsProcessed = if ExecutionResult[HasError] then 0
else try Table.RowCount(ExecutionResult[Value]) otherwise null
],
Result = [
Data = if ExecutionResult[HasError] then null else ExecutionResult[Value],
Success = not ExecutionResult[HasError],
ErrorLog = ExecutionLog,
HealthMetrics = Metrics,
ErrorSummary = ErrorLogger[GenerateErrorSummary](ExecutionLog)
]
in
Result
in
MonitorFunction
in
QueryHealthMonitor
Error handling adds overhead to your queries. Here's how to measure and optimize that impact:
let
PerformanceBenchmark = (operation as function, iterations as number) as record =>
let
RunIteration = (iterationNumber) =>
let
StartTime = DateTime.LocalNow(),
Result = try operation(),
EndTime = DateTime.LocalNow(),
// Named Elapsed (not Duration) so the record field below
// doesn't create a cyclic reference
Elapsed = EndTime - StartTime
in
[
Iteration = iterationNumber,
Success = not Result[HasError],
Duration = Elapsed,
// Convert the elapsed duration to milliseconds
DurationMs = Duration.TotalSeconds(Elapsed) * 1000
],
IterationResults = List.Transform(List.Numbers(1, iterations), RunIteration),
SuccessfulIterations = List.Select(IterationResults, (run) => run[Success]),
Stats = if List.Count(SuccessfulIterations) > 0 then
let
Durations = List.Transform(SuccessfulIterations, (run) => run[DurationMs]),
AvgDuration = List.Average(Durations),
MinDuration = List.Min(Durations),
MaxDuration = List.Max(Durations),
MedianDuration = List.Median(Durations)
in
[
Iterations = iterations,
SuccessfulRuns = List.Count(SuccessfulIterations),
SuccessRate = List.Count(SuccessfulIterations) / iterations,
AvgDurationMs = AvgDuration,
MinDurationMs = MinDuration,
MaxDurationMs = MaxDuration,
MedianDurationMs = MedianDuration,
StandardDeviation = List.StandardDeviation(Durations)
]
else
[
Iterations = iterations,
SuccessfulRuns = 0,
SuccessRate = 0,
Message = "All iterations failed"
]
in
[
Statistics = Stats,
DetailedResults = Table.FromRecords(IterationResults)
]
in
PerformanceBenchmark
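For example, you could compare a bare operation against its try-wrapped equivalent over a handful of runs; SomeSource is a hypothetical query, and the statistics assume at least one run succeeds:

```m
let
    Bare = PerformanceBenchmark(() => Table.RowCount(SomeSource), 5),
    Guarded = PerformanceBenchmark(
        () => try Table.RowCount(SomeSource) otherwise 0, 5),
    // Compare average timings to estimate the error handling overhead
    OverheadMs = Guarded[Statistics][AvgDurationMs] - Bare[Statistics][AvgDurationMs]
in
    OverheadMs
```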
Beyond basic try-otherwise patterns, production systems need sophisticated recovery strategies. These patterns help your queries adapt to changing conditions and degrade gracefully when things go wrong.
When external services are failing consistently, you don't want to keep hammering them. The circuit breaker pattern helps protect both your system and external services:
let
CreateCircuitBreaker = (serviceName as text, failureThreshold as number, timeoutMinutes as number) as record =>
let
// In a real implementation, this state would need to be persisted
// For this example, we'll use a simplified in-memory approach
CircuitState = [
ServiceName = serviceName,
FailureCount = 0,
LastFailureTime = null,
State = "Closed", // Closed, Open, HalfOpen
FailureThreshold = failureThreshold,
TimeoutDuration = #duration(0, 0, timeoutMinutes, 0)
],
CallService = (serviceFunction as function, currentState as record) as record =>
let
Now = DateTime.LocalNow(),
// Check if circuit should transition from Open to HalfOpen
ShouldTryHalfOpen = currentState[State] = "Open" and
currentState[LastFailureTime] <> null and
Now >= currentState[LastFailureTime] + currentState[TimeoutDuration],
NewState = if ShouldTryHalfOpen then
currentState & [State = "HalfOpen"] else currentState,
Result = if NewState[State] = "Open" then
[
Success = false,
Data = null,
Error = "Circuit breaker is OPEN for " & serviceName,
CircuitState = NewState
]
else
let
ServiceResult = try serviceFunction(),
UpdatedState = if ServiceResult[HasError] then
let
NewFailureCount = NewState[FailureCount] + 1,
CircuitShouldOpen = NewFailureCount >= NewState[FailureThreshold]
in
NewState & [
FailureCount = NewFailureCount,
LastFailureTime = Now,
State = if CircuitShouldOpen then "Open" else "Closed"
]
else
// Success - reset the circuit
NewState & [
FailureCount = 0,
LastFailureTime = null,
State = "Closed"
]
in
[
Success = not ServiceResult[HasError],
Data = if ServiceResult[HasError] then null else ServiceResult[Value],
Error = if ServiceResult[HasError] then ServiceResult[Error][Message] else null,
CircuitState = UpdatedState
]
in
Result
in
[
State = CircuitState,
Call = (serviceFunction as function) => CallService(serviceFunction, CircuitState)
]
in
CreateCircuitBreaker
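Usage looks like this; remember that M cannot persist state between refreshes, so in practice the circuit state returned from each call would be written to and read back from an external store:

```m
let
    // Open the circuit after 5 consecutive failures, retry after 10 minutes
    Breaker = CreateCircuitBreaker("SalesAPI", 5, 10),
    // Each call returns the data (if any) plus the updated circuit state
    Outcome = Breaker[Call](() => Web.Contents("https://api.primary-sales.com/data")),
    Data = Outcome[Data],
    NextState = Outcome[CircuitState]
in
    Outcome
```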
When primary data sources fail, you often want to fall back to cached data, summary data, or alternative sources:
let
GracefulDegradationLoader = (config as record) as record =>
let
PrimarySource = config[PrimarySource],
// Default to an empty record, not a list: Record.FieldNames is used below
FallbackSources = config[FallbackSources]? ?? [],
CacheSource = config[CacheSource]?,
MaxCacheAge = config[MaxCacheAge]? ?? #duration(1, 0, 0, 0),
LoadWithDegradation = () =>
let
// Try primary source first
PrimaryResult = try PrimarySource(),
Result = if not PrimaryResult[HasError] then
[
Data = PrimaryResult[Value],
Source = "Primary",
Quality = "Full",
Timestamp = DateTime.LocalNow()
]
else
// Try fallback sources
let
TryFallbacks = List.Transform(Record.FieldNames(FallbackSources), (sourceName) =>
let
SourceFunction = Record.Field(FallbackSources, sourceName),
SourceResult = try SourceFunction()
in
[
SourceName = sourceName,
Success = not SourceResult[HasError],
Data = if SourceResult[HasError] then null else SourceResult[Value],
Error = if SourceResult[HasError] then SourceResult[Error][Message] else null
]
),
SuccessfulFallback = List.First(
List.Select(TryFallbacks, (fb) => fb[Success]),
null
),
FallbackResult = if SuccessfulFallback <> null then
[
Data = SuccessfulFallback[Data],
Source = SuccessfulFallback[SourceName],
Quality = "Degraded",
Timestamp = DateTime.LocalNow()
]
else if CacheSource <> null then
// Try cache as last resort
let
CacheResult = try CacheSource(),
CacheTimestamp = try CacheResult[Value][Timestamp] otherwise DateTime.LocalNow() - #duration(10, 0, 0, 0),
// Named AgeOfCache so the CacheAge field below isn't a cyclic reference
AgeOfCache = DateTime.LocalNow() - CacheTimestamp,
CacheStillValid = AgeOfCache <= MaxCacheAge
in
if not CacheResult[HasError] and CacheStillValid then
[
Data = CacheResult[Value][Data],
Source = "Cache",
Quality = "Cached",
Timestamp = CacheTimestamp,
CacheAge = AgeOfCache
]
else
error "All data sources failed and cache is " &
(if CacheResult[HasError] then "unavailable" else "expired")
else
error "All data sources failed: " & PrimaryResult[Error][Message]
in
FallbackResult
in
Result
in
[
Load = LoadWithDegradation,
Config = config
]
in
GracefulDegradationLoader
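Wiring it up means supplying functions, not values, so each source is only evaluated if it's actually needed. A sketch with placeholder URLs:

```m
let
    Loader = GracefulDegradationLoader([
        PrimarySource = () => Json.Document(Web.Contents("https://api.primary-sales.com/data")),
        FallbackSources = [
            Backup = () => Json.Document(Web.Contents("https://backup.sales-system.com/api/data"))
        ],
        MaxCacheAge = #duration(0, 12, 0, 0)
    ]),
    Outcome = Loader[Load](),
    // Quality tells downstream steps whether they got Full, Degraded, or Cached data
    Data = Outcome[Data]
in
    Outcome
```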
Different types of errors require different retry strategies. Here's a comprehensive adaptive retry system:
let
AdaptiveRetryExecutor = (config as record) as function =>
let
DefaultConfig = [
MaxRetries = 3,
BaseDelayMs = 1000,
MaxDelayMs = 30000,
BackoffMultiplier = 2,
JitterPercent = 0.1,
RetryableErrors = {"DataSource.Error", "WebContents.Timeout", "Expression.Error"}
],
MergedConfig = DefaultConfig & config,
ExecuteWithRetry = (operation as function) as any =>
let
AttemptOperation = (attemptNumber as number, lastError as record) as any =>
let
ShouldRetry = attemptNumber <= MergedConfig[MaxRetries] and
(lastError = null or
List.Contains(MergedConfig[RetryableErrors], lastError[Reason])),
Result = if not ShouldRetry then
error lastError
else
let
// Add delay before retry (except first attempt)
DelayBeforeAttempt = if attemptNumber = 1 then null else
let
BaseDelay = MergedConfig[BaseDelayMs] *
Number.Power(MergedConfig[BackoffMultiplier], attemptNumber - 2),
CappedDelay = Number.Min(BaseDelay, MergedConfig[MaxDelayMs]),
// Add jitter to prevent thundering herd
Jitter = Number.RandomBetween(
-CappedDelay * MergedConfig[JitterPercent],
CappedDelay * MergedConfig[JitterPercent]
),
FinalDelay = Number.Max(0, CappedDelay + Jitter)
in
#duration(0, 0, 0, FinalDelay / 1000),
AttemptResult = try operation(),
FinalResult = if AttemptResult[HasError] and attemptNumber < MergedConfig[MaxRetries] then
AttemptOperation(attemptNumber + 1, AttemptResult[Error])
else if AttemptResult[HasError] then
error AttemptResult[Error] & [AttemptsMade = attemptNumber]
else
AttemptResult[Value]
in
FinalResult
in
Result
in
AttemptOperation(1, null)
in
ExecuteWithRetry
in
AdaptiveRetryExecutor
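Usage wraps any zero-argument operation; the URL here is a placeholder:

```m
let
    Retry = AdaptiveRetryExecutor([MaxRetries = 5, BaseDelayMs = 500]),
    // The operation is retried with exponential backoff on retryable errors
    Data = Retry(() => Json.Document(Web.Contents("https://api.primary-sales.com/data")))
in
    Data
```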
Let's build a complete data integration pipeline that demonstrates all the error handling patterns we've covered. This exercise simulates a real-world scenario where you're aggregating sales data from multiple sources with different reliability characteristics.
let
// Configuration for our data integration pipeline
PipelineConfig = [
Sources = [
PrimaryAPI = [
Url = "https://api.primary-sales.com/data",
Timeout = 30,
RetryCount = 3
],
BackupAPI = [
Url = "https://backup.sales-system.com/api/data",
Timeout = 45,
RetryCount = 2
],
LocalCache = [
FilePath = "C:\DataCache\sales_cache.json",
MaxAge = #duration(2, 0, 0, 0)
]
],
DataQuality = [
RequiredColumns = [
[Name = "TransactionId", Required = true, Type = Int64.Type],
[Name = "Amount", Required = true, Type = Currency.Type, DefaultValue = 0],
[Name = "Date", Required = true, Type = Date.Type],
[Name = "CustomerId", Required = false, Type = Text.Type, DefaultValue = "Unknown"]
],
MaxErrorRate = 0.05,
MinRowCount = 100
]
],
// Initialize our error handling framework (assumes the ErrorLogger and
// AdaptiveRetryExecutor queries from earlier in this lesson are in scope)
Logger = ErrorLogger,
ErrorLog = Logger[CreateErrorLog](),
// Create specialized data loaders
APILoader = (apiConfig as record, errorLog as table) as record =>
let
LoadOperation = () =>
let
WebResult = Web.Contents(apiConfig[Url], [
Timeout = #duration(0, 0, 0, apiConfig[Timeout])
]),
JsonData = Json.Document(WebResult),
TableData = Table.FromRecords(JsonData)
in
TableData,
// Use our adaptive retry system
RetryExecutor = AdaptiveRetryExecutor([
MaxRetries = apiConfig[RetryCount],
BaseDelayMs = 2000,
RetryableErrors = {"DataSource.Error", "WebContents.Timeout"}
]),
LoadResult = try RetryExecutor(LoadOperation),
UpdatedErrorLog = Logger[LogTryResult](errorLog, LoadResult,
"API Load: " & apiConfig[Url]),
Result = [
Success = not LoadResult[HasError],
Data = if LoadResult[HasError] then null else LoadResult[Value],
ErrorLog = UpdatedErrorLog,
LoadTime = DateTime.LocalNow()
]
in
Result,
    // Enhanced cache loader
    CacheLoader = (cacheConfig as record, errorLog as table) as record =>
        let
            // Note: wrapping a bare record literal in try would not catch file or
            // parse errors, because record fields are evaluated lazily. Forcing
            // both fields inside the try expression makes errors surface here.
            LoadResult = try
                let
                    JsonData = Json.Document(File.Contents(cacheConfig[FilePath])),
                    TableData = Table.FromRecords(JsonData[Data]),
                    CacheTimestamp = DateTime.From(JsonData[Timestamp])
                in
                    if TableData is table and CacheTimestamp is datetime then
                        [TableData = TableData, CacheTimestamp = CacheTimestamp]
                    else
                        error "Invalid cache structure",
            CacheAge = if LoadResult[HasError] then #duration(10, 0, 0, 0)
                else DateTime.LocalNow() - LoadResult[Value][CacheTimestamp],
            CacheValid = not LoadResult[HasError] and CacheAge <= cacheConfig[MaxAge],
            UpdatedErrorLog = if not CacheValid then
                Logger[LogError](errorLog, [
                    Message = if LoadResult[HasError] then LoadResult[Error][Message]
                        else "Cache expired",
                    Reason = if LoadResult[HasError] then LoadResult[Error][Reason]
                        else "CacheExpired",
                    Context = "Cache validation",
                    Category = "Cache",
                    Severity = "Warning"
                ])
            else errorLog,
            Result = [
                Success = CacheValid,
                Data = if CacheValid then LoadResult[Value][TableData] else null,
                ErrorLog = UpdatedErrorLog,
                CacheAge = CacheAge
            ]
        in
            Result,
    // Main data loading with graceful degradation
    LoadDataWithDegradation = () =>
        let
            // Try primary API
            PrimaryResult = APILoader(PipelineConfig[Sources][PrimaryAPI], ErrorLog),
            FinalResult = if PrimaryResult[Success] then
                [
                    Data = PrimaryResult[Data],
                    Source = "Primary API",
                    Quality = "Full",
                    ErrorLog = PrimaryResult[ErrorLog]
                ]
            else
                // Try backup API
                let
                    BackupResult = APILoader(PipelineConfig[Sources][BackupAPI],
                        PrimaryResult[ErrorLog])
                in
                    if BackupResult[Success] then
                        [
                            Data = BackupResult[Data],
                            Source = "Backup API",
                            Quality = "Degraded",
                            ErrorLog = BackupResult[ErrorLog]
                        ]
                    else
                        // Fall back to cache
                        let
                            CacheResult = CacheLoader(PipelineConfig[Sources][LocalCache],
                                BackupResult[ErrorLog])
                        in
                            if CacheResult[Success] then
                                [
                                    Data = CacheResult[Data],
                                    Source = "Cache",
                                    Quality = "Cached",
                                    ErrorLog = CacheResult[ErrorLog],
                                    CacheAge = CacheResult[CacheAge]
                                ]
                            else
                                error "All data sources failed"
        in
            FinalResult,
    // Load and process the data
    DataLoadResult = try LoadDataWithDegradation(),
    ProcessedData = if DataLoadResult[HasError] then
        error DataLoadResult[Error]
    else
        let
            RawData = DataLoadResult[Value][Data],
            // Apply schema validation and adaptation
            SchemaAdapter = AdaptiveColumnSelection,
            SchemaResult = SchemaAdapter(RawData, PipelineConfig[DataQuality][RequiredColumns]),
            // Apply type conversions with error tracking
            TypeConverter = SafeTypeConversion,
            ConversionSpecs = List.Transform(PipelineConfig[DataQuality][RequiredColumns],
                (col) => [
                    ColumnName = col[Name],
                    TargetType = col[Type],
                    DefaultValue = col[DefaultValue]?
                ]),
            ConversionResult = TypeConverter(SchemaResult[Data], ConversionSpecs),
            // Quality validation
            ErrorRate = ConversionResult[QualityMetrics][ErrorRate],
            RowCount = ConversionResult[QualityMetrics][TotalRows],
            QualityCheck = [
                PassedErrorRate = ErrorRate <= PipelineConfig[DataQuality][MaxErrorRate],
                PassedRowCount = RowCount >= PipelineConfig[DataQuality][MinRowCount],
                OverallQuality = ErrorRate <= PipelineConfig[DataQuality][MaxErrorRate] and
                    RowCount >= PipelineConfig[DataQuality][MinRowCount]
            ],
            FinalData = if QualityCheck[OverallQuality] then
                ConversionResult[Data]
            else
                error "Data quality check failed: " &
                    (if not QualityCheck[PassedErrorRate] then
                        "Error rate " & Text.From(ErrorRate) & " exceeds threshold. " else "") &
                    (if not QualityCheck[PassedRowCount] then
                        "Row count " & Text.From(RowCount) & " below minimum. " else ""),
            ProcessingReport = [
                DataSource = DataLoadResult[Value][Source],
                DataQuality = DataLoadResult[Value][Quality],
                RowsProcessed = Table.RowCount(FinalData),
                ConversionErrors = ConversionResult[QualityMetrics][ErrorCount],
                ErrorRate = ErrorRate,
                QualityPassed = QualityCheck[OverallQuality],
                ProcessedAt = DateTime.LocalNow()
            ]
        in
            [
                Data = FinalData,
                ProcessingReport = ProcessingReport,
                SchemaReport = SchemaResult[SchemaReport],
                ErrorLog = ConversionResult[ErrorLog]
            ],
    // Final result with comprehensive monitoring.
    // Note: testing ProcessedData with "is record" would re-raise any error it
    // holds, so we capture the outcome in a try first.
    ProcessedResult = try ProcessedData,
    PipelineResult = [
        Success = not ProcessedResult[HasError],
        Data = if ProcessedResult[HasError] then null else ProcessedResult[Value][Data],
        ProcessingReport = if ProcessedResult[HasError] then null
            else ProcessedResult[Value][ProcessingReport],
        ErrorSummary = Logger[GenerateErrorSummary](
            if not ProcessedResult[HasError] then ProcessedResult[Value][ErrorLog]
            else Table.FromRecords({[
                ErrorId = Text.NewGuid(),
                Message = if DataLoadResult[HasError] then DataLoadResult[Error][Message]
                    else ProcessedResult[Error][Message],
                Timestamp = DateTime.LocalNow(),
                Category = "Pipeline",
                Severity = "Critical"
            ]})
        )
    ]
in
    PipelineResult
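The pipeline references ErrorLogger, AdaptiveRetryExecutor, AdaptiveColumnSelection, and SafeTypeConversion from earlier in the lesson. If you want to experiment with the control flow in isolation, minimal stand-in stubs for the logger and retry executor, simplified sketches of my own rather than the full implementations above, might look like this:

// Hypothetical minimal stubs; the complete versions appear earlier in the lesson
ErrorLogger = [
    CreateErrorLog = () => #table(
        type table [ErrorId = text, Message = text, Timestamp = datetime,
            Category = text, Severity = text], {}),
    LogError = (log as table, entry as record) as table =>
        Table.InsertRows(log, Table.RowCount(log), {[
            ErrorId = Text.NewGuid(),
            Message = entry[Message],
            Timestamp = DateTime.LocalNow(),
            Category = entry[Category]? ?? "General",
            Severity = entry[Severity]? ?? "Error"
        ]}),
    LogTryResult = (log as table, result as record, context as text) as table =>
        if result[HasError] then
            LogError(log, [Message = context & ": " & result[Error][Message],
                Category = "Load", Severity = "Error"])
        else log
],
// Simplified retry: re-attempts up to MaxRetries times, ignoring delays
// and error classification
AdaptiveRetryExecutor = (config as record) =>
    (operation as function) =>
        let
            Attempts = List.Generate(
                () => [Attempt = 1, Result = try operation()],
                (state) => state[Attempt] <= config[MaxRetries] + 1,
                (state) => [Attempt = state[Attempt] + 1, Result = try operation()]),
            // Lazy list evaluation stops at the first successful attempt
            FirstSuccess = List.First(
                List.Select(Attempts, (s) => not s[Result][HasError]), null)
        in
            if FirstSuccess = null then error "All retry attempts failed"
            else FirstSuccess[Result][Value]

The schema and type-conversion helpers would need similar stubs; the point of these sketches is only to let you trace how errors flow through the degradation chain.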
This exercise demonstrates graceful degradation across a primary API, a backup API, and a local cache; adaptive retry logic; centralized error logging; and automated data quality gates.
Practice modifying this pipeline by adding another data source to the fallback chain, tightening the quality thresholds, or extending the processing report with additional metrics.
Even experienced M developers make predictable mistakes with error handling. Here are the most common issues and how to avoid them:
Problem: Using otherwise null or generic fallbacks that hide important errors:
// DON'T DO THIS
let
    BadPattern = try Web.Contents("https://api.example.com") otherwise null,
    ProcessedData = if BadPattern = null then #table({"col1"}, {}) else Json.Document(BadPattern)
in
    ProcessedData
Why it's bad: You lose all diagnostic information and can't distinguish between network issues, authentication problems, or data format changes.
Better approach:
let
    ApiCall = try Web.Contents("https://api.example.com"),
    Result = if ApiCall[HasError] then
        let
            ErrorReason = ApiCall[Error][Reason],
            ErrorMessage = ApiCall[Error][Message]
        in
            if ErrorReason = "DataSource.Error" then
                // Log the specific network issue and try alternative
                error "API unavailable: " & ErrorMessage & ". Check network connectivity."
            else if Text.Contains(ErrorMessage, "401") then
                error "Authentication failed. Verify API credentials."
            else
                error ApiCall[Error] // Propagate unexpected errors
    else
        Json.Document(ApiCall[Value])
in
    Result
Problem: Placing try expressions at the wrong level in the evaluation chain:
// PROBLEMATIC
let
    Source = Web.Contents("https://api.example.com"),
    ParsedJson = Json.Document(Source),
    ProcessedData = Table.FromRecords(ParsedJson),
    // Error handling too late in the chain
    Result = try Table.TransformColumnTypes(ProcessedData, {{"Amount", Currency.Type}})
        otherwise ProcessedData
in
    Result
Why it's problematic: If the API call fails, the error won't surface until the type transformation, and you'll lose context about what actually failed.
Better approach:
let
    // Handle errors at appropriate boundaries
    ApiCall = try Web.Contents("https://api.example.com"),
    Result = if ApiCall[HasError] then
        error "API call failed: " & ApiCall[Error][Message]
    else
        let
            ParseResult = try Json.Document(ApiCall[Value]),
            ProcessedData = if ParseResult[HasError] then
                error "JSON parsing failed: " & ParseResult[Error][Message]
            else
                let
                    TableData = Table.FromRecords(ParseResult[Value]),
                    TypeConversion = try Table.TransformColumnTypes(TableData,
                        {{"Amount", Currency.Type}})
                in
                    if TypeConversion[HasError] then
                        // Keep original data but log the conversion issue
                        TableData
                    else
                        TypeConversion[Value]
        in
            ProcessedData
in
    Result
Problem: Adding error handling without considering evaluation overhead:
// PERFORMANCE PROBLEM
let
    SlowPattern = Table.TransformColumns(largeTable,
        List.Transform(Table.ColumnNames(largeTable), (colName) =>
            {colName, (cellValue) => try Text.From(cellValue) otherwise "ERROR"}
        ))
in
    SlowPattern
This pattern creates a try expression for every cell in every column, which can dramatically slow down large datasets.
Optimized approach:
let
    // Handle errors at column level, not cell level
    OptimizedPattern = List.Accumulate(Table.ColumnNames(largeTable), largeTable,
        (currentTable, columnName) =>
            let
                // Table.Buffer forces evaluation here so conversion errors
                // surface inside the try rather than at final materialization
                ConversionResult = try Table.Buffer(Table.TransformColumns(currentTable,
                    {{columnName, Text.From, type nullable text}}))
            in
                if ConversionResult[HasError] then
                    // Log the column-level error and use fallback
                    Table.ReplaceValue(currentTable, null, "ERROR",
                        Replacer.ReplaceValue, {columnName})
                else
                    ConversionResult[Value]
    )
in
    OptimizedPattern
Problem: Error messages that don't provide enough information for debugging:
// UNHELPFUL ERROR HANDLING
let
    ProcessFiles = (filePaths as list) =>
        List.Transform(filePaths, (path) =>
            try Excel.Workbook(File.Contents(path))
            otherwise error "File processing failed"
        )
in
    ProcessFiles
Better approach:
let
    ProcessFiles = (filePaths as list) =>
        List.Transform(filePaths, (path) =>
            let
                FileResult = try File.Contents(path),
                Result = if FileResult[HasError] then
                    error "Failed to access file '" & path & "': " & FileResult[Error][Message]
                else
                    let
                        ExcelResult = try Excel.Workbook(FileResult[Value])
                    in
                        if ExcelResult[HasError] then
                            error "Failed to parse Excel file '" & path & "': " &
                                ExcelResult[Error][Message] &
                                ". File may be corrupted or not a valid Excel format."
                        else
                            ExcelResult[Value]
            in
                Result
        )
in
    ProcessFiles
When error handling doesn't behave as expected, it's often due to misunderstanding M's lazy evaluation. Use this debugging pattern:
let
    // Accept a zero-argument function (a thunk) rather than a value: function
    // arguments are evaluated eagerly, so passing an already-failing step as a
    // plain value could raise its error before the try inside ever runs
    DebugEvaluation = (queryStep as function, stepName as text) =>
        let
            EvaluationResult = try queryStep(),
            LogEntry = [
                StepName = stepName,
                Timestamp = DateTime.LocalNow(),
                Success = not EvaluationResult[HasError],
                ErrorReason = if EvaluationResult[HasError] then EvaluationResult[Error][Reason] else null,
                ErrorMessage = if EvaluationResult[HasError] then EvaluationResult[Error][Message] else null
            ],
            // Return both the (possibly null) result and the debug info
            Result = [
                Value = if EvaluationResult[HasError] then null else EvaluationResult[Value],
                Debug = LogEntry
            ]
        in
            Result
in
    DebugEvaluation
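A brief usage sketch (the URL is a placeholder of mine): wrap each step in a thunk so that evaluation, and therefore any error, happens inside the wrapper where the try can observe it:

let
    Step1 = DebugEvaluation(() => Web.Contents("https://api.example.com"), "Fetch API"),
    // Only attempt parsing if the fetch step succeeded
    Step2 = if Step1[Debug][Success]
        then DebugEvaluation(() => Json.Document(Step1[Value]), "Parse JSON")
        else Step1
in
    Step2[Debug]

Inspecting Step2[Debug] tells you which step failed, when, and why, without the error terminating the query.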
Large error logs can consume significant memory. Implement log rotation:
let
    RotateErrorLog = (errorLog as table, maxRows as number) as table =>
        let
            CurrentRowCount = Table.RowCount(errorLog),
            RotatedLog = if CurrentRowCount > maxRows then
                let
                    SortedLog = Table.Sort(errorLog, {{"Timestamp", Order.Descending}}),
                    TrimmedLog = Table.FirstN(SortedLog, maxRows)
                in
                    TrimmedLog
            else
                errorLog
        in
            RotatedLog
in
    RotateErrorLog
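A quick usage sketch, assuming an error log table with a Timestamp column like the ones built earlier in this lesson (the sample rows are illustrative):

let
    SampleLog = Table.FromRecords({
        [Message = "Timeout", Timestamp = #datetime(2024, 1, 1, 9, 0, 0)],
        [Message = "Parse error", Timestamp = #datetime(2024, 1, 2, 9, 0, 0)],
        [Message = "Retry failed", Timestamp = #datetime(2024, 1, 3, 9, 0, 0)]
    }),
    // Keep only the two most recent entries
    Trimmed = RotateErrorLog(SampleLog, 2)
in
    Trimmed

Calling the rotation function once per refresh, before appending new entries, keeps the log bounded no matter how long the query runs in production.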
Mastering error handling in M transforms you from someone who writes queries that work with clean data to someone who builds robust data pipelines that handle the messy realities of production systems. The patterns we've covered—from basic try-otherwise constructs to sophisticated circuit breakers and graceful degradation—form the foundation of enterprise-grade Power Query solutions.
Key takeaways from this lesson:
Error handling is about evaluation boundaries: Understanding when and where M evaluates expressions is crucial for effective error handling strategy.
Context is everything: Generic error suppression helps no one. Always preserve error context and provide actionable diagnostic information.
Design for degradation: Build systems that can operate with partial data rather than failing completely when individual components fail.
Monitor and learn: Error logs aren't just for debugging—they're valuable data about the health and evolution of your data sources.
Performance matters: Error handling adds overhead. Design patterns that provide robustness without sacrificing query performance.
The advanced patterns we've covered—circuit breakers, adaptive retry logic, and comprehensive error logging—represent production-grade approaches that will serve you well as your Power Query solutions grow in complexity and criticality.
Next steps to deepen your expertise:
Implement monitoring dashboards: Create Power BI reports that visualize your error logs and quality metrics to identify patterns and trends in data source reliability.
Build reusable error handling libraries: Package the patterns from this lesson into custom M functions that you can reuse across multiple queries and share with your team.
Explore integration patterns: Learn how to coordinate error handling between Power Query and other parts of your data architecture, such as Azure Data Factory or custom applications.
Study advanced M language features: Dive deeper into M's type system, lazy evaluation model, and advanced function patterns that can make your error handling even more sophisticated.
Practice with real-world scenarios: Apply these patterns to your actual data sources and use cases. Each production system has unique failure modes that will teach you new aspects of robust error handling.
The difference between a data professional who can write M queries and one who can build reliable data systems lies largely in mastering these error handling patterns. As your data sources become more complex and your stakeholders more dependent on consistent data delivery, these skills become not just useful but essential.