Azure AI Anomaly Detector for Java: Setup, Usage & Best Practices
Azure AI Anomaly Detector provides machine learning-powered anomaly detection for time-series data through a Java SDK. This skill enables you to identify unusual patterns, spikes, dips, and trend changes in both single-variable (univariate) and multi-variable (multivariate) datasets without building custom ML models or managing infrastructure.
What This Skill Does
This SDK wraps Azure's Anomaly Detector service, offering two distinct detection modes: univariate for analyzing single time series, and multivariate for detecting anomalies across hundreds of correlated signals. Univariate detection works with simple datasets—website traffic, sensor readings, sales numbers—and identifies points that deviate from expected patterns. It supports batch analysis of entire time series, streaming detection on the latest data point, and change point detection to find trend shifts.
Multivariate detection handles complex scenarios with up to 300 correlated variables. It uses a Graph Attention Network to model inter-signal relationships, learning normal patterns from training data, then identifying anomalies that violate those correlations during inference. When an anomaly occurs, the model ranks the contributing variables by importance, helping you understand root causes.
The workflow differs between modes. Univariate is synchronous: send data, get results immediately. Multivariate requires three steps: train a model on historical data stored in Azure Blob Storage, run batch or streaming inference on new data, and retrieve results with ranked contributor analysis. Both modes return confidence scores, expected values, and anomaly flags, enabling automated alerting, dashboarding, or human review.
Getting Started
Add the SDK to your Maven pom.xml:
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-ai-anomalydetector</artifactId>
    <version>3.0.0-beta.6</version>
</dependency>
Configure environment variables:
export AZURE_ANOMALY_DETECTOR_ENDPOINT=https://<resource>.cognitiveservices.azure.com/
export AZURE_ANOMALY_DETECTOR_API_KEY=<your-api-key>
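It can help to fail fast on missing configuration before building any clients, so a typo in a variable name surfaces at startup rather than as a confusing null-endpoint error inside the builder. A minimal, dependency-free sketch (the `EnvConfig` class and `require` helper are illustrative, not part of the SDK):

```java
import java.util.Map;

public class EnvConfig {
    // Returns the value of a required configuration entry, or throws with a
    // clear message naming the missing variable.
    public static String require(Map<String, String> env, String name) {
        String value = env.get(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        // In real code you would pass System.getenv() here.
        Map<String, String> env = Map.of(
            "AZURE_ANOMALY_DETECTOR_ENDPOINT", "https://example.cognitiveservices.azure.com/");
        System.out.println(require(env, "AZURE_ANOMALY_DETECTOR_ENDPOINT"));
    }
}
```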
Create univariate and multivariate clients:
import com.azure.ai.anomalydetector.AnomalyDetectorClientBuilder;
import com.azure.ai.anomalydetector.MultivariateClient;
import com.azure.ai.anomalydetector.UnivariateClient;
import com.azure.core.credential.AzureKeyCredential;
String endpoint = System.getenv("AZURE_ANOMALY_DETECTOR_ENDPOINT");
String key = System.getenv("AZURE_ANOMALY_DETECTOR_API_KEY");
UnivariateClient univariateClient = new AnomalyDetectorClientBuilder()
    .credential(new AzureKeyCredential(key))
    .endpoint(endpoint)
    .buildUnivariateClient();

MultivariateClient multivariateClient = new AnomalyDetectorClientBuilder()
    .credential(new AzureKeyCredential(key))
    .endpoint(endpoint)
    .buildMultivariateClient();
For production, use DefaultAzureCredential instead of API keys to support managed identities.
Key Features
Univariate Batch Detection: Analyze entire time series at once. Identifies all anomalies in historical data, useful for forensic analysis or quality assurance.
Univariate Streaming Detection: Detect anomalies in real-time on the latest data point. Integrates with monitoring systems to alert on current conditions without reprocessing history.
Change Point Detection: Identify trend changes—when a stable pattern shifts to a new normal. Useful for detecting seasonality breaks, regime changes, or system upgrades.
Multivariate Model Training: Train models on up to 300 correlated signals. The service learns normal correlation patterns from historical data stored in Azure Blob Storage.
Multivariate Inference: Run batch or streaming inference against trained models. Get anomaly flags, severity scores, and ranked lists of contributing variables.
Contributor Analysis: When multivariate models detect anomalies, they rank variables by contribution score. This helps root cause analysis—which sensor caused the failure, which metric spiked abnormally.
Usage Examples
Univariate Batch Detection: Analyze an entire time series:
import com.azure.ai.anomalydetector.models.*;
import java.time.OffsetDateTime;
import java.util.List;
List<TimeSeriesPoint> series = List.of(
    new TimeSeriesPoint(OffsetDateTime.parse("2023-01-01T00:00:00Z"), 1.0),
    new TimeSeriesPoint(OffsetDateTime.parse("2023-01-02T00:00:00Z"), 2.5),
    new TimeSeriesPoint(OffsetDateTime.parse("2023-01-03T00:00:00Z"), 1.8)
    // ... at least 12 points are required
);

UnivariateDetectionOptions options = new UnivariateDetectionOptions(series)
    .setGranularity(TimeGranularity.DAILY)
    .setSensitivity(95);

UnivariateEntireDetectionResult result = univariateClient.detectUnivariateEntireSeries(options);

for (int i = 0; i < result.getIsAnomaly().size(); i++) {
    if (result.getIsAnomaly().get(i)) {
        System.out.printf("Anomaly at index %d: value=%.2f expected=%.2f%n",
            i, series.get(i).getValue(), result.getExpectedValues().get(i));
    }
}
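If you don't yet have real data, a local generator makes it easy to smoke-test batch detection with the 12-point minimum satisfied and a known anomaly injected. A stdlib-only sketch (the `Point` record is a hypothetical stand-in for the values you would wrap in `TimeSeriesPoint`):

```java
import java.time.OffsetDateTime;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.List;

public class SyntheticSeries {
    // Local stand-in for TimeSeriesPoint; field names are illustrative.
    public record Point(OffsetDateTime timestamp, double value) {}

    // Builds `days` evenly spaced daily points around a flat baseline,
    // with one large spike injected at `spikeIndex`.
    public static List<Point> dailySeries(int days, int spikeIndex) {
        OffsetDateTime start = OffsetDateTime.parse("2023-01-01T00:00:00Z");
        List<Point> series = new ArrayList<>();
        for (int i = 0; i < days; i++) {
            double value = (i == spikeIndex) ? 100.0 : 1.0 + 0.1 * Math.sin(i);
            series.add(new Point(start.plus(i, ChronoUnit.DAYS), value));
        }
        return series;
    }

    public static void main(String[] args) {
        List<Point> series = dailySeries(30, 15);
        System.out.println(series.size() + " points, spike value " + series.get(15).value());
    }
}
```

A well-tuned detector should flag only the injected index; if it flags many baseline points, lower the sensitivity.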
Univariate Streaming Detection: Check the latest data point:
UnivariateLastDetectionResult lastResult = univariateClient.detectUnivariateLastPoint(options);

if (lastResult.isAnomaly()) {
    System.out.printf("Latest point is an anomaly! Expected: %.2f, Got: %.2f%n",
        lastResult.getExpectedValue(),
        series.get(series.size() - 1).getValue());
}
Change Point Detection: Find trend shifts:
UnivariateChangePointDetectionOptions changeOptions =
    new UnivariateChangePointDetectionOptions(series, TimeGranularity.DAILY);

UnivariateChangePointDetectionResult changeResult =
    univariateClient.detectUnivariateChangePoint(changeOptions);

for (int i = 0; i < changeResult.getIsChangePoint().size(); i++) {
    if (changeResult.getIsChangePoint().get(i)) {
        System.out.printf("Change point at index %d (confidence: %.2f)%n",
            i, changeResult.getConfidenceScores().get(i));
    }
}
Multivariate Model Training: Train on correlated signals in Blob Storage:
ModelInfo modelInfo = new ModelInfo()
    .setDataSource("https://storage.blob.core.windows.net/container/data.zip?sasToken")
    .setStartTime(OffsetDateTime.parse("2023-01-01T00:00:00Z"))
    .setEndTime(OffsetDateTime.parse("2023-06-01T00:00:00Z"))
    .setSlidingWindow(200)
    .setDisplayName("Production Sensors");

AnomalyDetectionModel trainedModel = multivariateClient.trainMultivariateModel(modelInfo);
String modelId = trainedModel.getModelId();
System.out.println("Model training started: " + modelId);
Note that training is asynchronous: poll the model by ID until its status reaches READY before running inference.
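The training call reads its data from a zip archive in Blob Storage. Assuming the one-CSV-per-variable layout (each file named `<variable>.csv` with `timestamp,value` columns — verify this against the current service documentation before relying on it), the archive can be assembled with the JDK's zip support:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class TrainingDataZip {
    // Writes one CSV per variable into a zip archive. Each CSV body is
    // expected to start with a timestamp,value header row.
    public static void write(Path zipPath, Map<String, String> csvByVariable) throws IOException {
        try (ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(zipPath))) {
            for (Map.Entry<String, String> e : csvByVariable.entrySet()) {
                zip.putNextEntry(new ZipEntry(e.getKey() + ".csv"));
                zip.write(e.getValue().getBytes(StandardCharsets.UTF_8));
                zip.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path zip = Files.createTempFile("training", ".zip");
        write(zip, Map.of(
            "temperature", "timestamp,value\n2023-01-01T00:00:00Z,21.5\n",
            "pressure",    "timestamp,value\n2023-01-01T00:00:00Z,1.01\n"));
        System.out.println("Wrote " + Files.size(zip) + " bytes to " + zip);
    }
}
```

After writing the archive, upload it to your container and pass a SAS URL as the data source.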
Multivariate Inference with Contributors: Detect anomalies and identify root causes:
MultivariateBatchDetectionOptions detectionOptions = new MultivariateBatchDetectionOptions()
    .setDataSource("https://storage.blob.core.windows.net/container/inference.zip?sasToken")
    .setStartTime(OffsetDateTime.parse("2023-07-01T00:00:00Z"))
    .setEndTime(OffsetDateTime.parse("2023-07-31T00:00:00Z"))
    .setTopContributorCount(10);

MultivariateDetectionResult detectionResult =
    multivariateClient.detectMultivariateBatchAnomaly(modelId, detectionOptions);

MultivariateDetectionResult result =
    multivariateClient.getMultivariateBatchDetectionResult(detectionResult.getResultId());

for (AnomalyState state : result.getResults()) {
    if (state.getValue().isAnomaly()) {
        System.out.printf("Anomaly at %s (severity: %.2f)%n",
            state.getTimestamp(), state.getValue().getSeverity());
        // Print the top contributing variables
        for (AnomalyInterpretation interpretation : state.getValue().getInterpretation()) {
            System.out.printf("  - %s: %.2f%n",
                interpretation.getVariable(), interpretation.getContributionScore());
        }
    }
}
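Severity scores make it practical to alert only on significant events rather than every flagged timestamp. A stdlib-only sketch of threshold filtering (the `Detection` record is a hypothetical local stand-in for the SDK's result types):

```java
import java.util.List;
import java.util.stream.Collectors;

public class SeverityFilter {
    // Local stand-in for a per-timestamp detection result; names are illustrative.
    public record Detection(String timestamp, boolean anomaly, double severity) {}

    // Keeps only anomalies at or above the alerting threshold, so dashboards
    // and pagers are not flooded by low-severity blips.
    public static List<Detection> alertable(List<Detection> results, double minSeverity) {
        return results.stream()
            .filter(d -> d.anomaly() && d.severity() >= minSeverity)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Detection> results = List.of(
            new Detection("2023-07-01T00:00:00Z", true, 0.92),
            new Detection("2023-07-02T00:00:00Z", true, 0.15),
            new Detection("2023-07-03T00:00:00Z", false, 0.0));
        // Only the high-severity event survives the 0.5 threshold.
        System.out.println(alertable(results, 0.5));
    }
}
```

Tune the threshold against historical incidents: too low and you page on noise, too high and you miss real degradations.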
Best Practices
Provide Sufficient Data: Univariate requires at least 12 points; more is better. Multivariate training needs hundreds to thousands of points for accurate correlation modeling.
Match Granularity to Reality: Set TimeGranularity to match your actual data frequency (hourly, daily, etc.). Mismatched granularity degrades accuracy.
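Both preconditions above can be checked locally before spending an API call. A minimal validator (stdlib only; `isValid` is an illustrative helper, not an SDK method) that enforces the 12-point minimum and even spacing at the declared step:

```java
import java.time.Duration;
import java.time.OffsetDateTime;
import java.util.List;

public class SeriesValidator {
    // Returns true only if the series has at least 12 points and every
    // consecutive pair of timestamps is exactly `step` apart
    // (e.g. Duration.ofDays(1) for daily granularity).
    public static boolean isValid(List<OffsetDateTime> timestamps, Duration step) {
        if (timestamps.size() < 12) {
            return false;
        }
        for (int i = 1; i < timestamps.size(); i++) {
            if (!Duration.between(timestamps.get(i - 1), timestamps.get(i)).equals(step)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        OffsetDateTime start = OffsetDateTime.parse("2023-01-01T00:00:00Z");
        List<OffsetDateTime> daily = java.util.stream.IntStream.range(0, 14)
            .mapToObj(start::plusDays)
            .toList();
        System.out.println(isValid(daily, Duration.ofDays(1))); // true
    }
}
```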
Tune Sensitivity: Higher sensitivity values (0-99) detect more anomalies but increase false positives. Start at 95, adjust based on results.
Choose Sliding Window Carefully: For multivariate models, set slidingWindow between 200-1000 based on pattern complexity. Longer windows capture more context but require more training data.
Handle Errors Gracefully: Always catch HttpResponseException. Network issues, rate limits, and invalid data cause exceptions that should trigger retries or fallback logic.
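A generic retry wrapper illustrates the pattern. This sketch catches `RuntimeException` so it stays dependency-free; in real code you would catch `HttpResponseException`, inspect the status code, and retry only on transient failures such as 429 and 5xx:

```java
import java.util.function.Supplier;

public class Retry {
    // Retries a call up to maxAttempts times, doubling the delay between
    // attempts (exponential backoff). Rethrows the last failure if all
    // attempts are exhausted.
    public static <T> T withBackoff(Supplier<T> call, int maxAttempts, long initialDelayMillis)
            throws InterruptedException {
        RuntimeException last = null;
        long delay = initialDelayMillis;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // back off further on each retry
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        String result = withBackoff(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("transient failure");
            }
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```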
Store Model IDs: Multivariate models persist on Azure. Store model IDs in your database to avoid retraining for each inference run.
Clean Up Models: Delete unused models to avoid quota consumption and unnecessary charges.
When to Use / When NOT to Use
Use this skill when:
- You need anomaly detection without building custom ML models
- You're monitoring time-series data (metrics, sensors, logs)
- You have correlated signals requiring multivariate analysis
- You need real-time streaming detection
- You want to identify trend changes or seasonality breaks
- You need ranked contributor analysis for root cause investigation
- You're building monitoring, alerting, or quality assurance systems
Avoid this skill when:
- Your data isn't time-series (the service only models timestamped numeric sequences)
- You have fewer than 12 univariate data points
- You need anomaly detection on images, text, or non-numeric data
- You require custom model architectures or training procedures
- You're working in Python or .NET (use language-specific SDKs)
- You need sub-second inference latency (API calls add overhead)
Related Skills
- azure-ai-textanalytics-py: Text analytics and sentiment analysis
- azure-ai-ml-py: Custom ML model training and deployment
- azure-ai-vision-imageanalysis-java: Image analysis and computer vision
Source
Maintained by Microsoft. View on GitHub