
Last updated on: 21 Jan 2025

AI Failure Analysis

Overview

AI Failure Analysis provides intelligent insights at both session and build levels to help teams quickly identify, prioritize, and resolve test failures and performance issues. Once Session-Level AI Analysis is completed, Build-Level Analysis can be enabled to aggregate insights across all analyzed sessions within a build.

danger

Important: To view Build-Level Analysis, Session-Level AI Analysis must be completed first.

Prerequisites:

  • Session-level AI analysis must be completed first

Steps

Step 1: Enable Session-Level AI Analysis

  • Navigate to the Sessions section
  • Click the session you want to analyze
  • Enable AI analysis for that session

Step 2: View Build-Level Insights

  • Navigate to the related Build
  • Enable the Build-Level Analysis toggle
  • Data automatically aggregates across all analyzed sessions
  • Build-level insights appear in the Error Trends, Build Performance, and Device Test Results sections

1. Error Trends

Error Trends identifies the most common error patterns across all test sessions in your build, helping you prioritize fixes based on frequency and impact.

How it Works:

  • Analyzes error logs from all sessions
  • Groups similar errors together
  • Ranks errors by occurrence frequency
  • Shows affected session count for each error
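The grouping and ranking steps above can be sketched as follows. This is a minimal illustration only; the session data shape and error names are assumptions for the example, not the product's actual API:

```python
from collections import Counter

# Hypothetical error logs: one list of error names per analyzed session.
sessions = [
    ["ELEMENT NOT FOUND", "TIMEOUT"],
    ["ELEMENT NOT FOUND"],
    ["SLOW RESPONSE", "ELEMENT NOT FOUND"],
]

# Count, for each error, how many sessions it affected
# (session count, not raw occurrence count).
affected = Counter()
for errors in sessions:
    for name in set(errors):  # de-duplicate within a session
        affected[name] += 1

# Rank errors by affected-session count, most frequent first.
for name, count in affected.most_common():
    print(f"{name}: {count} session(s)")
```

Ranking by affected-session count rather than total occurrences is what makes the display useful for prioritization: an error hit once in ten sessions usually matters more than one hit ten times in a single session.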

Key Benefits:

  • Quick Identification: See top issues at a glance
  • Prioritization: Focus on errors affecting multiple sessions
  • Pattern Recognition: Identify recurring problems

Understanding the Display:

  • Error Name: Type of error detected (e.g., ELEMENT NOT FOUND)
  • Session Count: Number of sessions affected

Example Use Case:

If "ELEMENT NOT FOUND" appears in 15 sessions, this indicates a UI locator issue that needs immediate attention across your test suite.

2. Build Performance

What is Build Performance?

Build Performance highlights recurring performance issues detected across analyzed sessions, helping you identify speed bottlenecks and UI responsiveness problems.

How it Works:

  • Monitors execution speed and response times
  • Identifies slow operations and delays
  • Detects UI flakiness and navigation issues
  • Aggregates performance flags from all sessions

Key Benefits:

  • Performance Monitoring: Track build-level performance trends
  • Bottleneck Detection: Identify slow operations
  • UX Issues: Spot flaky UI elements early
  • Optimization Opportunities: Find areas for improvement

Common Performance Flags:

  • slow_page_load: Pages taking longer than expected to load
  • flaky_ui_elements: UI elements behaving inconsistently
  • unused_navigation: Navigation steps that could be optimized
  • slow_api_responses: API calls exceeding response time thresholds
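Aggregating these flags across sessions can be sketched as below. The per-session data structure is an assumption for illustration; only the flag names come from the list above:

```python
from collections import Counter

# Hypothetical performance flags raised in each analyzed session.
session_flags = [
    {"slow_api_responses", "slow_page_load"},
    {"slow_api_responses"},
    {"flaky_ui_elements", "slow_api_responses"},
]

# Aggregate: count the sessions in which each flag was raised.
flag_counts = Counter()
for flags in session_flags:
    flag_counts.update(flags)

# Flags raised in more than one session indicate recurring build-level issues.
recurring = {flag: n for flag, n in flag_counts.items() if n > 1}
print(recurring)  # {'slow_api_responses': 3}
```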

Example Use Case:

If "slow_api_responses" appears repeatedly, investigate API endpoints for optimization opportunities or increase timeout thresholds if appropriate.

3. Device Test Results (Devices Analysed)

What is Device Test Results?

Device Test Results shows pass/fail status for each device tested in your analyzed sessions, helping you identify device-specific issues and ensure cross-device compatibility.

How it Works:

  • Collects test results from all devices
  • Separates passed and failed devices
  • Shows the failure count for each failed device
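The pass/fail separation above can be sketched as follows. The device names and result structure are hypothetical, chosen to mirror the example use case below:

```python
# Hypothetical per-device results aggregated across analyzed sessions:
# device name -> one pass/fail outcome per session.
results = {
    "GOOGLE Pixel4XL": ["fail", "fail", "fail"],
    "Samsung Galaxy S21": ["pass", "pass", "pass"],
}

# Separate passed and failed devices, counting failures per failed device.
failed = {dev: r.count("fail") for dev, r in results.items() if "fail" in r}
passed = [dev for dev, r in results.items() if "fail" not in r]

print(failed)  # {'GOOGLE Pixel4XL': 3}
print(passed)  # ['Samsung Galaxy S21']
```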

Key Benefits:

  • Device Coverage: See which devices were tested
  • Compatibility Issues: Identify device-specific failures
  • Targeted Debugging: Focus on problematic devices

Example Use Case:

If "GOOGLE Pixel4XL" shows 3 failures across 3 sessions while other devices pass, investigate Android version compatibility, screen resolution issues, or device-specific APIs.
