
Measure App KPIs

The Vega App KPI Visualizer helps you monitor and optimize your app's performance by measuring key metrics that affect the user experience. The tool provides data about app launch times, memory usage, and user interface (UI) responsiveness. By tracking these KPIs regularly, you can verify that your app delivers the smooth, responsive experience your users expect. Before you publish your app to the App Store, measure the performance of the release variant to confirm that users get a responsive experience.

This page explains how to:

  • Set up and run the KPI Visualizer
  • Measure your app's key performance metrics
  • Interpret the results

Prerequisites

Before you use the Vega App KPI Visualizer, make sure you:

  1. Install the Vega Performance API module:

     npm install @amazon-devices/kepler-performance-api
    
  2. Read the App KPI metrics and guidelines, Launch scenarios, and Fully drawn marker sections on this page.

  3. To measure foreground memory, video fluidity, or UI fluidity, follow the instructions in Measure Fluidity and Foreground Memory.

  4. Choose your measurement method: Measure KPIs in VS Code or Measure KPIs with CLI commands.

App KPI metrics and guidelines

The following table presents both main KPIs and their associated Micro KPIs. Main KPIs measure overall performance metrics, while Micro KPIs represent specific measurable components that contribute to a main KPI.

For example, Application JavaScript bundle load time is a Micro KPI that contributes to the overall time-to-first-drawn (TTFD) KPI. A dash (—) in the Micro KPI column indicates that no specific sub-components are currently measured for that main KPI.

KPI | Micro KPI | Unit | Launch scenario | Description | Guideline
Time-to-first-frame (TTFF) | — | Seconds (s) | Cool start app launch | Measures the time from app launch to first frame render. The operating system (OS) calculates TTFF in cool start without requiring app code markers. | < 1.5 s
TTFF | — | Seconds (s) | Warm start app launch | Measures the time an app takes to move from background to foreground. The OS calculates TTFF in warm start without requiring app code markers. | < 0.5 s
TTFD | — | Seconds (s) | Cool start app launch | Measures the time from launch to app ready for user interaction. You must determine when your app is fully drawn and ready for user interaction. For example, the app might require asynchronous loading of assets to complete before it's ready. See Fully drawn marker. | < 8.0 s
TTFD | Application JavaScript bundle load time | Seconds (s) | Cool start app launch | Measures the time it takes for the JavaScript bundle to complete loading during app initialization. The measurement starts when bundle loading begins and ends when the bundle is fully loaded and ready for execution. | N/A
TTFD | Network calls time | Seconds (s) | Cool start app launch | Measures the time it takes for all network calls to complete during app initialization. Because network calls are asynchronous, this measurement accounts for parallel requests and represents the total duration until all calls resolve. | N/A
TTFD | — | Seconds (s) | Warm start app launch | Measures the time it takes for an app to become fully interactive after moving from the background to the foreground. As in the cool start scenario, you must determine when your app is fully drawn and ready for user interaction, but you must invoke the callback when your app is fully drawn and its state changes to foreground. See Fully drawn marker. | < 1.5 s
Foreground Memory | — | Mebibytes (MiB) | App in foreground | Measures the app's Proportional Set Size (PSS) when active. PSS reflects the private memory plus the proportional share of shared memory that the process effectively holds in RAM. The Vega App KPI Visualizer measures performance during launch, foreground transition, and memory-intensive actions. | < 400 MiB
Background Memory | — | Mebibytes (MiB) | App in background | Measures the app's PSS after it launches and moves to the background. This metric captures the app's memory footprint when it's inactive but ready for quick resumption. | < 150 MiB
Video Fluidity | — | Percent (%) | Video playback | Measures how smoothly the app streams video. Video fluidity represents the percentage of time the video plays at its intended frame rate. | > 99%
Time-to-first-frame video fluidity (TTFVF) | — | Seconds (s) | Video playback | Measures the time from video play start to the first video frame. Make sure the last UI interaction in the prep member of your test scenario triggers video streaming. See Measure Fluidity and Foreground Memory for details. | < 2.5 s
3+ Video Consecutive Dropped Frames | — | Count | Video playback | Counts instances where the app drops 3 or more video frames in a row during playback, causing noticeable interruptions in video streaming quality. | N/A
5+ Video Consecutive Dropped Frames | — | Count | Video playback | Counts instances where the app drops 5 or more consecutive frames during video playback, indicating significant playback disruptions. | N/A
UI Fluidity | — | Percent (%) | UI interaction (for example, vertical and horizontal scrolling) | Measures the smoothness of the UI during intensive user interactions. UI fluidity represents the percentage of frames the app successfully renders during on-screen UI interactions. | > 99%
App Event Response Time - Focus | — | Milliseconds (ms) | UI interaction (for example, vertical and horizontal scrolling) | Measures the scheduling latency between the native UI thread and the JavaScript thread for focus events (onFocus/onBlur). Scheduling delays over 200 ms indicate JavaScript thread congestion, which hurts the app's interaction responsiveness. See the sketch after this table. | < 200 ms
3+ Consecutive Dropped Frames | — | Count | UI interaction (for example, vertical and horizontal scrolling) | Counts instances where the app drops 3 or more consecutive frames during scrolling interactions, causing noticeable stuttering in the UI. | N/A
5+ Consecutive Dropped Frames | — | Count | UI interaction (for example, vertical and horizontal scrolling) | Counts instances where the app drops 5 or more consecutive frames during scrolling interactions, causing significant stuttering in the UI. | N/A
5+ Consecutive Delayed Events - Focus | — | Count | UI interaction (for example, vertical and horizontal scrolling) | Counts instances where the app experiences 5 or more consecutive delayed focus events, causing significant delay in event processing and interaction responsiveness. | N/A
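
The App Event Response Time - Focus row above measures how quickly the JavaScript thread services onFocus/onBlur events. The following TypeScript sketch illustrates the idea of keeping focus handlers cheap. It assumes a focusable React Native component that receives onFocus/onBlur, as the table describes; the exact component and props on your Vega setup may differ, and prefetchArtwork is a hypothetical helper:

import React, { useCallback } from 'react';
import { Pressable, Text } from 'react-native';

// Sketch only: on TV-focused React Native builds, focusable components
// receive onFocus/onBlur. Keep these handlers cheap -- heavy synchronous
// work here delays event scheduling and hurts the focus-response KPI.
export const FocusableTile = ({ title }: { title: string }) => {
  const handleFocus = useCallback(() => {
    // Cheap synchronous work only (for example, toggling a style flag).
    // Defer anything expensive so it runs after the event is acknowledged.
    setTimeout(() => {
      // prefetchArtwork(title); // hypothetical helper, not a Vega API
    }, 0);
  }, [title]);

  return (
    <Pressable onFocus={handleFocus} onBlur={() => { /* keep cheap too */ }}>
      <Text>{title}</Text>
    </Pressable>
  );
};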

Launch scenarios

The KPI Visualizer measures two types of app launch scenarios to evaluate performance for TTFF and TTFD:

  • Cool start - When a user launches an app for the first time, and the system loads all resources and dependencies into memory.

  • Warm start - When a user moves an app from background (an inactive state) to foreground (an active state) with some resources and dependencies already in memory.

Fully drawn marker

A fully drawn marker signals when your app becomes interactive for users. The marker:

  • Indicates when your app completes loading its essential components.
  • Marks when users can start interacting with your app.
  • Helps measure TTFD performance.

To implement the fully drawn marker in your app:

  1. Add the useReportFullyDrawn hook to your app.
  2. Place markers at these key points:
    • Cool start - After loading initial data and rendering the main screen.
    • Warm start - When your app becomes responsive after foregrounding.

The following code sample shows how to add a fully drawn marker:


import { useReportFullyDrawn } from '@amazon-devices/kepler-performance-api';
import React, { useCallback, useEffect, useState } from 'react';
import { useKeplerAppStateManager } from '@amazon-devices/react-native-kepler';
...
...
export const App = () => {
  const reportFullyDrawnCallback = useReportFullyDrawn();
  const keplerAppStateManager = useKeplerAppStateManager();
  const [appState, setAppState] = useState(keplerAppStateManager.getCurrentState());

  // Use a useEffect Hook so that fully drawn reporting runs after the
  // first render following a cool launch.

  // If the app performs additional asynchronous processing that must
  // complete before it is fully drawn, pass the completion state in the
  // array of dependencies and check that state inside the hook.
  useEffect(() => {
    reportFullyDrawnCallback();
  }, [reportFullyDrawnCallback]);

  // Emit the fully drawn marker on the first draw after a warm launch.
  const handleAppStateChange = useCallback((stateChange: any) => {
    if (
      appState.match(/^(inactive|background)$/) &&
      stateChange === 'active'
    ) {
      reportFullyDrawnCallback();
    }
    if (stateChange.match(/^(inactive|background|active|unknown)$/)) {
      setAppState(stateChange);
    }
  }, [appState, reportFullyDrawnCallback]);

  useEffect(() => {
    const changeSubscription = keplerAppStateManager.addAppStateListener(
      'change',
      handleAppStateChange,
    );
    return () => {
      changeSubscription.remove();
    };
  }, [handleAppStateChange]);
  ...
  ...
  return (
    <View style={styles.container}>
      ...
      ...
    </View>
  );
};
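
In this sample, the warm-start report fires on the first transition from inactive or background to active. If your app refreshes content when it returns to the foreground, consider invoking the callback only after that refresh completes, similar to the dependency-array approach the comments describe for cool start, so that TTFD reflects the moment the app is genuinely interactive.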

Measure KPIs in VS Code

  1. Open the command palette in VS Code:

    • For Mac: Shift + Command ⌘ + P
    • For Linux: Ctrl + Shift + P
  2. Enter Kepler: Launch App KPI Visualizer and press Enter.

    Launch App KPI Visualizer from the command palette

    You can also click App KPI Visualizer in the Kepler Studio panel, under the App Performance Tools section.

    Select App KPI Visualizer from App Performance Tools

  3. Select a use case:

    Select a use case

    To measure foreground memory and video streaming fluidity, create a custom test scenario that matches how your users interact with your app.

    For example:

    • For memory testing - Include video playback or image loading.
    • For streaming - Include typical video watching patterns.

    For UI fluidity measurement, develop a test scenario that replicates the most common user interactions in your app. Without a custom scenario, the default test scrolls vertically and horizontally through your app's front page, which might not accurately represent your users' behavior.

    For guidance on generating test scenarios, see Measure Your App’s UI Fluidity.

    After you select your use cases, the visualizer starts and performs three iterations for the following KPIs:

    • Cool start KPIs

      • Launch the test app on the device.
      • Wait for 10 s for the test app to completely load.
      • Close the test app.
      • Process the KPIs.
    • Warm start KPIs

      • Launch the test app on the device.
      • Launch another app on the device, moving the test app to the background.
      • Launch the test app, moving it to foreground.
      • Wait for 15 s for the test app to completely load.
      • Close the test app.
      • Process the KPIs.
    • Foreground memory KPIs

      • Launch the test app on the device.
      • Perform the steps specified in the test scenario, and capture the KPIs for analysis and reporting.
      • Close the test app.
      • Process the KPIs.
    • Background memory KPIs

      • Launch the test app on the device.
      • Launch another app on the device, moving the test app to the background.
      • Wait for 15 s to collect the KPIs of the test app.
      • Close the test app.
      • Process the KPIs.
    • UI fluidity KPIs

      • Launch the test app on the device.
      • Choose your test method:
        • Custom test - Use your own UI interaction scenarios.
        • Default test - Uses standard scrolling patterns:
          • 2 sets of horizontal scrolls (5 left, 5 right)
          • 2 sets of vertical scrolls (5 down, 5 up)
          • 900 ms between actions
      • Click Cancel or press the Esc key to end the test.

        Select Yes to use a custom case scenario
      • Capture the KPIs for analysis and reporting.
      • Close the test app.
      • Process the KPIs.
    • Video playback fluidity KPIs

      • Launch the test app on the device.
      • Perform the steps in the video playback test scenario, and capture the KPIs for analysis and reporting.
      • Close the test app.
      • Process the KPIs.

    To stop the visualization process, click Cancel.

    KPI Visualizer running
  4. Choose whether to ignore trace loss during the test.

    Option to ignore trace loss

    When your app has performance issues, significant trace loss can occur. In that case, the Vega App KPI Visualizer can't load the traces, and the KPIs appear as N/A in the report.

    You can choose to ignore trace loss when generating KPI reports so that the metrics still appear. However, ignoring trace loss produces KPI values that look better than the app's actual performance.

  5. View KPI scores in the visualizer window.

    The window shows the P90 (90th percentile) value calculated from three test iterations. For one common way to compute a P90, see the sketch after this list.

  6. To assess the results, see Understand the performance report.
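
For intuition, here is a nearest-rank P90 computation in TypeScript. This is a sketch of one common percentile definition, not necessarily the visualizer's exact aggregation method:

// Nearest-rank percentile: sort ascending, take the ceil(p/100 * n)-th value.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based rank
  return sorted[rank - 1];
}

// With three iterations, the nearest-rank P90 is the slowest run:
console.log(percentile([1.2, 1.4, 1.3], 90)); // 1.4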

Measure KPIs with CLI commands

  1. At the command prompt, run the kepler exec perf doctor command to check if your host and target devices are ready.

    kepler exec perf doctor [--app-name]
    

    Set --app-name to the package ID from the manifest.toml file.

    Example:

    kepler exec perf doctor --app-name=com.amazondeveloper.keplervideoapp.main
    
    Firmware: Stable build (<device-user> OS 1.1 (TV Mainline/1387)).
    ✅ Network: Connected
    ✅ Free disk space: 43.31 GB available.
    ✅ Appium: Installed (version 2.2.2)
    ✅ Appium driver for Vega: Installed - kepler@3.18.0 [installed (linked from /Users/.../AppiumVegaDriver)]
    ✅ com.amazondeveloper.keplervideoapp.main is installed.
    
    Collecting CPU (4 cores) and Memory data...
    ❌ Max User CPU usage at 241.20%. Check for unwanted processes.
    ❌ Max System CPU usage at 222.80%. Check for unwanted processes.
    ✅ Average User CPU usage at 166.16%
    ✅ Average System CPU usage at 101.84%
    ✅ Min memory available at 30.80%
    ✅ Average memory available at 32.16%
    
    ! Device: Not ready for performance testing. Please review lines with X (error) and ! (warnings) symbols.
    
  2. Start the Vega App KPI Visualizer:

    kepler exec perf kpi-visualizer --app-name=[app-name]
    

    For kpi-visualizer, --app-name is the only required parameter. Replace [app-name] with the ID of the default interactive component from the manifest.toml file.

  3. (Optional) Specify the number of iterations. The default is 3, or 30 when you use --certification.

    --iterations [number] 
    
  4. (Optional) Specify which KPI to measure:

    --kpi [kpi name]
    

    Without this parameter, the visualizer measures cool start TTFF and TTFD KPIs by default. For a combined example that uses --kpi with other parameters, see the sample command after this list.

  5. View all available options:


     kpi-visualizer --help
    

    Example:

     kepler exec perf kpi-visualizer --help
    
     NAME:
    
     KPI Visualizer Tool
    
     DESCRIPTION:
    
     Measures key performance metrics like app launch times, memory usage, and UI responsiveness to optimize your app's user experience.
    
     SYNOPSIS:
    
     kepler exec perf kpi-visualizer [parameters]
    
     Use 'kepler exec perf command --help' to retrieve information for a specific command.
    
     PARAMETERS:
    
     --iterations ITERATIONS
           Sets the number of times to run the test. This overrides .conf setting.
     --record-cpu-profiling
           Enables CPU profile recording during test execution.
     --sourcemap-file-path SOURCEMAP_FILE_PATH
           Specifies the path to the source map file.
     --grpc-port port
           Specifies the port number for the gRPC server.
     --certification
           Runs tests in certification mode using 30 iterations and 90th percentile aggregation.
     --expected-video-fps EXPECTED_VIDEO_FPS
           Specifies the target FPS for the app under test.
     --kpi KPI       
           (Optional) Specifies the performance metric to measure.
    
           Supported scenarios:
           1. cool-start-latency -  Measures app launch latency from cold start. Includes both TTFF and TTFD by default.
           2. ui-fluidity - Measures smoothness of UI interactions.
           3. warm-start-latency - Measures first frame display latency when resuming an app from background to foreground.
           4. foreground-memory -  Measures app memory usage while in foreground state.
           5. background-memory -  Measures app memory usage while in background state.
           6. video-fluidity - Measures smoothness of video playback. Requires a test scenario (--test-scenario) that initiates video playback.
    
     --test-scenario TEST_SCENARIO
           Specifies the Python script that defines the UI test scenario. Use the generate-test-template command to create a test scenario template.
    
     --monitor-processes MONITOR_PROCESSES [MONITOR_PROCESSES ...]
           Specifies additional services to monitor during testing.
    
           Example:
           --monitor-processes webview.renderer_service
    
     --ignore-trace-loss
           Skips trace data loss verification during test.
    
     --help  
           Shows this help message.
    
     --version, -v
           Shows current version of this perf tool.
    
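
As an illustration, the following command combines several of the parameters above to measure UI fluidity with a custom test scenario over five iterations. The app name is the sample app used earlier on this page, and my_scenario.py is a placeholder for your own scenario script:

 kepler exec perf kpi-visualizer --app-name=com.amazondeveloper.keplervideoapp.main --kpi ui-fluidity --test-scenario my_scenario.py --iterations 5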

After the visualization completes, you see a report summary in stdout.

Example report:

Performance Analyzer KPI Report

Firmware version: Device OS 1.1 (VegaMainlineTvIntegration/XXXX), serial number: XXXXXXXXXXXXXXXX
Date: 01/09/2025, test: app-background, iterations requested: 3, iterations completed: 3, duration: 15 seconds

Memory Statistics
                         | n   | min    | mean     | max    | stdev | ci (+/-)
App Resident Memory (kB) | 104 | 131044 | 132139.0 | 133136 | 865.2 | 140.8 √

When KPIs show -1, it indicates unavailable data for the mean, min, and max values. For TTFD, this might occur when the app doesn't call the Report Fully Drawn API.

When KPIs don't appear at all, significant trace loss might have occurred during data collection. The KPI Visualizer won't load the traces and shows no KPIs. To view KPIs despite trace loss, rerun the visualizer with:

--ignore-trace-loss

Understand the performance report

The Vega App KPI Visualizer shows a performance report, which includes:

  1. Test information

    • Date - The date when the system captures the KPI data.
    • Device Serial ID - The unique identifier of the device running the app.
    • App - The name of the app for which the KPI data appears.
    • Number of iterations - The number of times the KPI measurement process runs.
  2. KPI Name - The name of the KPI with its unit.

  3. Test Name - The name of the test or the completed use case.

  4. KPI Health - A color-coded system representing the health of each KPI:

    • 🟢 (Green) - Meets the recommended guideline.
    • 🟡 (Yellow) - Within 10% of the guideline.
    • 🔴 (Red) - Exceeds the guideline by more than 10%.
  5. KPI Score - The numeric value of the KPI, which appears in the same unit as specified in the KPI name. If the visualizer can't calculate the KPIs, it displays "N/A" for the KPI score and lists the KPI Health as "Unknown."

  6. Guideline - The recommended value or range for the KPI score based on industry standards or performance targets.

Example results

The following images show sample results for each use case. Pay attention to the KPI Health indicators and scores relative to guidelines.

Use case: Cool start TTFF and TTFD

The following result shows launch performance metrics. Green indicators show that the app meets the launch time guidelines.

Performance report for cool start TTFF and TTFD

Use case: Warm start TTFF and TTFD

The following result shows how your app resumes from the background state. Compare TTFF and TTFD times to evaluate optimization needs.

Performance report for warm start TTFF and TTFD

Use case: Foreground memory

The following result shows your app's memory usage during active use. Monitor this metric to prevent performance issues from excessive memory consumption.

Performance report for foreground memory

Use case: Background memory

The following result shows your app's memory footprint while in the background state. This metric is important for understanding your app's impact on system resources when it's inactive.

Performance report for background memory

Use case: UI fluidity

The following result shows how smoothly your app handles user interactions. The percentage indicates frames successfully rendered during scrolling and navigation.

Performance report for UI fluidity

Use case: Video playback fluidity

The following result shows how smoothly your app plays video content. The percentage represents successful frame delivery at the intended playback rate.

Performance report for video playback fluidity

Open a KPI report

After the Vega App KPI Visualizer completes the test scenarios, it generates the following report files:

  • aggregated-kpi-report-[timestamp].json - Consolidates KPI data from all test scenarios.

  • [test-name]-kpi-report-[timestamp].json - Creates one file for each individual test scenario. The [test-name] identifies the specific scenario.

The [timestamp] tells you when the Vega App KPI Visualizer generated the report.
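
Because the reports are plain JSON, you can inspect them with a short script. The following TypeScript sketch is not part of the Vega tooling; since the report schema isn't documented on this page, it only parses a report file and prints its top-level keys:

// inspect-kpi-report.ts -- run with: npx ts-node inspect-kpi-report.ts <report-file>
import { readFileSync } from 'node:fs';

const path = process.argv[2]; // for example, aggregated-kpi-report-[timestamp].json
const report = JSON.parse(readFileSync(path, 'utf8'));
console.log(`Top-level keys in ${path}:`, Object.keys(report));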

To open a KPI report, use one of the following methods:

Option 1 (Preferred): Use Quick Open

This method provides access to your recordings through VS Code's interface or the CLI.

From VS Code:

  1. Find the KPI report file (example: app-launch-kpi-report-[timestamp].json) using VS Code Quick Open or in the project's generated directory.
  2. Click the file once to preview or twice to open in active mode.

From CLI:

  1. Open a terminal window and enter:
   code <path-to-recording-file>

If your terminal doesn't recognize the code command:

  1. Open VS Code.
  2. Open the command palette:

    • Mac: Cmd+Shift+P
    • Linux: Ctrl+Shift+P
  3. Run "Shell Command: Install code command in PATH".
  4. Retry the command.

Option 2: Use the VS Code command palette

This method provides access to your recordings through VS Code's built-in command interface. Use it if Quick Open isn't available.

  1. Open VS Code.
  2. Open the command palette:

    • Mac: Cmd+Shift+P
    • Linux: Ctrl+Shift+P
  3. Enter Kepler: Open Recording View.
  4. Select the file you want to open, such as app-launch-kpi-report-[timestamp].json.

Last updated: Sep 30, 2025