Screen-State Annotations

What Are Screens and States?

Before diving into screen-state annotations, it's helpful to understand what screens and states are:

  • Screen: A distinct page or view in your application (e.g., "Login", "Dashboard", "Cart", "Checkout")
  • State: A specific condition or configuration of that screen (e.g., "Empty form", "Filled form", "With items", "Error message displayed")

A screen can have multiple states. For example, the "Login" screen might have states like "Empty form", "Filled form", or "Error displayed". The "Cart" screen might have states like "Empty cart" or "Cart with items".
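The screen/state relationship above can be pictured as a simple mapping. This is only an illustrative sketch of the mental model; the names are examples from this page, not a data format the tooling requires:

```javascript
// Hypothetical model: each screen maps to the set of states it can be in.
const screens = {
  Login: ['Empty form', 'Filled form', 'Error displayed'],
  Cart: ['Empty cart', 'Cart with items'],
};

// A concrete screen-state is a (screen, state) pair:
const current = { screen: 'Login', state: 'Empty form' };

console.log(screens[current.screen].includes(current.state)); // true
```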

Screen-state annotations help identify exactly where in your application a bug occurs, providing precise context for debugging and tracking issues.

Overview

Screen-state annotations are special comments in SmartTests, placed after a step, that identify which screen and state the test is in once that step has been executed. These annotations ensure that bugs captured by ExploreChimp are tagged to the same screen-state every time the agent runs the test.

How Screen-State Annotation Comments Work in Code

Screen-state annotations are placed after a step and specify the screen-state after that step has been executed. They use a simple comment format:

await page.goto('https://example.com/login');
// @screen: Login @state: Empty form

await page.fill('#username', 'testuser');
await page.fill('#password', 'password');
// @screen: Login @state: Filled form

await page.click('button[type="submit"]');
// @screen: Dashboard @state: Default

Format and Syntax

The annotation format is a single line combining both screen and state:

  • // @screen: <ScreenName> @state: <StateName> - Single line comment with both screen and state

Examples:

  • // @screen: Login @state: Empty form
  • // @screen: Dashboard @state: Default
  • // @screen: Cart @state: With items

Important: The annotation describes the screen-state after the preceding step(s) have been executed.
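Because the format is a single structured comment line, it can be matched mechanically. As a rough sketch (the regex and the `parseAnnotation` helper are illustrative, not part of the product):

```javascript
// Illustrative sketch: extract a screen-state annotation from a line of
// SmartTest source. The regex mirrors the documented format:
//   // @screen: <ScreenName> @state: <StateName>
const ANNOTATION = /^\s*\/\/\s*@screen:\s*(.+?)\s+@state:\s*(.+?)\s*$/;

function parseAnnotation(line) {
  const m = line.match(ANNOTATION);
  return m ? { screen: m[1], state: m[2] } : null;
}

console.log(parseAnnotation('// @screen: Login @state: Empty form'));
// → { screen: 'Login', state: 'Empty form' }
```

Ordinary code lines produce no match, so only the annotation comments carry screen-state meaning.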

Using Annotations During Explorations

If Annotations Are Present

When screen-state annotation comments are present in your SmartTest code:

  • They are used for tagging bugs at that specific screen-state during explorations
  • Bugs found during that portion of the test will be associated with the annotated screen-state
  • This provides precise context about where issues occur

If Annotations Are Not Present

When screen-state annotations are not present:

  • The agent deduces screen states during exploration by:
    • Looking at the screen: Analyzing the current visual state, DOM structure, and URL
    • Reviewing the journey: Considering the steps taken so far in the test journey
    • Using known vocabulary: Referencing the vocabulary of known screen-states from your Atlas SiteMap to ensure no duplicate names for the same screen
  • After exploration completes, the agent updates the script with annotation comments for the detected screen-states
  • This allows you to benefit from screen-state tagging even if you haven't manually annotated your tests, while maintaining consistency with your existing screen-state naming
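The vocabulary-matching step can be pictured as a normalized lookup. This is a hypothetical sketch of the idea, not the agent's actual logic; the `canonicalScreenName` helper and its normalization rule are assumptions for illustration:

```javascript
// Illustrative sketch: reuse an existing screen name from the known
// vocabulary when a newly detected name differs only in case or spacing,
// so "log in" and "Login" don't become duplicate screens.
const knownScreens = ['Login', 'Dashboard', 'Cart'];

function canonicalScreenName(detected, vocabulary) {
  const norm = (s) => s.toLowerCase().replace(/\s+/g, '');
  const match = vocabulary.find((name) => norm(name) === norm(detected));
  return match ?? detected; // fall back to the new name if nothing matches
}

console.log(canonicalScreenName('log in', knownScreens)); // 'Login'
console.log(canonicalScreenName('Checkout', knownScreens)); // 'Checkout'
```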

Best Practices

Getting Started with Annotations

  1. Run once to auto-annotate: Run your test once with "Update Script Annotations" checked. This will automatically add screen-state annotations based on the agent's detection
  2. Manually refine to fit your mental model: After auto-annotation, manually update the annotations to match how you think about your system's screens and states

Controlling Granularity and State Definition

Annotations let you control how granular your states are and how they are defined, based on what matters for your testing and bug-tracking needs. Different teams may define states differently depending on their mental model:

Example: Shopping Cart States

You might define cart states based on:

  • Content-based: Empty cart vs Cart with items
  • Stock status: Cart with in-stock items vs Cart with out-of-stock items

The same screen can have different state definitions depending on what's important for your testing context. Annotations give you the flexibility to tag bugs at the level of detail that makes sense for your application.
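The two granularity choices above could, for instance, be expressed as different state-naming functions over the same cart data. These helpers are hypothetical; the state names are ones a team might choose, not anything mandated by the tooling:

```javascript
// Illustrative sketch: the same cart maps to different state names
// depending on which distinction matters for your testing context.
const cart = [
  { name: 'Widget', inStock: true },
  { name: 'Gadget', inStock: false },
];

// Content-based granularity: only "is the cart empty?" matters.
function contentState(items) {
  return items.length === 0 ? 'Empty cart' : 'Cart with items';
}

// Stock-status granularity: also distinguish out-of-stock contents.
function stockState(items) {
  if (items.length === 0) return 'Empty cart';
  return items.every((i) => i.inStock)
    ? 'Cart with in-stock items'
    : 'Cart with out-of-stock items';
}

console.log(contentState(cart)); // 'Cart with items'
console.log(stockState(cart)); // 'Cart with out-of-stock items'
```

Whichever function matches your mental model determines the `@state:` names you put in the annotations.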

Examples

Authentication Example

await page.goto('https://app.example.com/login');
// @screen: Login @state: Empty

await page.fill('#email', 'user@example.com');
await page.fill('#password', 'password123');
// @screen: Login @state: Filled

await page.click('button[type="submit"]');
await page.waitForURL('**/dashboard');
// @screen: Dashboard @state: Authenticated

Screen-state annotations are a powerful way to ensure bugs are properly contextualized and that your Atlas SiteMap accurately reflects your application's structure.