Module 8

Advanced Techniques

Once you have mastered the fundamentals, these advanced techniques will help you handle the edge cases that separate good test suites from great ones.

[Figure: A Swiss Army knife with multiple tools. Advanced techniques are tools in your testing toolkit; use the right one for each job.]

Dear Marilyn: Our application runs on three different browsers, two operating systems, and has both English and Spanish versions. Do I really need to write separate tests for each combination?

— Overwhelmed in Orlando

Dear Overwhelmed: Absolutely not. What you need is not more tests, but smarter tests. ABT provides several techniques for handling variations without multiplying your test count. Let me introduce you to the Swiss Army knife of test automation.

Variations: One Test, Many Configurations

A variation is a configuration parameter that changes how a test runs without changing what it tests. Think of it as a dial you can turn to switch contexts.

Common Variation Types

Browser Variations

  • Chrome, Firefox, Safari, Edge
  • Mobile vs. desktop
  • Different screen resolutions

Environment Variations

  • Dev, QA, Staging, Production
  • Different database configurations
  • Cloud regions (US, EU, APAC)

Localization Variations

  • Language (EN, ES, FR, DE)
  • Currency formats
  • Date/time formats

User Variations

  • Role-based (Admin, User, Guest)
  • Permission levels
  • Subscription tiers

How Variations Work

# Test: Login Verification
# Variations: Browser, Language

open_browser(type: [Browser])
set_language(locale: [Language])
navigate_to(page: "Login")
enter_credentials(user: "testuser", password: "secret")
click_login()
check_welcome_message(expected: [Welcome_Text])

Browser   Language   Welcome_Text
Chrome    en-US      "Welcome back!"
Firefox   es-ES      "¡Bienvenido!"
Safari    fr-FR      "Bienvenue!"

One test, two variation dials, nine executions (3 browsers × 3 languages); the expected welcome text is keyed to the language, not the browser. Without variations, you would need to write nine separate tests.
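
If your tooling lets you drive variations from a test runner, the same idea can be expressed in, say, pytest. A minimal sketch, assuming hypothetical helpers (open_browser, set_language, log_in, read_welcome_message) supplied by your own framework:

import itertools
import pytest

BROWSERS = ["Chrome", "Firefox", "Safari"]
# Expected welcome text is keyed to the language, not the browser.
WELCOME_TEXT = {
    "en-US": "Welcome back!",
    "es-ES": "¡Bienvenido!",
    "fr-FR": "Bienvenue!",
}

# itertools.product crosses the two variation axes: 3 × 3 = 9 runs.
@pytest.mark.parametrize("browser,locale", itertools.product(BROWSERS, WELCOME_TEXT))
def test_login_welcome(browser, locale):
    session = open_browser(browser)          # hypothetical framework helper
    set_language(session, locale)            # hypothetical framework helper
    log_in(session, "testuser", "secret")    # hypothetical framework helper
    assert read_welcome_message(session) == WELCOME_TEXT[locale]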

Regular Expressions: Pattern Matching

Dear Marilyn: My test keeps failing because the order number changes every time. How can I verify "Order #12345 created" when the number is always different?

— Pattern Seeker in Portland

Dear Pattern Seeker: You need to verify the pattern, not the exact value. Regular expressions let you say "I expect 'Order #' followed by some digits, followed by ' created'."

Common Regex Patterns for Testing

Order Numbers

Order #\d+ created

Matches: "Order #12345 created", "Order #1 created", "Order #999999 created"

Timestamps

\d{4}-\d{2}-\d{2}

Matches: "2024-01-15", "2025-12-31"

Email Addresses

[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}

Matches: "[email protected]", "[email protected]"

Currency Values

\$[\d,]+\.\d{2}

Matches: "$1,234.56", "$0.99", "$1,000,000.00"

Using Regex in Checks

# Instead of exact match:
check_message(expected: "Order #12345 created")

# Use pattern match:
check_message(pattern: "Order #\\d+ created")
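
For readers who want to try this outside ABT keywords, here is the same check in plain Python using the standard re module (the message value is just an illustration):

import re

message = "Order #48213 created"    # the number changes on every run

# Exact comparison breaks whenever the number changes;
# matching the pattern verifies the shape instead.
assert re.fullmatch(r"Order #\d+ created", message)

# The other patterns above work the same way:
assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", "2024-01-15")
assert re.fullmatch(r"\$[\d,]+\.\d{2}", "$1,234.56")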

Graphics & Media Testing

Not everything can be verified by reading text. Sometimes you need to verify that an image looks correct, a chart displays properly, or a video plays.

Visual Verification Techniques

1. Screenshot Comparison

Capture a "golden" screenshot and compare future runs against it. Useful for catching unintended visual changes.

2. Region Verification

Check that a specific region of the screen matches expected content. More resilient to layout changes than full-page comparison.

3. OCR Verification

Extract text from images using Optical Character Recognition. Useful for verifying text in charts, graphs, or PDFs.

4. Media Playback

Verify that audio/video plays, reaches expected duration, and does not error out during playback.

Example: Chart Verification

# Verify sales chart displays correctly
navigate_to(page: "Sales Dashboard")
wait_for_chart_load(chart_id: "monthly_sales")

# Visual verification
check_picture(region: "sales_chart", baseline: "expected_chart.png", tolerance: 5%)

# Data verification via OCR
check_chart_label(chart_id: "monthly_sales", label: "January", value_pattern: "\\$[\\d,]+")
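
If your toolkit does not provide a check_chart_label action, OCR verification can be sketched in Python with the pytesseract library (it requires the Tesseract engine to be installed; the file name here is illustrative):

import re
from PIL import Image
import pytesseract

# Extract whatever text Tesseract can read from the rendered chart.
chart_text = pytesseract.image_to_string(Image.open("monthly_sales_chart.png"))

# Verify the label is present and some currency-formatted value was rendered.
assert "January" in chart_text
assert re.search(r"\$[\d,]+", chart_text)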

Check Picture: Deep Dive

The check_picture action is more sophisticated than a simple screenshot comparison. Understanding its options will help you create robust visual tests.

Absolute vs. Relative Checks

Absolute Checks

Baseline images are stored in a central "Picture Checks" folder. Use for images that should be consistent across all tests.

check_picture(baseline: "company_logo.png")

Relative Checks

Baseline images are stored within the test module. Use for context-specific images that vary by test.

check_picture(baseline: "./expected_state.png")
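
One way to picture the difference: a framework might resolve the two baseline styles like this. A minimal sketch; the folder names and the resolve_baseline helper are hypothetical, not a real API:

from pathlib import Path

CENTRAL_BASELINES = Path("PictureChecks")    # shared "Picture Checks" folder

def resolve_baseline(name: str, module_dir: Path) -> Path:
    # Relative names ("./expected_state.png") resolve beside the test module;
    # bare names ("company_logo.png") resolve against the central folder.
    if name.startswith("./"):
        return module_dir / name[2:]
    return CENTRAL_BASELINES / name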

Tolerance Settings

Real-world images rarely match pixel-for-pixel. Tolerance settings let you define acceptable differences.

Parameter         Description                                    Example
tolerance         Percentage of pixels that can differ           tolerance: 5%
color_threshold   How different a pixel color can be (0-255)     color_threshold: 10
ignore_regions    Areas to exclude from comparison               ignore: ["timestamp", "ads"]
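
Under the hood, tolerance and color_threshold combine roughly as in the following Pillow-based sketch. The images_match function is hypothetical; its parameter names mirror the table above:

from PIL import Image, ImageChops

def images_match(actual_path, baseline_path, tolerance=0.05, color_threshold=10):
    actual = Image.open(actual_path).convert("RGB")
    baseline = Image.open(baseline_path).convert("RGB")
    if actual.size != baseline.size:
        return False                  # different dimensions never match
    # A pixel counts as "different" only when some channel deviates by more
    # than color_threshold; the check passes when the share of such pixels
    # stays within tolerance.
    diff = ImageChops.difference(actual, baseline)
    differing = sum(1 for px in diff.getdata() if max(px) > color_threshold)
    return differing / (actual.size[0] * actual.size[1]) <= tolerance

A production version would also mask the ignore_regions before diffing.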

Factors That Can "Spoil" Picture Checks

  • Dynamic test data (timestamps, IDs)
  • Environment differences (fonts, rendering)
  • Random elements (ads, recommendations)
  • Animation states (loading spinners)
  • Anti-aliasing differences
  • Screen resolution variations

Project Subscription: Reuse Across Projects

When multiple projects share common functionality, you do not want to duplicate test modules. Project Subscription lets you create a "library" of reusable test assets.

How Subscription Works

Core Library (shared)
  • login_actions
  • navigation_actions
  • common_checks

Project A, Project B, and Project C each subscribe to the Core Library and pick up its shared modules.

Good Candidates for Sharing

  • Authentication actions (login, logout, password reset)
  • Navigation actions (menu, breadcrumb, search)
  • Common UI checks (error messages, notifications)
  • Data setup utilities (create test user, seed data)

Keep Project-Specific

  • Business-specific workflows
  • Custom UI components
  • Project-specific data formats
  • Unique validation rules
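
In code terms, subscription behaves much like importing from a shared package. A sketch of the idea in plain Python, with an entirely hypothetical layout:

# core_library/login_actions.py -- shared, maintained once
def log_in(user: str, password: str):
    """Shared authentication action used by every subscribing project."""
    ...

# project_a/test_invoices.py -- a subscriber
from core_library.login_actions import log_in    # reused, never copied

def test_invoice_creation():
    log_in("testuser", "secret")
    ...  # business-specific invoice steps stay in Project A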

Test Suites: Organizing Execution

Dear Marilyn: We have hundreds of test modules. How do we decide which tests to run for a nightly build versus a full regression?

— Selecting in Seattle

Dear Selecting: You need Test Suites—logical groupings of test modules that can be executed together. There are two approaches to creating suites, and most teams use both.

1. Predefined Suites

Manually curated lists of test modules. You explicitly add or remove tests from the suite.

# Smoke Test Suite
• Login_Tests
• Dashboard_Load
• Critical_Workflow
• Logout_Tests

Best for: Smoke tests, critical path, release validation

2. Query-Based Suites

Dynamic selection based on criteria. Tests are included if they match the query conditions.

# All Invoice Tests
WHERE module_name LIKE "Invoice%"
  AND priority = "High"
  AND last_modified >= "2024-01-01"

Best for: Regression, feature-specific, recent changes
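
A query-based suite is, at heart, a filter over test-module metadata. A minimal Python sketch with illustrative data:

from datetime import date

# Hypothetical metadata records for three test modules.
modules = [
    {"name": "Invoice_Create", "priority": "High", "modified": date(2024, 3, 1)},
    {"name": "Invoice_Void",   "priority": "Low",  "modified": date(2024, 2, 9)},
    {"name": "Login_Tests",    "priority": "High", "modified": date(2024, 2, 2)},
]

# The suite recomputes itself whenever the metadata changes.
suite = [m for m in modules
         if m["name"].startswith("Invoice")
         and m["priority"] == "High"
         and m["modified"] >= date(2024, 1, 1)]
# -> only Invoice_Create qualifies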

Common Suite Strategies

Suite Type         When to Run             Selection Method                     Duration
Smoke              Every commit            Predefined (10-20 critical tests)    5-15 minutes
Nightly            Daily overnight         Query (all high + medium priority)   2-4 hours
Full Regression    Weekly / pre-release    Query (all active tests)             8+ hours
Feature-Specific   After feature changes   Query (by module/tag)                Variable

Module Summary

  • Variations let you run one test across multiple configurations (browsers, languages, environments).
  • Regular expressions enable pattern-based verification for dynamic content.
  • Graphics testing handles visual verification through screenshots, regions, and OCR. Use absolute checks for shared images and relative checks for context-specific ones.
  • Project subscription enables reuse of common test assets across multiple projects.
  • Test Suites organize execution: use predefined suites for critical paths and query-based suites for dynamic selection.