Module 8 of 10
Advanced Techniques
Once you have mastered the fundamentals, these advanced techniques will help you handle the edge cases that separate good test suites from great ones.

Dear Marilyn: Our application runs on three different browsers, two operating systems, and has both English and Spanish versions. Do I really need to write separate tests for each combination?
— Overwhelmed in Orlando
Dear Overwhelmed: Absolutely not. What you need is not more tests, but smarter tests. ABT provides several techniques for handling variations without multiplying your test count. Let me introduce you to the Swiss Army knife of test automation.
Variations: One Test, Many Configurations
A variation is a configuration parameter that changes how a test runs without changing what it tests. Think of it as a dial you can turn to switch contexts.
Common Variation Types
Browser Variations
- Chrome, Firefox, Safari, Edge
- Mobile vs. Desktop
- Different screen resolutions
Environment Variations
- Dev, QA, Staging, Production
- Different database configurations
- Cloud regions (US, EU, APAC)
Localization Variations
- Language (EN, ES, FR, DE)
- Currency formats
- Date/time formats
User Variations
- Role-based (Admin, User, Guest)
- Permission levels
- Subscription tiers
How Variations Work
# Test: Login Verification
# Variations: Browser, Language
open_browser(type: [Browser])
set_language(locale: [Language])
navigate_to(page: "Login")
enter_credentials(user: "testuser", password: "secret")
click_login()
check_welcome_message(expected: [Welcome_Text])
| Browser | Language | Welcome_Text |
|---|---|---|
| Chrome | en-US | "Welcome back!" |
| Firefox | es-ES | "¡Bienvenido!" |
| Safari | fr-FR | "Bienvenue!" |
One test, three data rows, three executions: each row pairs one browser with one language. Cross the Browser and Language variations independently instead, and the same single test covers all nine combinations (3 browsers × 3 languages). Without variations, you would need to write nine separate tests.
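The mechanics can be sketched in plain Python: one test function driven by a table of variation values. The function and data below are hypothetical stand-ins for the ABT actions in the example above, not a real framework API.

```python
from itertools import product

# Hypothetical stand-in for the "Login Verification" test above:
# one function, parameterized by the variation values.
def login_verification(browser, language, expected_welcome):
    # A real framework would drive open_browser, set_language, etc.
    return f"[{browser}/{language}] expect: {expected_welcome}"

# The three data rows from the table: one execution per row.
rows = [
    ("Chrome",  "en-US", "Welcome back!"),
    ("Firefox", "es-ES", "¡Bienvenido!"),
    ("Safari",  "fr-FR", "Bienvenue!"),
]
for browser, language, welcome in rows:
    print(login_verification(browser, language, welcome))

# Crossing the variations independently yields 3 x 3 = 9 combinations.
browsers  = ["Chrome", "Firefox", "Safari"]
languages = ["en-US", "es-ES", "fr-FR"]
combos = list(product(browsers, languages))
print(len(combos))  # 9
```

Either way, the test logic is written once; only the data grows.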
Regular Expressions: Pattern Matching
Dear Marilyn: My test keeps failing because the order number changes every time. How can I verify "Order #12345 created" when the number is always different?
— Pattern Seeker in Portland
Dear Pattern Seeker: You need to verify the pattern, not the exact value. Regular expressions let you say "I expect 'Order #' followed by some digits, followed by ' created'."
Common Regex Patterns for Testing
Order Numbers
Order #\d+ created
Matches: "Order #12345 created", "Order #1 created", "Order #999999 created"
Timestamps
\d{4}-\d{2}-\d{2}
Matches: "2024-01-15", "2025-12-31"
Email Addresses
[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}
Matches: "user@example.com", "qa.team@company.co.uk"
Currency Values
\$[\d,]+\.\d{2}
Matches: "$1,234.56", "$0.99", "$1,000,000.00"
Using Regex in Checks
# Instead of exact match:
check_message(expected: "Order #12345 created")
# Use pattern match:
check_message(pattern: "Order #\\d+ created")
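In Python terms, the pattern check above maps directly onto the standard `re` module. The `check_message` helper below is an illustrative stand-in for the ABT action, not its real implementation.

```python
import re

# Pattern-based checks for dynamic content.
ORDER_PATTERN = re.compile(r"Order #\d+ created")

def check_message(actual, pattern):
    """Return True if the whole message matches the expected pattern."""
    return pattern.fullmatch(actual) is not None

print(check_message("Order #12345 created", ORDER_PATTERN))  # True
print(check_message("Order #1 created", ORDER_PATTERN))      # True
print(check_message("Order #ABC created", ORDER_PATTERN))    # False

# The currency pattern from the table above works the same way.
CURRENCY = re.compile(r"\$[\d,]+\.\d{2}")
print(bool(CURRENCY.fullmatch("$1,234.56")))                 # True
```

Note the use of `fullmatch` rather than `search`: a check should fail if extra, unexpected text surrounds the message.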
Graphics & Media Testing
Not everything can be verified by reading text. Sometimes you need to verify that an image looks correct, a chart displays properly, or a video plays.
Visual Verification Techniques
1. Screenshot Comparison
Capture a "golden" screenshot and compare future runs against it. Useful for catching unintended visual changes.
2. Region Verification
Check that a specific region of the screen matches expected content. More resilient to layout changes than full-page comparison.
3. OCR Verification
Extract text from images using Optical Character Recognition. Useful for verifying text in charts, graphs, or PDFs.
4. Media Playback
Verify that audio/video plays, reaches expected duration, and does not error out during playback.
Example: Chart Verification
# Verify sales chart displays correctly
navigate_to(page: "Sales Dashboard")
wait_for_chart_load(chart_id: "monthly_sales")
# Visual verification
check_picture(region: "sales_chart", baseline: "expected_chart.png", tolerance: 5%)
# Data verification via OCR
check_chart_label(chart_id: "monthly_sales", label: "January", value_pattern: "\\$[\\d,]+")
Check Picture: Deep Dive
The check_picture action is more sophisticated than a simple screenshot comparison. Understanding its options will help you create robust visual tests.
Absolute vs. Relative Checks
Absolute Checks
Baseline images are stored in a central "Picture Checks" folder. Use for images that should be consistent across all tests.
Relative Checks
Baseline images are stored within the test module. Use for context-specific images that vary by test.
Tolerance Settings
Real-world images rarely match pixel-for-pixel. Tolerance settings let you define acceptable differences.
| Parameter | Description | Example |
|---|---|---|
| tolerance | Percentage of pixels that can differ | tolerance: 5% |
| color_threshold | How different a pixel color can be (0-255) | color_threshold: 10 |
| ignore_regions | Areas to exclude from comparison | ignore: ["timestamp", "ads"] |
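The three parameters in the table can be sketched without any imaging library by treating images as grids of grayscale values. The function and toy data below are an assumption-laden illustration of how such a comparison could work, not the actual check_picture implementation.

```python
def picture_check(baseline, actual, tolerance_pct=5.0, color_threshold=10,
                  ignore_regions=None):
    """Pass if the percentage of differing pixels is within tolerance.

    A pixel counts as 'different' only when its value differs from the
    baseline by more than color_threshold. Pixels listed in
    ignore_regions (a set of (row, col) coordinates) are skipped.
    """
    ignore = ignore_regions or set()
    compared = differing = 0
    for r, row in enumerate(baseline):
        for c, value in enumerate(row):
            if (r, c) in ignore:
                continue
            compared += 1
            if abs(actual[r][c] - value) > color_threshold:
                differing += 1
    return (differing / compared) * 100 <= tolerance_pct

baseline = [[100, 100], [100, 100]]
actual   = [[105, 100], [100, 250]]  # one pixel slightly off, one very off

print(picture_check(baseline, actual, tolerance_pct=5))          # False: 25% of pixels differ
print(picture_check(baseline, actual, ignore_regions={(1, 1)}))  # True: bad pixel excluded
```

Note how the two knobs interact: `color_threshold` decides whether a single pixel counts as different at all, while `tolerance` decides how many such pixels the image can absorb before the check fails.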
Factors That Can "Spoil" Picture Checks
- Dynamic test data (timestamps, IDs)
- Environment differences (fonts, rendering)
- Random elements (ads, recommendations)
- Animation states (loading spinners)
- Anti-aliasing differences
- Screen resolution variations
Project Subscription: Reuse Across Projects
When multiple projects share common functionality, you do not want to duplicate test modules. Project Subscription lets you create a "library" of reusable test assets.
How Subscription Works
Core Library (shared):
- login_actions
- navigation_actions
- common_checks

Project A, Project B, and Project C each subscribe to the Core Library.
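In plain Python terms, subscription behaves like depending on a shared package: the core library defines each action once, and every project calls it as if it were local. The module layout and function names below are invented for illustration.

```python
# --- core library: shared actions, defined once (names are hypothetical) ---
def login(user, password):
    """Shared authentication action used by every subscribing project."""
    return f"logged in as {user}"

def navigate(menu_path):
    """Shared navigation action."""
    return f"at {menu_path}"

# --- project A: "subscribes" to the core, adds its own workflow on top ---
def submit_invoice(user, password, amount):
    steps = [
        login(user, password),             # shared action
        navigate("Billing > Invoices"),    # shared action
        f"invoice created for ${amount}",  # project-specific step
    ]
    return steps

print(submit_invoice("testuser", "secret", 250))
```

The split mirrors the lists below: generic actions live in the core, while business workflows like `submit_invoice` stay in the project that owns them.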
Good Candidates for Sharing
- Authentication actions (login, logout, password reset)
- Navigation actions (menu, breadcrumb, search)
- Common UI checks (error messages, notifications)
- Data setup utilities (create test user, seed data)
Keep Project-Specific
- Business-specific workflows
- Custom UI components
- Project-specific data formats
- Unique validation rules
Test Suites: Organizing Execution
Dear Marilyn: We have hundreds of test modules. How do we decide which tests to run for a nightly build versus a full regression?
— Selecting in Seattle
Dear Selecting: You need Test Suites—logical groupings of test modules that can be executed together. There are two approaches to creating suites, and most teams use both.
Predefined Suites
Manually curated lists of test modules. You explicitly add or remove tests from the suite.
# Smoke Test Suite
- Login_Tests
- Dashboard_Load
- Critical_Workflow
- Logout_Tests
Query-Based Suites
Dynamic selection based on criteria. Tests are included if they match the query conditions.
# All Invoice Tests
WHERE module_name LIKE "Invoice%"
AND priority = "High"
AND last_modified >= "2024-01-01"
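A query-based suite is just a filter over test-module metadata. The sketch below models the query above with a list of dicts; the field names mirror the pseudocode, and the module records are made-up sample data.

```python
from datetime import date

# Hypothetical test-module metadata.
modules = [
    {"name": "Invoice_Create", "priority": "High",   "last_modified": date(2024, 3, 1)},
    {"name": "Invoice_Void",   "priority": "Medium", "last_modified": date(2024, 2, 10)},
    {"name": "Invoice_Email",  "priority": "High",   "last_modified": date(2023, 11, 5)},
    {"name": "Login_Tests",    "priority": "High",   "last_modified": date(2024, 4, 2)},
]

# WHERE module_name LIKE "Invoice%" AND priority = "High"
#   AND last_modified >= "2024-01-01"
suite = [
    m["name"] for m in modules
    if m["name"].startswith("Invoice")
    and m["priority"] == "High"
    and m["last_modified"] >= date(2024, 1, 1)
]
print(suite)  # ['Invoice_Create']
```

Because the suite is recomputed from the query each run, newly added invoice tests join it automatically, which is exactly what a manually curated predefined suite cannot do.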
Common Suite Strategies
| Suite Type | When to Run | Selection Method | Duration |
|---|---|---|---|
| Smoke | Every commit | Predefined (10-20 critical tests) | 5-15 minutes |
| Nightly | Daily overnight | Query (all high + medium priority) | 2-4 hours |
| Full Regression | Weekly / Pre-release | Query (all active tests) | 8+ hours |
| Feature-Specific | After feature changes | Query (by module/tag) | Variable |
Module Summary
- Variations let you run one test across multiple configurations (browsers, languages, environments).
- Regular expressions enable pattern-based verification for dynamic content.
- Graphics testing handles visual verification through screenshots, regions, and OCR. Use absolute checks for shared images and relative checks for context-specific ones.
- Project subscription enables reuse of common test assets across multiple projects.
- Test Suites organize execution: use predefined suites for critical paths and query-based suites for dynamic selection.