AI-Powered Test Generation: From Theory to Hands-On Practice

Whether you work in automotive, industrial automation, robotics, railway systems, or aerospace and defense, this workshop shows you how to use AI-based testing effectively and how to keep AI-generated tests explainable, deterministic, and economically reliable. You will learn how to gain the benefits of AI without losing transparency or control.

Venue:
Virtual

Date/Time:
January 21, 2026
11:00 AM - 2:00 PM EST

Date/Time:
January 28, 2026
9:00 AM - 12:00 PM CET

Overview

A three-hour interactive session that takes you from the real problems of AI-generated tests to hands-on mastery in a live cloud-hosted environment. You will learn why AI-generated tests fail, how hallucinations slip into expected results, how to detect them, and how the right test architecture turns AI from a risk into a reliable accelerator.

What Attendees Will Learn

  • Understand where AI-generated tests truly excel, especially rapid input generation, and why they fail when expected outcomes are unclear or inconsistent.
  • Learn how explainability is created through a three-layer architecture that cleanly separates stimulation, system behavior, and behavioral judgment, making AI-generated tests reviewable, debuggable, and maintainable.
  • Apply the Full-Expectation-YET method to close the gap between stimulation and verification. This ensures that all relevant outputs are validated over time, not just triggered.
  • Identify and control the biggest economic and safety risk in AI-based testing, false positives: tests that appear correct while hiding real defects.
  • See how automatic traceability keeps regenerated tests and expectations consistently linked to their origins as software evolves.
  • Experience all concepts directly in TPT and try them yourself, including AI-based test data generation, execution and analysis.
  • Learn how to analyze unexpected results and reliably distinguish genuine defects from wrong expectations, missing expectations, and logical inconsistencies in AI-generated tests.

Presenter

Stefan Lachmann
