Open AI Testing

In the AI era, the interesting artifacts are prompts and skills, not code.

Open Source AI Skills

Open source AI testing agents and skills that find bugs, generate user feedback, and create test cases

View on GitHub ↗

Antigravity IDE
Claude Code
Claude CoWork
Claude Chat

AI-First Browser Automation

The future of browser automation for AI agents and developers

Vibium Logo

Vibium

An AI-first 'Selenium 5' being built by the same person who created Selenium, and itself vibe coded using Claude.

Created by Jason Huggins

⚠️ Note: Ready for experimentation, but does not yet offer backwards compatibility or functional parity with Selenium.

Browse the Open Testing Agents

Test Prompts

Analysis of static artifacts like screenshots, logs, DOMs, and content

Sharon - Security Tester
Pete - Privacy Tester
Mia - Usability Tester
Jason - GenAI Code Tester
Alejandro - Accessibility Tester
Fatima - Error Message Tester
Sophia - Content Quality Tester
Tariq - Performance Tester
Hiroshi - WCAG Compliance Tester
Marcus - OWASP Security Tester
Zanele - GDPR Compliance Tester
Mei - Search Box Tester
Diego - AI Chatbot Tester
Leila - Search Results Tester
Kwame - Product Details Tester
Zara - News Content Tester
Priya - Shopping Cart Tester
Yara - Social Profiles Tester
Hassan - Checkout Tester
Amara - Social Feed Tester
Yuki - Homepage & Landing Pages Tester
Anika - Contact Page Tester
Mateo - Pricing Page Tester
Zoe - About Page Tester
Zachary - Video Player Tester
Sundar - Legal Policies Tester
Samantha - Careers & Jobs Tester
Richard - Forms Tester
Ravi - Booking Tester
Rajesh - Cookie Consent Management Tester

How to Run Tests

testers.ai makes it as easy as a single click to run the open testing agents, or your own custom agents, against your website.

ChatGPT
Claude
Gemini
Copilot

Manually via AI ChatBot

Copy the prompt text from any testing agent and paste it into your AI chatbot, then attach the relevant artifacts (screenshots, logs, network activity, DOM, etc.).
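The manual workflow above can be sketched in a few lines: combine an agent's prompt with the captured artifacts into one message ready to paste into a chatbot. This is a minimal illustration; the section labels and artifact names below are hypothetical, not a format the site prescribes.

```python
# Hypothetical helper: merge a testing agent's prompt with captured
# artifacts into a single chatbot-ready message. The "--- name ---"
# separators are illustrative only.
def build_chatbot_message(agent_prompt: str, artifacts: dict[str, str]) -> str:
    sections = [agent_prompt.strip()]
    for name, content in artifacts.items():
        sections.append(f"--- {name} ---\n{content.strip()}")
    return "\n\n".join(sections)

message = build_chatbot_message(
    "You are Sharon, a security tester. Review the artifacts for issues.",
    {
        "Console log": "Mixed Content: http://cdn.example.com/app.js blocked",
        "DOM snippet": '<form action="http://example.com/login" method="post">',
    },
)
```

The same assembled text works in any of the chatbots listed above, since each accepts a pasted prompt plus attached artifacts.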

AI Bug Reports

Traditional bug reports are often incomplete, lack context, and require significant investigation. Open, AI-generated bug reports provide comprehensive analysis with actionable solutions.

Key Goals of OpenTest.AI Bug Format

Maximum Context

Include as much relevant context as possible - console logs, network calls, page elements, and user interactions.

AI Fix Prompts

Provide an AI-prompt that can be used to fix the issue, making it easy for developers to understand and implement solutions.

Balanced Analysis

Argue for and against why this is a bug, providing comprehensive reasoning from multiple perspectives.

Priority Judgment

Automatically determine issue priority based on impact, frequency, and user experience considerations with detailed reasoning.

Stateful Reviews

Issues are stateful with human ratings, comments, and expert review capabilities for collaborative improvement.

Smart Routing

Automatically suggest which type of engineer should handle each issue for optimal resolution.

OpenTest.AI Bug Schema Fields

Each bug object in the OpenTest.AI format contains the following fields:

| Field | Type | Description |
|---|---|---|
| bug_title | string | Short, descriptive title of the bug |
| bug_type | array | Categories (e.g. "usability", "WCAG", "security") |
| bug_confidence | integer | 1–10 score reflecting confidence it's a real bug |
| bug_priority | integer | 1–10 score indicating impact/severity |
| bug_reasoning_why_a_bug | string | Explanation of why this is considered a bug |
| bug_reasoning_why_not_a_bug | string | Counterargument, acknowledging uncertainty |
| suggested_fix | string | Recommended fix or mitigation strategy |
| bug_why_fix | string | Justification for why this should be fixed |
| what_type_of_engineer_to_route_issue_to | string | Suggested role (e.g. "Frontend Engineer") |
| possibly_relevant_page_console_text | string/null | Captured browser console text (if relevant) |
| possibly_relevant_network_call | string/null | Relevant network request URL |
| possibly_relevant_page_text | string/null | Snippet of page text related to the bug |
| possibly_relevant_page_elements | string/null | DOM element info (e.g. tag, href, id) |
| tester | string | Name of the human/AI tester who found it |
| byline | string | Title or role of the tester |
| image_url | string | (Optional) Image avatar of the tester |

Raw JSON Schema

{
  "bug_title": "string",
  "bug_type": "array",
  "bug_confidence": "integer",
  "bug_priority": "integer",
  "bug_reasoning_why_a_bug": "string",
  "bug_reasoning_why_not_a_bug": "string",
  "suggested_fix": "string",
  "bug_why_fix": "string",
  "what_type_of_engineer_to_route_issue_to": "string",
  "possibly_relevant_page_console_text": "string|null",
  "possibly_relevant_network_call": "string|null",
  "possibly_relevant_page_text": "string|null",
  "possibly_relevant_page_elements": "string|null",
  "tester": "string",
  "byline": "string",
  "image_url": "string"
}

Community

Help expand our collection of AI testing agents by submitting your own specialized testing agent. Join our community of developers building the future of automated testing.

🏆 Top Experts

Jason Arbon
🥇 1
23 Agent Definitions
Expert
🥈
hopefully you

Leaderboard updated monthly, and you can share a badge on your LinkedIn.

Join the OpenTest.AI Community

Become a member of our growing community of AI testing professionals

early but growing fast!

100s

Community Members

23

Test Prompts

44

Dynamic Agents

1000s

Webpages Tested

Contribute to the Community

Help expand our collection of AI testing agents

Submit Test Prompts

Static prompts analyze static artifacts like screenshots, logs, DOMs, and content.

Submit Dynamic Agent Definition

Dynamic prompts are interactive tests with sequences of actions and assertions.
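A dynamic agent, as described above, is a sequence of actions interleaved with assertions. The sketch below shows one hypothetical way to express that as data; the actual submission format is defined by the community, and the step names and fields here are illustrative only.

```python
# Hypothetical dynamic agent definition: an ordered list of steps, where
# each step is either an interactive action or an assertion. The field
# names ("action", "assert", "target", "value") are invented for this sketch.
dynamic_agent = {
    "name": "Checkout smoke test",
    "steps": [
        {"action": "navigate", "target": "https://example.com/cart"},
        {"action": "click", "target": "button#checkout"},
        {"assert": "page_contains", "value": "Shipping address"},
    ],
}

def split_steps(agent: dict) -> tuple[list[dict], list[dict]]:
    """Separate interactive actions from assertions in an agent's steps."""
    actions = [s for s in agent["steps"] if "action" in s]
    checks = [s for s in agent["steps"] if "assert" in s]
    return actions, checks
```

Keeping actions and assertions in one ordered list preserves the interleaving that makes a dynamic test meaningful: each assertion checks the state produced by the actions before it.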

Our Sponsors

We're grateful to our sponsors who help make OpenTest AI possible. Their support enables us to provide free resources and tools to the testing community.


OpenTest.AI Community Charter

Jason Arbon
Community Founder

Board Members

Jason Arbon

President & Founding Board Member

LinkedIn

Phil Lew

Founding Board Member

LinkedIn

Jonathon Wright

Founding Board Member

LinkedIn

1. Mission & Purpose

OpenTest AI exists to advance the practice of testing in two major areas:

  • Testing AI-Based Systems – developing methods, strategies, and resources to evaluate the safety, quality, and reliability of AI models and applications (e.g., LLMs, generative AI systems, and ML-powered services).
  • Applying AI to Test Other Systems – using AI-driven approaches, tools, and agents to improve the testing of traditional software, platforms, and digital products.

The community provides a free, accessible, and collaborative environment where practitioners and researchers can share strategies, resources, and tools across both domains.

The community aims to:

  • Create and share AI quality strategies for both AI-based systems and AI-enabled testing.
  • Provide resources such as prompt libraries, evaluation suites, test plans, and AI-driven testing utilities.
  • Advance best practices for AI quality, safety, and reliability in both categories.
  • Discuss future-facing issues before they become widespread, spanning both how AI is tested and how AI changes testing itself.
  • Explore non-technical topics, including how teams adopt AI responsibly in development and quality workflows.

OpenTest AI is built on the principle of being practical, rigorous, and grounded, avoiding hype and self-promotion in favor of contributions that are universally useful.

2. Community Values

  • Practicality: Share strategies and tools that can be applied to both testing AI and using AI for testing.
  • Openness: Ensure resources are free and accessible to all.
  • Rigor: Promote practices backed by evidence, research, or real-world application.
  • Integrity: Avoid marketing-driven agendas or personal recognition-seeking.
  • Collaboration: Encourage contributions across both domains from academia, industry, and practitioners.

3. Governance Structure

3.1 Founding President

Jason Arbon serves as the Founding President of OpenTest AI.

  • The President has authority to appoint and remove Board members.
  • The President holds 51% of all voting power and maintains formal veto power over Board and community decisions, including membership and sponsorship.
  • This structure is intended as a temporary stewardship model to protect the mission and ensure stability in the early stages.

3.2 Board of Directors

  • Board members are appointed by the President.
  • Responsible for guiding direction, governance, and community alignment.
  • Meet monthly, with topic-specific sessions as needed.

4. Data Licensing

All user-contributed content (test cases, bug reports, prompts, LLM tests, and ratings) is shared under the CC0 1.0 Universal (Public Domain Dedication) license.

  • Contributors dedicate their content to the public domain, waiving all copyright and related rights.
  • Anyone can freely use, modify, distribute, and build upon this content for any purpose, without attribution or restrictions.
  • This permissive licensing model encourages maximum reuse and collaboration within the testing community.
  • By submitting content, contributors confirm they have the right to dedicate it to the public domain.

4.1 Disclaimer & Limitation of Liability

THE DATA AND CONTENT ON THIS SITE ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, ACCURACY, COMPLETENESS, OR NON-INFRINGEMENT.

No Warranty: OpenTest AI, its operators, contributors, and sponsors make no representations or warranties regarding the accuracy, reliability, completeness, or suitability of any test cases, bug reports, analysis, prompts, or other content on this site.

User Responsibility: Users are solely responsible for evaluating, validating, and verifying any content before use. All content should be reviewed and tested in appropriate environments before application to production systems.

No Liability: To the fullest extent permitted by law, OpenTest AI, its operators, contributors, sponsors, and affiliates shall not be liable for any direct, indirect, incidental, special, consequential, or punitive damages arising from the use, misuse, or inability to use any content from this site, including but not limited to damages for loss of profits, data, business interruption, or other intangible losses.

No Endorsement: The presence of content on this site does not constitute an endorsement, recommendation, or guarantee of its accuracy, effectiveness, or safety. Users should exercise independent judgment and professional expertise when applying any content.

Third-Party Content: Content is provided by community contributors. OpenTest AI does not verify, validate, or guarantee the accuracy, completeness, or safety of user-contributed content.

Use at Your Own Risk: By using any content from this site, you acknowledge that you do so at your own risk and agree to hold OpenTest AI and its operators harmless from any claims, damages, or liabilities arising from such use.

5. Membership & Participation

5.1 Lurkers

  • Open to all, no registration required.
  • Free access to resources and open-source artifacts.

5.2 Members

  • Open to all with free registration.
  • Benefits: access to updates, ability to rate artifacts, public membership badges.
  • Expectations: rate artifacts, maintain civility, avoid hype or spam.

5.3 Contributors

  • Submit artifacts (prompt libraries, evaluation suites, strategies, tools).
  • Contributions reviewed to ensure contributor owns or has rights to share, contributions are open-sourced, and no proprietary or confidential content is included.
  • Contributors must provide a real name and a public profile link (e.g., LinkedIn, GitHub, website).
  • This attribution is published alongside the artifact to ensure accountability, discourage spam or malicious submissions, and provide important context for evaluating contributions.

6. Code of Conduct (Short Form)

  • Be Civil & Respectful – Treat others constructively.
  • No Hype or Spam – Avoid self-promotion or unsubstantiated claims.
  • Stay Practical & Useful – Focus on actionable, broadly valuable contributions.
  • Share Responsibly – Only submit content you own or can open-source.
  • Openness & Integrity – Keep the mission centered on AI quality.

7. Benefits & Recognition

  • Lurkers: Free use of shared resources.
  • Members: Membership badges, ratings milestone badges, opt-in updates.
  • Contributors: Contributor badges, peer-reviewed recognition, public attribution (name + link), résumé visibility.
  • Board Members: Elevated badges and leadership recognition.
  • Sponsors & Technical Partners: Public acknowledgment for enabling community value.

8. Deliverables & Outputs

OpenTest AI produces and maintains resources that are practical, open, and reusable across both categories:

  • Prompt Libraries – for probing AI models and for generating AI-assisted tests.
  • Evaluation Suites & Test Plans – for assessing AI quality and for validating AI-based test automation.
  • Strategies & Best Practices – covering both how to test AI systems and how to adopt AI in testing workflows.
  • Case Studies & Reports – real-world examples from both focus areas.
  • Research & News Updates – covering developments in both AI-system evaluation and AI-assisted QA.
  • Community Events & Workshops – bringing together practitioners from both categories.
  • Member Directory – recognition of contributors and their domain of expertise.

9. Community Platforms & Infrastructure

  • Website – central hub for artifacts and updates.
  • Discord Forum – main space for discussions and collaboration.
  • LinkedIn Community – professional networking and outreach.
  • Twitter/X – updates, highlights, announcements.
  • GitHub (future) – long-term repository for open-source contributions.

10. Growth Roadmap

  • Stage 1 – Stewardship: initial stewardship by testers.ai, ensuring balance between the two domains.
  • Stage 2 – Shared Governance: growth in contributors and sponsors across both categories, with balanced recognition.
  • Stage 3 – Independence: nonprofit/foundation spin-off, positioned as the global hub for both testing AI and AI in testing.
