
Abstract

The rapid adoption of microservice architecture has made application programming interfaces (APIs) the primary integration mechanism in modern software systems. Accordingly, the quality of API testing depends directly on the completeness and structure of the API specifications available to test engineers. In practice, however, most projects document their APIs as informal plain text in corporate knowledge management systems (Confluence, Google Docs, Notion) without adhering to any unified standard. A systematic analysis of four current ISTQB syllabuses (CTFL v4.0.1, CTAL-TAE v2.0, CT-TAS v1.0, CT-AI v1.0) reveals that none of them defines the minimum required content of a textual endpoint description, despite recognising documentation quality as a measurable characteristic (FL-BO4). Existing research confirms the problem: Uddin and Robillard identified "incompleteness" as the most prevalent failure mode across API documentation, while Murphy et al. reported that specifications are "frequently missing, vague, or outdated" in real development teams. Machine-readable formats such as the OpenAPI Specification address a different audience and assume technical knowledge of YAML or JSON, leaving the gap in informal human-readable documentation unresolved. The purpose of this study is to develop and validate the Minimal API Description Standard (MADS), a structured ten-field template for plain-text API endpoint descriptions in corporate documentation tools, and to demonstrate its impact on software testing quality. MADS organises ten fields into four functional blocks: endpoint identification (HTTP method, URL pattern, operation name), input data (request parameters with types and constraints, request body), output data (successful response structure, error codes with conditions), and security context (authentication model, preconditions and business rules, API version). Fields are classified as mandatory or recommended.

Each field is justified through convergent evidence from the scientific literature and from practical security requirements (OWASP API Security Top 10). Empirical evaluation was conducted on three REST API endpoints of a typical order management service. Test cases were designed using two ISTQB-standard techniques: Boundary Value Analysis (BVA) and Equivalence Partitioning (EP). Three indicators were measured for both an unstructured description (UD) and a MADS-compliant description: the standard Requirement Coverage metric (RC, per ISTQB CTFL v4.0.1 section 5.3.1 and IEEE 829), the applicability of BVA and EP as a binary indicator per parameter, and the total number of test cases. Results show that RC increases from 23% (UD) to 100% (MADS), BVA/EP applicability rises from 25% to 100% of parameters, and the test case count grows from 5 to 26, a 5.2-fold increase achieved exclusively through structured documentation, without additional development resources. Under the unstructured condition, Response Code Coverage was zero for all three endpoints, meaning negative test scenarios were entirely absent. The study further demonstrates that MADS serves as a structural prerequisite for reliable LLM-based test-generation pipelines: structured MADS chunks improve RAG retrieval accuracy and enable deterministic resource access in Model Context Protocol (MCP) agentic architectures. The article proposes that the ISTQB Foundation Level Working Group consider incorporating minimum requirements for informal textual API descriptions into a future revision of the CTFL syllabus. Future research directions include automated MADS compliance validation, empirical correlation studies between MADS adoption and post-release defect rates, and extension of the standard to GraphQL and gRPC APIs.
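To make the abstract's two central ideas concrete, the following sketch models a hypothetical MADS-compliant endpoint description and the Requirement Coverage calculation. The field names, the example endpoint, the mandatory/recommended split, and the requirement counts are all illustrative assumptions; the article itself defines the ten fields across the four blocks (identification, input data, output data, security context).

```python
# Sketch only: a hypothetical MADS-style endpoint record. Field names and the
# mandatory/recommended partition below are assumptions for illustration.
MANDATORY_FIELDS = {
    "http_method", "url_pattern", "operation_name",
    "request_parameters", "success_response", "error_codes",
    "authentication",
}
RECOMMENDED_FIELDS = {"request_body", "preconditions", "api_version"}

endpoint_description = {
    # Block 1: endpoint identification
    "http_method": "POST",
    "url_pattern": "/orders/{orderId}/items",
    "operation_name": "Add item to order",
    # Block 2: input data (types and constraints enable BVA/EP design)
    "request_parameters": [
        {"name": "orderId", "type": "int", "constraints": "1..2147483647"},
        {"name": "quantity", "type": "int", "constraints": "1..100"},
    ],
    "request_body": {"sku": "string, required", "quantity": "int, 1..100"},
    # Block 3: output data (error codes drive negative test scenarios)
    "success_response": {"status": 201, "body": "created order item"},
    "error_codes": {400: "validation failure", 401: "missing token",
                    404: "order not found"},
    # Block 4: security context
    "authentication": "Bearer JWT",
    "preconditions": "order must be in DRAFT status",
    "api_version": "v2",
}

def mads_compliant(desc: dict) -> bool:
    """True when every mandatory MADS field is present and non-empty."""
    return all(desc.get(field) for field in MANDATORY_FIELDS)

def requirement_coverage(covered: int, total: int) -> float:
    """RC as a percentage: testable requirements covered / total."""
    return round(100 * covered / total, 1)

print(mads_compliant(endpoint_description))
# Illustrative counts only: 3 of 13 requirements testable from an
# unstructured description vs. all 13 from a MADS description.
print(requirement_coverage(3, 13))
print(requirement_coverage(13, 13))
```

A compliance check of this kind is also a natural starting point for the automated MADS validation named among the future research directions, since each mandatory field reduces to a presence test over a parsed description.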


Keywords

MADS, documentation, test, standard, description
