With the rapid development of artificial intelligence (AI) and machine learning technologies, AI-based prediction models have become increasingly prevalent in the medical field. However, the PROBAST tool used to appraise prediction model studies has shown growing limitations when applied to models built with AI technologies. Moons and colleagues therefore updated and expanded PROBAST into the PROBAST+AI tool, which is suitable for appraising prediction model studies based on both AI methods and regression methods. It covers four domains, namely participants and data sources, predictors, outcomes, and analysis, allowing systematic assessment of quality in model development, risk of bias in model evaluation, and applicability. This article interprets the content and evaluation process of the PROBAST+AI tool, aiming to provide a reference and guidance for researchers in China who use this tool.
This study introduces how to use PROBAST (Prediction model Risk Of Bias ASsessment Tool) to evaluate the risk of bias and applicability of diagnostic or prognostic prediction model studies, covering the tool's background, scope of application, and use. The tool involves four domains: participants, predictors, outcome, and analysis. Risk of bias is assessed across all four domains, whereas applicability is assessed in the first three. PROBAST provides a standardized approach to the critical appraisal of diagnostic or prognostic prediction model studies, helping reviewers screen eligible literature for data analysis and establish a scientific basis for clinical decision-making.
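As a rough illustration of this domain-based workflow, the Python sketch below records hypothetical domain judgements and combines them into overall ratings. The domain names follow the description above; the aggregation rule (any "high" domain yields an overall "high" rating, all "low" yields "low", otherwise "unclear") reflects commonly cited PROBAST guidance and is included here only as an illustrative assumption, not as the authoritative rule.

```python
# Illustrative sketch only; not part of PROBAST itself.
# Domain judgements are hypothetical examples.

def overall_judgement(domain_ratings):
    """Combine per-domain ratings ('low'/'high'/'unclear') into an overall rating."""
    if any(r == "high" for r in domain_ratings.values()):
        return "high"
    if all(r == "low" for r in domain_ratings.values()):
        return "low"
    return "unclear"

# Risk of bias is judged across all four domains ...
risk_of_bias = {
    "participants": "low",
    "predictors": "low",
    "outcome": "unclear",
    "analysis": "high",
}

# ... while applicability is judged in the first three domains only.
applicability = {k: risk_of_bias[k] for k in ("participants", "predictors", "outcome")}

print("Overall risk of bias:", overall_judgement(risk_of_bias))            # high
print("Overall applicability concern:", overall_judgement(applicability))  # unclear
```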
The development and validation of clinical prediction models based on artificial intelligence (AI) and machine learning (ML) methods have become increasingly widespread. However, the prediction model risk of bias and applicability assessment tool published in 2019 (PROBAST-2019) has shown notable limitations when applied to such models. An expanded and updated version, the PROBAST+AI tool, was therefore released in 2025. The tool comprises two parts, model development and model evaluation, and aims to comprehensively and systematically assess potential methodological quality issues in model development, risk of bias in model evaluation, and the applicability of models, regardless of the modeling method used. This paper provides a systematic interpretation of the PROBAST+AI items together with case analyses, aiming to guide and assist researchers engaged in related studies and to promote the high-quality development of clinical prediction model research.