<!-- issue-warning -->

> [!WARNING]
> Before submitting a PR, please make sure that:
> - A maintainer has triaged this issue and applied the `ready` label
> - This issue has no assignee
> - No duplicate PR exists
>
> PRs not meeting these requirements may be automatically closed.

### Willingness to contribute

No. I cannot contribute this feature at this time.

### Proposal Summary

MLflow currently provides strong support for experiment tracking, model registry, and LLM/GenAI evaluation via the `mlflow.genai` and `mlflow.evaluate()` APIs. However, for predictive ML production monitoring (specifically data drift detection, data quality validation, and model quality tracking), teams must import and orchestrate external libraries such as EvidentlyAI alongside MLflow, which increases solution complexity, dependency surface, and maintenance overhead. This feature request proposes extending MLflow's native evaluation and logging APIs to support predictive ML monitoring metrics as first-class citizens.
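To make the orchestration overhead concrete, below is a minimal sketch of the workaround this proposal would remove. It assumes the pre-0.7 Evidently `Report`/`DataDriftPreset` API (Evidently's interfaces have changed across versions) and hypothetical file names; the result-dict keys read at the end are likewise version-specific.

```python
import mlflow
import pandas as pd

# Assumption for illustration: the pre-0.7 Evidently API.
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Hypothetical data sources: a training-time baseline and a production batch.
reference = pd.read_parquet("reference.parquet")
current = pd.read_parquet("current.parquet")

# Step 1: run the drift analysis with the external library.
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
result = report.as_dict()

# Step 2: manually translate the external library's output into
# MLflow logging calls; the key paths below depend on the Evidently version.
with mlflow.start_run(run_name="monitoring-batch"):
    drift = result["metrics"][0]["result"]
    mlflow.log_metric("share_of_drifted_columns", drift["share_of_drifted_columns"])
    mlflow.log_dict(result, "evidently_report.json")
```

All of the glue between `report.run()` and the `mlflow.log_*` calls, including the version-specific key lookups, is the per-team maintenance burden that first-class monitoring metrics in MLflow would absorb.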