Bridging Standard Metrics & Enterprise Needs
ModelOp Center provides a comprehensive suite of out-of-the-box (OOTB) monitors. However, enterprise requirements often demand unique calculations. This guide introduces the capabilities of custom Python-based monitors.
The Developer Value
Write standard Python code using Data Science libraries and have it automatically integrated into an enterprise-grade governance platform. No complex API integrations required.
The Governance Value
Ensures that no matter how complex the model metric, the "evidence" is always captured in a standardized, auditable format for risk and compliance.
Monitors by Model Modality
Custom monitors can be built to supplement any of these OOTB categories based on your domain.
Data Science Metrics Catalog
Hover or click on any metric pill below to view its algorithmic definition and dual-persona insights. These definitions guide standard and custom monitor creation.
1. Generative AI / NLP Validations
Secures conversational agents and generative models against hallucinations, toxic output, and data leakage.
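As a hedged illustration of this category, a custom monitor might scan generated responses for data leakage with simple patterns. The regexes and function name below are hypothetical; a production monitor would use a dedicated PII-detection library.

```python
import re

# Hypothetical PII-like patterns for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pii_leakage_rate(responses):
    """Fraction of generated responses containing a PII-like pattern."""
    if not responses:
        return 0.0
    flagged = sum(
        1 for text in responses
        if any(p.search(text) for p in PII_PATTERNS.values())
    )
    return flagged / len(responses)
```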
2. Ethical Fairness & Bias
Evaluates model behavior disparities against protected classes to ensure regulatory compliance.
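A minimal sketch of one common disparity metric, disparate impact (the "four-fifths rule"). Column and group names here are placeholders supplied by the caller, not a ModelOp API.

```python
import pandas as pd

def disparate_impact(df, group_col, outcome_col, privileged, protected):
    """Ratio of favorable-outcome rates (protected / privileged).
    Values below 0.8 trip the common four-fifths-rule threshold."""
    def rate(group):
        return df.loc[df[group_col] == group, outcome_col].mean()
    return float(rate(protected) / rate(privileged))
```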
3. Regression & Credit Risk
Assesses continuous prediction errors and rank-ordering capabilities, crucial for financial/credit models.
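For continuous prediction error, a custom monitor typically reduces to a few standard error statistics. A minimal NumPy sketch:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Basic continuous-error metrics for a regression monitor."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    return {
        "mae": float(np.mean(np.abs(err))),   # mean absolute error
        "rmse": float(np.sqrt(np.mean(err ** 2))),  # root mean squared error
    }
```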
4. Classification Performance
Evaluates discrete prediction models. Requires schemas mapping score_column and label_column.
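A sketch of how the mapped score_column and label_column feed a classification monitor: threshold the scores, then compare against the labels. The 0.5 threshold is an assumed default.

```python
import numpy as np

def classification_metrics(scores, labels, threshold=0.5):
    """Compare thresholded score_column values against label_column values."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    preds = (scores >= threshold).astype(int)
    tp = int(np.sum((preds == 1) & (labels == 1)))
    fp = int(np.sum((preds == 1) & (labels == 0)))
    fn = int(np.sum((preds == 0) & (labels == 1)))
    return {
        "accuracy": float(np.mean(preds == labels)),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```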
5. Data & Concept Drift
Detects shifts in input distributions over time by comparing baseline data against a sample slice.
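The baseline-versus-slice comparison described above is often scored with the Population Stability Index (PSI). A minimal sketch for one numeric column:

```python
import numpy as np

def psi(baseline, sample, bins=10):
    """Population Stability Index between a baseline column and a recent slice.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    eps = 1e-6  # guard against division by / log of zero in empty bins
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + eps
    s_pct = np.histogram(sample, bins=edges)[0] / len(sample) + eps
    return float(np.sum((s_pct - b_pct) * np.log(s_pct / b_pct)))
```

Note that the sample is binned with the baseline's edges, so values far outside the baseline range fall out of the histogram; a production monitor would add explicit overflow bins.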
Enterprise Monitor Selection Pathway
Selecting the right AI monitor requires mapping the technical model type directly to enterprise business outcomes. Use this dynamic tree to navigate from your raw data state to the specific ModelOp monitors required.
Node Inspector
Model Selection (Ctrl+Click for multi-select)
Pathway Filters
Toggle model types below to dynamically generate the recommended monitor pathway branches.
Industry Context Presets
Disclaimer: This is an enablement tool intended for guidance and is not to be used for final or automated business decision making.
Key Concepts: Execution Architectures
Interact with the architecture diagrams below. Toggle between standard monitoring and agentic LLM patterns.
Node Inspector
Execution Process (Ctrl+Click for multi-select)
Select a node in the diagram to view its details.
- Process initiated (via UI/API)
- Metrics job created (via MLC)
- Job sent to Runtime
- Runtime loads datasets & code
- Runtime executes Python source
- Output yielded as JSON
- Model Test Result attached
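The execution steps above can be sketched from the monitor author's side. The bare "# modelop.metrics" tag and the single-DataFrame signature are assumptions drawn from the metrics() entry-point convention; verify them against your ModelOp Center version.

```python
import json
import pandas as pd

# modelop.metrics  (tag assumed; the runtime scans for the metrics() entry point)
def metrics(df: pd.DataFrame):
    # Step: runtime executes this Python source against the loaded dataset.
    result = {
        "row_count": int(len(df)),
        "null_rate": float(df.isna().mean().mean()),
    }
    # Step: output is yielded as a JSON-serializable dict, which the platform
    # serializes and attaches as the Model Test Result.
    yield result

# Local smoke test of the contract:
print(json.dumps(next(metrics(pd.DataFrame({"score": [0.1, None, 0.7]})))))
```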
Node Inspector
Agentic Architecture (Ctrl+Click for multi-select)
Select a node in the diagram to view its details.
- Autonomous reasoning & planning
- Tool and API execution (MCP)
- Multi-agent orchestration
- Guardrail validations
- Continuous LLM monitoring
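As a toy illustration of the guardrail-validation node above (not a ModelOp API), an output-side guardrail can be as simple as a blocklist check applied before an agent's response reaches the user:

```python
# Hypothetical blocklist; real guardrails use classifiers and policy engines.
BLOCKLIST = {"password", "api key", "social security"}

def guardrail_check(output: str) -> dict:
    """Flag agent outputs that match a sensitive-term blocklist."""
    hits = [term for term in BLOCKLIST if term in output.lower()]
    return {"passed": not hits, "violations": hits}
```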
Artifact Explorer & Generator
A custom monitor is defined by specific files in a Git repository. Explore the required structure and generate contextual Data Science code boilerplates.
custom_metrics.py
Primary Model Source Code
metadata.json
Monitor Classification Meta
required_assets.json
Input Data Definitions
custom_metrics.py
The algorithmic brain. Use the pills below to generate boilerplate logic for different Data Science use cases.
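A hedged custom_metrics.py boilerplate for one common shape: a baseline dataset compared against a recent slice, as declared in required_assets.json. The two-DataFrame signature and the "# modelop.metrics" tag are assumptions based on the metrics() entry-point convention; confirm the exact signature in the ModelOp documentation.

```python
import pandas as pd

# modelop.metrics  (entry-point tag assumed; see onboarding notes below)
def metrics(baseline: pd.DataFrame, comparator: pd.DataFrame):
    """Compare a recent data slice against the training baseline and
    yield per-column mean shifts as the Model Test Result payload."""
    out = {}
    for col in baseline.select_dtypes("number").columns:
        base_mean = baseline[col].mean()
        out[f"{col}_mean_shift"] = float(comparator[col].mean() - base_mean)
    yield out
```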
Onboarding Roadmap
Follow this interactive guide to promote your custom monitor from a local IDE script into a production-ready ModelOp asset.
Connect Git Repository
Import your custom code via the ModelOp UI. Navigate to Monitors > Add Monitor and select "Git".
- Provide the Repository URL and target Branch.
- Assign an Access Group to control viewing permissions.
- The system automatically scans for the metrics() entry point.
Freeze a Snapshot
Snapshots create an immutable version of your monitor code linked to a specific commit.
This guarantees production stability, ensuring that subsequent commits to the Git branch don't silently alter or break actively scheduled tests.
Map Data & Execute
Attach the monitor snapshot to your Business Model's "Monitoring" tab.
- The UI will prompt you to map specific data assets as defined in your required_assets.json.
- Click Play to spawn the Job and yield your metrics.