© 2026 CheatGrid™. All rights reserved.
MLflow Experiment Tracking and Model Registry Cheat Sheet

Updated 2026-05-15

MLflow is an open-source platform for managing the complete machine learning lifecycle, including experiment tracking, reproducible runs, and model packaging. It provides unified APIs for logging parameters, metrics, and artifacts across frameworks, a Model Registry for versioning and promoting models through stages, and flexible deployment options from local serving to cloud platforms. A key insight: MLflow's autologging can capture most training metadata automatically, but custom logging gives you fine-grained control over exactly what gets tracked—and understanding both approaches ensures you capture the right information without clutter.

What This Cheat Sheet Covers

This cheat sheet spans 20 focused tables and 142 indexed concepts. Below is a complete table-by-table outline, from foundational concepts through advanced details.

  • Table 1: Run Management and Context
  • Table 2: Logging Parameters, Metrics, and Artifacts
  • Table 3: Automatic Logging (Autologging)
  • Table 4: Model Logging and Flavors
  • Table 5: Model Signatures and Input Examples
  • Table 6: Model Registry
  • Table 7: Experiment and Run Search
  • Table 8: Dataset Tracking
  • Table 9: MLflow Tracing for LLMs and Agents
  • Table 10: MLflow Projects
  • Table 11: Model Serving and Deployment
  • Table 12: Tracking Server Setup
  • Table 13: Model Evaluation
  • Table 14: System Metrics and Monitoring
  • Table 15: MLflow Client API
  • Table 16: Custom PyFunc Models
  • Table 17: Tags and Metadata
  • Table 18: UI and Visualization
  • Table 19: Export and Migration
  • Table 20: Authentication and Security

Table 1: Run Management and Context

Function: mlflow.start_run
Example:
    with mlflow.start_run():
        mlflow.log_param("alpha", 0.5)
Description: Context manager that creates a new run or resumes an existing run; returns a run object with run_id and experiment_id; supports nesting for parent-child relationships.

Function: mlflow.active_run
Example:
    run = mlflow.active_run()
    run_id = run.info.run_id
Description: Returns the currently active run object, or None if no run is active; useful for retrieving run_id inside a context without explicit assignment.

Function: mlflow.end_run
Example:
    mlflow.end_run()
Description: Explicitly ends the currently active run; called automatically when exiting the start_run context manager; useful for manual run termination.
