Weights & Biases (W&B) is an MLOps platform for tracking machine learning experiments, managing artifacts, orchestrating hyperparameter sweeps, and collaborating across teams. W&B captures metrics, hyperparameters, model checkpoints, and datasets throughout the ML lifecycle, providing visual dashboards, automated comparisons, and reproducibility tools. It integrates seamlessly with popular frameworks (PyTorch, TensorFlow, scikit-learn, Hugging Face) and scales from local development to distributed training. Unlike purely local tools, W&B centralizes experiment history in a cloud or self-hosted backend, enabling teams to compare runs, share insights through Reports, and promote models via Registry—making it essential for collaborative model development and production workflows.
What This Cheat Sheet Covers
This topic spans 12 focused tables and 106 indexed concepts. Below is a complete table-by-table outline, spanning foundational concepts through advanced details.
Table 1: Run Initialization and Configuration
| Method | Example | Description |
|---|---|---|
| `project` | `with wandb.init(project="my-project") as run: ...` | Initialize a W&B run to track experiments in the specified project. Use the context manager to auto-finalize the run. |
| `entity` | `wandb.init(entity="team-name", project="proj")` | Start a run under a specific W&B team or organization entity instead of your personal account. |
| `config` | `wandb.init(config={"learning_rate": 0.01, "epochs": 10})` | Pass hyperparameters and configuration as a dictionary; accessible via `run.config`. |
| `job_type` | `wandb.init(job_type="train")` | Tag the run with a job type (e.g., "train", "eval", "preprocess") to organize runs by workflow stage. |
| `tags` | `wandb.init(tags=["baseline", "v1.0"])` | Attach searchable tags to a run for filtering and grouping in the UI. |
| `notes` | `wandb.init(notes="Experiment with dropout=0.5")` | Add human-readable notes describing the run's purpose or changes. |