Multi-task learning (MTL) trains a single model to solve multiple related tasks at once, leveraging shared representations to improve generalization and sample efficiency across tasks. Multi-label learning tackles problems where each instance can be assigned several labels simultaneously, unlike multi-class classification, which assigns exactly one. Both paradigms share a core insight: explicitly modeling the relationships between outputs, whether tasks or labels, improves learning efficiency and prediction accuracy. The key challenge is balancing competing objectives: tasks can exhibit positive transfer (helping each other) or negative transfer (hurting performance), while labels can be positively correlated, negatively correlated, or independent. Successful approaches must adapt to these relationships dynamically during training.
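To make both ideas concrete, here is a minimal sketch (assuming PyTorch; the class and variable names are illustrative, not from any particular library or paper) of hard parameter sharing, the simplest MTL architecture: a shared encoder feeds one head per task. The multi-label head uses independent per-label sigmoids via binary cross-entropy, while the multi-class head uses a single softmax via cross-entropy.

```python
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Hard parameter sharing: one shared encoder, one output head per task."""

    def __init__(self, in_dim: int, hidden_dim: int, task_out_dims: list[int]):
        super().__init__()
        # Shared representation learned jointly by all tasks.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
        )
        # One task-specific linear head per task.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, d) for d in task_out_dims
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        z = self.encoder(x)  # shared features used by every head
        return [head(z) for head in self.heads]

# Task A: multi-label with 5 labels; Task B: multi-class with 3 classes.
model = SharedEncoderMTL(in_dim=32, hidden_dim=64, task_out_dims=[5, 3])
x = torch.randn(8, 32)
logits_a, logits_b = model(x)

# Multi-label: each label is an independent binary decision (sigmoid + BCE),
# so an instance may receive zero, one, or several labels.
y_a = torch.randint(0, 2, (8, 5)).float()
loss_a = nn.BCEWithLogitsLoss()(logits_a, y_a)

# Multi-class: exactly one class per instance (softmax + cross-entropy).
y_b = torch.randint(0, 3, (8,))
loss_b = nn.CrossEntropyLoss()(logits_b, y_b)

# Joint objective: gradients from both tasks update the shared encoder.
loss = loss_a + loss_b
loss.backward()
```

The unweighted sum `loss_a + loss_b` is where the balancing problem shows up in practice: if one task's loss dominates, the shared encoder drifts toward that task (negative transfer for the others), which is why adaptive task-weighting schemes exist.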