Imitation Learning (IL) enables agents to learn policies by observing and mimicking expert behavior, making it a practical alternative to reinforcement learning when reward engineering is difficult or expert demonstrations are plentiful. Rather than requiring an explicit reward signal, IL methods extract patterns from state-action trajectories to train policies that replicate expert performance. A key challenge is distributional shift: small errors push the learned policy into states unseen during training, where its mistakes compound and its trajectory diverges from the expert's. The field addresses this with interactive dataset aggregation (DAgger), adversarial imitation (GAIL), and offline techniques that learn from fixed logged datasets without further environment interaction.
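To make the compounding-error problem and the DAgger remedy concrete, here is a minimal sketch under stated assumptions: a hypothetical one-dimensional toy environment, a proportional controller standing in for the expert, and a linear policy fit by least squares as the simplest form of behavioral cloning. None of these names or dynamics come from a particular library; they are illustrative only.

```python
# Illustrative sketch: behavioral cloning plus a DAgger-style aggregation loop.
# The toy environment, the "expert" controller, and all function names are
# assumptions for demonstration, not a fixed API.
import numpy as np

rng = np.random.default_rng(0)

def expert(state):
    """Hypothetical queryable expert: drive the state toward zero."""
    return -0.5 * state

def env_step(state, action):
    """Toy 1-D dynamics with a little noise."""
    return state + action + 0.05 * rng.standard_normal()

def fit_linear(states, actions):
    """Behavioral cloning as least-squares regression onto expert actions."""
    X = np.stack([states, np.ones_like(states)], axis=1)  # features: [s, 1]
    w, *_ = np.linalg.lstsq(X, actions, rcond=None)
    return lambda s: w[0] * s + w[1]

# --- Plain behavioral cloning: train only on states the expert visits ---
states, actions = [], []
s = 2.0
for _ in range(50):
    a = expert(s)
    states.append(s); actions.append(a)
    s = env_step(s, a)
policy = fit_linear(np.array(states), np.array(actions))

# --- DAgger: roll out the *learner*, but label visited states with the expert ---
# Training data now covers the states the learned policy actually reaches,
# which is what counteracts compounding errors under distributional shift.
for _ in range(5):
    s = 2.0
    for _ in range(50):
        states.append(s)
        actions.append(expert(s))    # expert label on a learner-visited state
        s = env_step(s, policy(s))   # the learner's action drives the rollout
    policy = fit_linear(np.array(states), np.array(actions))
```

The key design point the sketch highlights is who controls the trajectory: behavioral cloning only ever sees expert-visited states, while the DAgger loop collects expert labels along the learner's own rollouts, so the aggregated dataset matches the state distribution the deployed policy will actually encounter.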