Lecture 1 - Introduction and Recent Advances
Lecture 2 - Introduction to Natural Language Processing
Lecture 3 - Introduction to Statistical Language Models
Lecture 4 - Statistical LM: Advanced Smoothing and Evaluation
Lecture 5 - Introduction to Deep Learning
Lecture 6 - Introduction to PyTorch
Lecture 7 - Word Representation: Word2Vec and fastText
Lecture 8 - Word Representation: GloVe
Lecture 9 - Tokenization Strategies
Lecture 10 - Neural Language Models: CNN and RNN
Lecture 11 - Neural Language Models: LSTM and GRU
Lecture 12 - Sequence-to-Sequence Models
Lecture 13 - Decoding Strategies
Lecture 14 - Attention in Sequence-to-Sequence Models
Lecture 15 - Introduction to Transformer: Self and Multi-Head Attention
Lecture 16 - Introduction to Transformer: Positional Encoding and Layer Normalization
Lecture 17 - Implementation of Transformer using PyTorch
Lecture 18 - Pre-Training Strategies: ELMo, BERT
Lecture 19 - Pre-Training Strategies: Encoder-decoder and Decoder-only Models
Lecture 20 - Introduction to Hugging Face
Lecture 21 - Instruction Tuning
Lecture 22 - Prompt-based Learning
Lecture 23 - Advanced Prompting and Prompt Sensitivity
Lecture 24 - Alignment of Language Models - I
Lecture 25 - Alignment of Language Models - II
Lecture 26 - Knowledge and Retrieval: Knowledge Graph
Lecture 27 - Knowledge and Retrieval: Knowledge Graph Completion and Evaluation
Lecture 28 - Knowledge and Retrieval: Translation and Rotation Models
Lecture 29 - Parameter Efficient Fine-Tuning (PEFT)
Lecture 30 - Quantization, Pruning and Distillation
Lecture 31 - An Alternate Formulation of Transformers: Residual Stream Perspective
Lecture 32 - Interpretability Techniques
Lecture 33 - Knowledge and Retrieval: Multiplicative Models
Lecture 34 - Knowledge and Retrieval: Modeling Hierarchies
Lecture 35 - Knowledge and Retrieval: Temporal Knowledge Graphs
Lecture 36 - Responsible LLMs
Lecture 37 - Conclusion: Expert Panel Discussion