Module 10: Classical NLP Approaches
- Cleaning & preprocessing natural language
- Tokenization strategies
- Building text features: BoW, TF-IDF
- Performing sentiment analysis & spam detection
- Named Entity Recognition
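As a taste of the feature-building step, here is a minimal pure-Python sketch of the textbook (unsmoothed) TF-IDF weighting covered in this module; the helper name `tf_idf` and the toy corpus are illustrative, not from any particular library:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of whitespace-tokenized documents."""
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    # Document frequency: in how many documents each term occurs
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        weights.append({
            term: (count / len(tokens)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

docs = ["the cat sat", "the dog barked", "the cat barked"]
w = tf_idf(docs)
# "the" occurs in every document, so its idf term log(3/3) is 0
```

Note how the idf factor zeroes out words that appear everywhere, which is exactly why TF-IDF outperforms raw bag-of-words counts for tasks like spam detection. Production code would use a library vectorizer (e.g. scikit-learn's `TfidfVectorizer`), which also handles smoothing and normalization.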
Module 11: Embeddings & Representation Learning
- Word2Vec (CBOW & Skip-Gram)
- GloVe, FastText
- Contextual embeddings (ELMo, BERT)
- Sentence-level embeddings
- Subword tokenization frameworks
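The skip-gram objective in Word2Vec is trained on (center, context) pairs drawn from a sliding window. A small sketch of that pair-generation step, under the assumption of a fixed symmetric window (function name and example sentence are illustrative):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs as used by skip-gram Word2Vec."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a word is never its own context
                pairs.append((center, tokens[j]))
    return pairs

sent = ["deep", "learning", "loves", "text"]
pairs = skipgram_pairs(sent, window=1)
```

CBOW simply inverts the direction: it predicts the center word from the surrounding context instead of predicting each context word from the center.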
Module 12: RNN, LSTM & GRU Ecosystem
- Sequential modelling concepts
- Handling the vanishing gradient problem
- Building predictive text models using LSTM
- When to use GRU vs LSTM
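The gating idea behind both LSTM and GRU can be shown with a scalar GRU step: the update gate interpolates between the old state and a candidate state, and that additive path is what lets gradients survive long sequences. A minimal sketch with hypothetical hand-set weights (real models learn vector-valued parameters):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, p):
    """One scalar GRU step; p holds hypothetical (not trained) weights."""
    z = sigmoid(p["wz"] * x + p["uz"] * h_prev + p["bz"])        # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h_prev + p["br"])        # reset gate
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h_prev) + p["bh"])  # candidate
    return (1 - z) * h_prev + z * h_tilde  # blend old and candidate state

params = {k: 0.5 for k in ("wz", "uz", "bz", "wr", "ur", "br", "wh", "uh", "bh")}
h = 0.0
for x in [1.0, -1.0, 0.5]:
    h = gru_step(x, h, params)
```

The GRU merges the LSTM's forget and input gates into the single update gate `z` and drops the separate cell state, which is why it trains faster with fewer parameters; the LSTM's extra cell state can still help on very long dependencies.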
Module 13: Seq2Seq & Attention Frameworks
- Encoder–Decoder workflows
- Attention mechanism explained simply
- Applications like translation & summarization
- Beam search for sequence generation
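Beam search keeps the `k` highest-scoring partial sequences at each decoding step instead of committing greedily to one. A self-contained sketch over a toy table of next-token probabilities (a stand-in for a decoder's softmax output; in a real seq2seq model these probabilities would depend on the prefix decoded so far):

```python
import math

def beam_search(step_probs, beam_width=2):
    """Beam search over a fixed per-step table of next-token probabilities."""
    beams = [([], 0.0)]  # (sequence, cumulative log-probability)
    for probs in step_probs:
        candidates = []
        for seq, score in beams:
            for tok, p in probs.items():
                # Sum log-probabilities to avoid floating-point underflow
                candidates.append((seq + [tok], score + math.log(p)))
        # Keep only the top-k scoring partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

steps = [{"the": 0.6, "a": 0.4},
         {"cat": 0.7, "dog": 0.3},
         {"sat": 0.9, "ran": 0.1}]
best = beam_search(steps, beam_width=2)
```

With `beam_width=1` this reduces to greedy decoding; wider beams trade compute for a better chance of finding the globally highest-probability translation or summary.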
Module 14: Transformers — Complete Breakdown
- Self-attention explained visually
- Multi-head attention & positional encoding
- Modern transformer architectures:
  - BERT
  - GPT models
  - RoBERTa
  - T5 & BART
  - DistilBERT & ALBERT
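All of the architectures above are built from the same core operation: scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A pure-Python sketch for tiny 2-dimensional toy matrices (real implementations batch this with tensor libraries and add multiple heads plus positional encodings):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two tokens, 2-dimensional queries/keys/values (toy numbers)
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = self_attention(Q, K, V)
```

Because each query attends most strongly to its matching key, each output row is pulled toward the corresponding value vector; multi-head attention runs several such maps in parallel with different learned projections.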