Overview
In our previous lesson on RNNs, LSTMs, and GRUs, we explored the sequential approach to modeling language. While these architectures revolutionized NLP, they still suffered from fundamental limitations in handling long-range dependencies and parallelization.
This lesson introduces the Transformer architecture, a paradigm shift that replaced recurrence with attention mechanisms. First introduced in the seminal 2017 paper "Attention Is All You Need" by Vaswani et al., Transformers have become the foundation of modern NLP models like BERT, GPT, and T5 that have dramatically advanced the state of the art.
Learning Objectives
After completing this lesson, you will be able to:
- Understand the key innovations and motivations behind the Transformer architecture
- Explain self-attention and multi-head attention mechanisms in detail
- Describe positional encoding and why it's necessary
- Compare encoder-only, decoder-only, and encoder-decoder transformer variants
- Implement basic transformer components
- Recognize how transformers enable modern language models
The Need for a New Architecture
The Limitations of RNNs Revisited
As we saw in the previous lesson, RNNs and their variants face several critical limitations:
- Sequential Processing: Processing tokens one at a time creates a bottleneck for training and inference
- Limited Context Window: Even LSTMs struggle with very long-range dependencies
- Vanishing Gradients: Despite improvements, still an issue for very long sequences
Analogy: Information Highways vs. Relay Races
Think of an RNN as a relay race where information is passed from one runner (time step) to the next. If the race is long, messages can get distorted or lost along the way, and the entire race is only as fast as the slowest runner.
In contrast, a Transformer is like a highway system where every location has direct high-speed connections to every other location. Information doesn't have to flow sequentially but can take direct routes, and all routes can be traveled simultaneously.
The Transformer Architecture: A High-Level View
Architectural Overview
[Interactive tool: Transformer Architecture Overview, a complete visualization of the encoder-decoder architecture with data flow. The encoder stack handles parallel processing and context understanding; the encoder-decoder connection handles source-target alignment and information transfer; the decoder stack handles autoregressive generation and sequential output.]
Key Innovations
The Transformer introduced several groundbreaking innovations:
- Self-Attention: Allows each position to directly attend to all positions
- Multi-Head Attention: Enables attention across different representation subspaces
- Positional Encoding: Captures sequence order without recurrence
- Residual Connections + Layer Normalization: Facilitates training of deep networks
- Feed-Forward Networks: Adds non-linearity and transforms representations
- Parallel Processing: Enables efficient training and inference
Self-Attention: The Core Mechanism
Understanding Attention
Attention allows a model to focus on relevant parts of the input sequence when making predictions. It computes a weighted sum of values, where weights reflect the relevance of each value to the current context.
The Intuition Behind Self-Attention
[Interactive tool: Attention Pattern Visualizer, showing the difference between self-attention and cross-attention, with self-attention visualized over the example sentence "The cat sat on the mat".]
Quick Comparison
Self-Attention
- Q, K, V all come from the same sequence
- Tokens attend to one another within that sequence
- Builds contextual representations
- Used in: BERT, GPT, encoder and decoder layers
Cross-Attention
- Q comes from the decoder; K and V come from the encoder
- The decoder attends to the encoder
- Connects two sequences
- Used in: T5, BART, translation models
To interpret a sentence like the example above, the model must determine which words are most relevant to each other, for instance which noun a pronoun such as "it" refers to in a longer sentence. Self-attention allows the model to learn these relevance patterns directly.
Query, Key, Value (QKV) Framework
Self-attention can be conceptualized using the Query-Key-Value framework:
- Query (Q): What we're looking for
- Key (K): What we match against
- Value (V): What we retrieve if there's a match
Think of it as a sophisticated dictionary lookup:
- The Query is like your search term
- The Keys are like the dictionary entries
- The Values are the definitions you retrieve
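To make the lookup analogy concrete, here is a minimal sketch of attention as a "soft" dictionary lookup. The vectors and values below are made-up toy numbers, not anything a trained model would produce: instead of returning a single matching definition, the query is compared against every key and the result is a weighted blend of all the values.

```python
import torch
import torch.nn.functional as F

# Toy "soft lookup": one query against three key/value pairs (numbers are arbitrary)
query = torch.tensor([1.0, 0.0, 1.0, 0.0])
keys = torch.tensor([[1.0, 0.0, 1.0, 0.0],    # very similar to the query
                     [0.0, 1.0, 0.0, 1.0],    # unrelated to the query
                     [1.0, 1.0, 0.0, 0.0]])   # partially related
values = torch.tensor([[10.0, 0.0],
                       [0.0, 10.0],
                       [5.0, 5.0]])

scores = keys @ query                 # similarity between the query and each key
weights = F.softmax(scores, dim=0)    # a soft, differentiable "match"
output = weights @ values             # weighted blend of all values
print(weights, output)
```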
Self-Attention Computation: Step-by-Step
1. Projection: Generate Query, Key, and Value vectors by multiplying the input embeddings by learned weight matrices
2. Score Calculation: Compute attention scores as the product of Q and the transpose of K
3. Scaling: Divide the scores by the square root of the key dimension so that large dot products do not push the softmax into regions with extremely small gradients
4. Masking (decoder only): Apply a mask to prevent attending to future positions
5. Softmax: Apply softmax to turn the scores into a probability distribution over positions
6. Weighted Sum: Multiply the attention weights by the values and sum
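Putting these six steps together gives the scaled dot-product attention formula from the original paper:

$$\text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where $d_k$ is the dimension of the key vectors.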
Visualizing Self-Attention
[Interactive tool: Self-Attention Mechanism, a step-by-step walkthrough that starts from the input sequence, projects the embeddings into Query, Key, and Value matrices (each token gets its own Q, K, and V vectors), and then traces the rest of the attention computation.]
Multi-Head Attention: Attending to Different Aspects
Why Multiple Attention Heads?
Self-attention with a single attention mechanism (or "head") can only capture one type of relationship between words. But language has many types of relationships (syntactic, semantic, referential, etc.).
Multiple attention heads allow the model to:
- Attend to different representation subspaces simultaneously
- Capture different types of dependencies (e.g., syntactic vs. semantic)
- Create a richer representation by combining these diverse perspectives
Multi-Head Attention Mechanism
💡 Tip: Use the attention patterns mode in the tool above to see how different heads capture different relationships.
Mathematical Formulation
For each head $i$:
$$\text{head}_i = \text{Attention}(QW_i^Q,\ KW_i^K,\ VW_i^V)$$
The outputs from all heads are concatenated and linearly transformed:
$$\text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, \ldots, \text{head}_h)\,W^O$$
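As an illustration, PyTorch's nn.MultiheadAttention packages this split, attend, concatenate, and project computation into a single module; the sizes below are arbitrary example values.

```python
import torch
import torch.nn as nn

embed_size, heads, seq_len, batch = 512, 8, 10, 2     # example sizes, chosen arbitrarily
mha = nn.MultiheadAttention(embed_size, heads, batch_first=True)

x = torch.randn(batch, seq_len, embed_size)           # token representations
out, attn_weights = mha(x, x, x)                       # self-attention: Q = K = V = x
print(out.shape)            # (2, 10, 512), same shape as the input
print(attn_weights.shape)   # (2, 10, 10), attention averaged over the 8 heads
```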
Analogy: Multiple Expert Consultants
Think of multi-head attention as consulting multiple experts who each focus on different aspects of a problem:
- One linguist focuses on grammar
- Another focuses on vocabulary
- A third focuses on cultural context
- A fourth focuses on tone
Each provides valuable insights from their perspective, and together they create a more comprehensive understanding than any single expert could provide.
Positional Encoding: Preserving Sequence Order
The Problem: Transformers Don't Know Position
Unlike RNNs, the self-attention mechanism is inherently permutation-invariant—it doesn't consider the order of tokens. This is a problem because word order is crucial in language understanding.
For example, these sentences have very different meanings despite using the same words:
- "The dog chased the cat"
- "The cat chased the dog"
Solution: Positional Encoding
Transformers add positional information to each word embedding using sinusoidal functions:
$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right)$$
Where:
- $pos$ is the position
- $i$ is the dimension index
- $d_{\text{model}}$ is the embedding dimension
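As a quick sanity check, here is a small NumPy sketch that evaluates these formulas directly; the sequence length and embedding size are toy values chosen arbitrarily.

```python
import numpy as np

max_len, d_model = 8, 16                     # toy sizes for illustration
pos = np.arange(max_len)[:, None]            # positions 0..7 as a column
i = np.arange(0, d_model, 2)[None, :]        # even dimension indices as a row
angles = pos / np.power(10000, i / d_model)

pe = np.zeros((max_len, d_model))
pe[:, 0::2] = np.sin(angles)                 # sine on even dimensions
pe[:, 1::2] = np.cos(angles)                 # cosine on odd dimensions
print(pe.shape)                              # (8, 16): one unique vector per position
```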
Visualizing Positional Encoding
[Interactive tool: Positional Encoding Visualizer, showing sinusoidal positional encodings as a position-vs-dimension heatmap (darker colors indicate higher values) and as sine/cosine waves by dimension.]
Key Properties of Sinusoidal Positional Encoding
- Unique Pattern: Each position gets a unique encoding
- Fixed Offset: For any fixed offset k, the encoding of position pos + k can be expressed as a linear function of the encoding of position pos, which makes relative positions easy for the model to represent
- Extrapolation: Can generalize to longer sequences than seen in training
- No New Parameters: Unlike learned positional embeddings, requires no additional parameters
Embedding + Positional Encoding
The final input to the transformer is the sum of the word embedding and the positional encoding at each position:
$$x_{pos} = \text{Embedding}(\text{token}_{pos}) + PE_{pos}$$
Key Insight: The same positional encoding visualization above shows how these encodings combine with word embeddings to create the final input representation.
The Building Blocks: Encoder and Decoder
Transformer Encoder
The encoder processes the input sequence and consists of:
- Multi-Head Self-Attention: Each position attends to all positions
- Feed-Forward Neural Network: A two-layer network with ReLU activation
- Residual Connections: Helps gradient flow and stabilizes training
- Layer Normalization: Normalizes inputs to each sub-layer
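For reference, PyTorch bundles these same four components into nn.TransformerEncoderLayer; the following is a minimal sketch with arbitrary example sizes, not a prescription for a particular model.

```python
import torch
import torch.nn as nn

# One encoder layer: self-attention + FFN, each wrapped in a residual connection and LayerNorm
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=2048,
                                   dropout=0.1, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)   # a stack of 6 identical layers

x = torch.randn(2, 10, 512)        # (batch, seq_len, d_model)
out = encoder(x)
print(out.shape)                   # torch.Size([2, 10, 512])
```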
[Interactive tool: Transformer Encoder Visualizer, a detailed visualization of the encoder with multi-head attention and feed-forward networks.]
Feed-Forward Network (FFN)
The FFN applies the same transformation to each position independently:
$$\text{FFN}(x) = \max(0,\, xW_1 + b_1)\,W_2 + b_2$$
This is equivalent to two dense layers with a ReLU activation in between. The FFN transforms the representations and introduces non-linearity.
Transformer Decoder
The decoder generates the output sequence and has three main components:
- Masked Multi-Head Self-Attention: Each position attends only to previous positions
- Cross-Attention: Attends to the encoder's output
- Feed-Forward Neural Network: Same structure as in the encoder
[Interactive tool: Transformer Decoder Visualizer, a detailed visualization of the decoder with masked attention, cross-attention, and feed-forward networks.]
Masking in the Decoder
The decoder must generate text autoregressively (one token at a time), so it can't "see" future tokens during training. This is achieved using a look-ahead mask.
💡 Tip: Use the self-attention mode in the tool above to see how masking prevents the decoder from attending to future positions.
Common pitfalls:
- Off-by-one masking: use a strictly upper-triangular mask so that position t can attend to itself and earlier positions, but never to positions greater than t.
- Padding mask mixing: combine the causal mask with the key padding mask correctly so that attention never leaks onto padding tokens.
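Here is a minimal sketch of building and combining the two masks. The shapes and the convention that True marks blocked positions are assumptions; libraries differ on whether True means "keep" or "mask out".

```python
import torch

seq_len = 5
# Causal mask: True above the diagonal marks positions that must NOT be attended to,
# so position t can still attend to itself and to earlier positions.
causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

# Padding mask: True where the (toy) batch contains padding tokens
pad = torch.tensor([[False, False, False, True, True]])         # (batch=1, seq_len)

# Combine: a key position is blocked if it is in the future OR if it is padding
combined = causal.unsqueeze(0) | pad.unsqueeze(1)                # (batch, seq_len, seq_len)
print(combined.int())
```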
Cross-Attention
Cross-attention allows the decoder to focus on relevant parts of the input sequence:
[Interactive tool: Cross-Attention Mechanism, showing how the decoder focuses on relevant parts of the encoder's output sequence when generating each token. The example pairs the English source "The transformer architecture is powerful" with the French target "L'architecture de transformer est puissante".]
Example cross-attention weights for this pair (rows are decoder tokens, columns are encoder tokens):
Decoder \ Encoder | The | transformer | architecture | is | powerful |
---|---|---|---|---|---|
L' | 0.26 | 0.09 | 0.11 | 0.32 | 0.22 |
architecture | 0.06 | 0.14 | 0.64 | 0.08 | 0.07 |
de | 0.24 | 0.34 | 0.12 | 0.00 | 0.29 |
transformer | 0.07 | 0.61 | 0.17 | 0.07 | 0.08 |
est | 0.31 | 0.12 | 0.21 | 0.17 | 0.20 |
puissante | 0.25 | 0.14 | 0.18 | 0.18 | 0.25 |
How Cross-Attention Works
Cross-attention is a key component that connects the encoder and decoder in the Transformer architecture:
- The encoder processes the input sequence and produces representations for each token
- For each token it generates, the decoder needs to focus on relevant parts of the input
- Cross-attention computes compatibility between the current decoder state and all encoder outputs
- This allows the decoder to dynamically attend to different parts of the input as needed
- The attention weights determine how much each input token influences the current output token
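A minimal sketch of cross-attention using PyTorch's nn.MultiheadAttention follows; the tensor sizes are arbitrary, and the only change from self-attention is that the key and value come from the encoder output rather than from the decoder.

```python
import torch
import torch.nn as nn

d_model, heads = 512, 8
cross_attn = nn.MultiheadAttention(d_model, heads, batch_first=True)

enc_out = torch.randn(2, 7, d_model)    # encoder output: 7 source tokens
dec_x = torch.randn(2, 5, d_model)      # decoder states: 5 target tokens so far

# Query from the decoder, key and value from the encoder
out, weights = cross_attn(query=dec_x, key=enc_out, value=enc_out)
print(out.shape)      # (2, 5, 512): one updated vector per target position
print(weights.shape)  # (2, 5, 7): how much each target token attends to each source token
```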
The Full Architecture: Putting It All Together
Complete Transformer Architecture
Now that we've explored the individual components, let's see how they work together in the complete architecture. Use the overview mode in the tool above to see the full encoder-decoder stack, or explore other modes to dive deeper into specific mechanisms.
Training the Transformer
Transformers are typically trained with:
- Teacher forcing: Using ground truth as decoder input during training
- Label smoothing: Preventing overconfidence by softening the target distribution
- Learning rate scheduling: Using warmup and decay for optimal convergence
- Large batch sizes: Stabilizing training with more examples per update
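A compact sketch of how two of these pieces might look in PyTorch; the stand-in model, vocabulary size, optimizer settings, and warmup length are arbitrary example values, not prescriptions.

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 32000)                           # stand-in for a transformer's output head
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)    # label smoothing softens the targets
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.98), eps=1e-9)

warmup_steps = 4000
def lr_lambda(step):
    # Linear warmup followed by inverse-square-root decay (the shape used in the
    # original paper; the scaling here is simplified to peak at the base lr)
    step = max(step, 1)
    return min(step ** -0.5, step * warmup_steps ** -1.5) * warmup_steps ** 0.5

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# During training, call scheduler.step() after each optimizer.step()
```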
Computational Complexity
The self-attention mechanism has quadratic complexity with respect to sequence length:
$$O(n^2 \cdot d)$$
Where:
- $n$ is the sequence length
- $d$ is the representation dimension
For example, doubling the sequence length from 1,024 to 2,048 tokens quadruples the size of the attention score matrix.
This can be a limitation for very long sequences, leading to various efficient transformer variants that reduce this complexity.
Transformer Variants: Encoder-Only, Decoder-Only, and Encoder-Decoder
The transformer architecture has evolved into three main variants, each optimized for different types of tasks:
[Interactive tool: Transformer Model Comparison, letting you select and compare models across the three architecture types: encoder-only (e.g., BERT, RoBERTa), decoder-only (e.g., GPT, LLaMA), and encoder-decoder (e.g., T5, BART).]
Encoder-Only Models
BERT (2018): Bidirectional Encoder Representations from Transformers. Devlin et al. (Google)
- Parameters: 110M - 340M
- Pre-training: Masked Language Modeling (MLM), Next Sentence Prediction (NSP)
- Architecture: 12 - 24 encoder layers, hidden size 768 - 1024, 12 - 16 attention heads
- Applications: Classification, NER, Question Answering, Sentence Similarity
- Key characteristics: Bidirectional context, non-autoregressive (static) predictions
RoBERTa (2019): Robustly Optimized BERT Pretraining Approach. Liu et al. (Facebook AI)
- Parameters: 125M - 355M
- Pre-training: Masked Language Modeling (MLM) only, no NSP
- Architecture: 12 - 24 encoder layers, hidden size 768 - 1024, 12 - 16 attention heads
- Applications: Classification, NER, Question Answering, Sentence Similarity
- Key characteristics: Improved training procedure, larger batch sizes, more data
DistilBERT (2019): Distilled BERT. Sanh et al. (Hugging Face)
- Parameters: 66M
- Pre-training: Knowledge distillation from BERT, MLM
- Architecture: 6 encoder layers, hidden size 768, 12 attention heads
- Applications: Lightweight classification, mobile applications
- Key characteristics: 40% smaller, 60% faster, retains about 97% of BERT's performance
Decoder-Only Models
GPT (2018): Generative Pre-trained Transformer. Radford et al. (OpenAI)
- Parameters: 117M
- Pre-training: Autoregressive Language Modeling
- Architecture: 12 decoder layers, hidden size 768, 12 attention heads
- Applications: Text generation, completion, classification (with prompting)
- Key characteristics: Unidirectional (left-to-right) context
GPT-2 (2019): Generative Pre-trained Transformer 2. Radford et al. (OpenAI)
- Parameters: 124M - 1.5B
- Pre-training: Autoregressive Language Modeling on a diverse corpus
- Architecture: 12 - 48 decoder layers, hidden size 768 - 1600, 12 - 25 attention heads
- Applications: Text generation, completion, zero-shot task performance
- Key characteristics: Larger context window, zero-shot capabilities
GPT-3 (2020): Generative Pre-trained Transformer 3. Brown et al. (OpenAI)
- Parameters: 175B
- Pre-training: Autoregressive Language Modeling on a massive dataset
- Architecture: 96 decoder layers, hidden size 12288, 96 attention heads
- Applications: Few-shot learning, code generation, translation, writing
- Key characteristics: Emergent few-shot abilities, API access only
Encoder-Decoder Models
T5 (2019): Text-to-Text Transfer Transformer. Raffel et al. (Google)
- Parameters: 60M - 11B
- Pre-training: Span corruption, unified text-to-text format
- Architecture: 8 - 24 encoder and decoder layers, hidden size 512 - 1024, 6 - 16 attention heads
- Applications: Translation, summarization, QA, classification
- Key characteristics: All tasks framed as text-to-text, relative positional encoding
BART (2019): Bidirectional and Auto-Regressive Transformers. Lewis et al. (Facebook AI)
- Parameters: 140M - 400M
- Pre-training: Denoising autoencoder with text corruption
- Architecture: 6 - 12 encoder and decoder layers, hidden size 768 - 1024, 12 - 16 attention heads
- Applications: Summarization, translation, QA, dialog
- Key characteristics: Combines a bidirectional encoder with an autoregressive decoder
Key Differences
- Encoder-Only Models (BERT, RoBERTa) excel at understanding tasks like classification, NER, and sentiment analysis.
- Decoder-Only Models (GPT family) are designed for text generation and completion tasks.
- Encoder-Decoder Models (T5, BART) shine in sequence-to-sequence tasks like translation and summarization.
- As models scale up in parameters, their capabilities increase, but so do their computational requirements.
- Pre-training objectives significantly influence what tasks a model excels at.
Key Distinctions:
Encoder-Only Models (BERT, RoBERTa, DistilBERT)
- Bidirectional attention across all tokens
- Suitable for understanding tasks: classification, NER, sentiment analysis
- Cannot generate text autoregressively
Decoder-Only Models (GPT, GPT-2, GPT-3, GPT-4)
- Causal (masked) attention to prevent looking ahead
- Excellent for text generation and completion
- Can be adapted for understanding tasks with prompting
Encoder-Decoder Models (T5, BART, Pegasus)
- Best of both worlds: bidirectional encoding + autoregressive decoding
- Excel at sequence-to-sequence tasks: translation, summarization
- More complex but very versatile
Implementation: Building a Simple Transformer
Implementing Self-Attention in PyTorch
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import math

class SelfAttention(nn.Module):
    def __init__(self, embed_size, heads):
        super(SelfAttention, self).__init__()
        self.embed_size = embed_size
        self.heads = heads
        self.head_dim = embed_size // heads
        # Separate projections for queries, keys, and values, plus an output projection
        self.q_proj = nn.Linear(embed_size, embed_size)
        self.k_proj = nn.Linear(embed_size, embed_size)
        self.v_proj = nn.Linear(embed_size, embed_size)
        self.out_proj = nn.Linear(embed_size, embed_size)

    def forward(self, query, key, value, mask=None):
        # Self-attention: query == key == value.
        # Cross-attention: query from the decoder, key/value from the encoder.
        b, t_q = query.size(0), query.size(1)
        def split(proj, x):
            # Project and reshape to (batch, heads, seq_len, head_dim)
            return proj(x).view(b, x.size(1), self.heads, self.head_dim).transpose(1, 2)
        q, k, v = split(self.q_proj, query), split(self.k_proj, key), split(self.v_proj, value)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)  # scaled dot-product scores
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))    # block masked positions
        weights = F.softmax(scores, dim=-1)                          # attention distribution
        out = (weights @ v).transpose(1, 2).reshape(b, t_q, self.embed_size)
        return self.out_proj(out)
```
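A quick usage check of the module above, continuing from that definition; the batch size, sequence length, and embedding size are arbitrary example values.

```python
x = torch.randn(2, 10, 512)                    # (batch, seq_len, embed_size)
attn = SelfAttention(embed_size=512, heads=8)
out = attn(x, x, x)                            # self-attention: query = key = value
print(out.shape)                               # torch.Size([2, 10, 512])
```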
Implementing Positional Encoding
```python
class PositionalEncoding(nn.Module):
    def __init__(self, embed_size, max_len=5000):
        super(PositionalEncoding, self).__init__()
        pe = torch.zeros(max_len, embed_size)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, embed_size, 2).float() * (-math.log(10000.0) / embed_size))
        # Apply sin to even indices
        pe[:, 0::2] = torch.sin(position * div_term)
        # Apply cos to odd indices
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))  # shape: (1, max_len, embed_size)

    def forward(self, x):
        # Add the encoding for the first seq_len positions to the embeddings
        return x + self.pe[:, :x.size(1)]
```
Transformer Encoder Layer
```python
class TransformerEncoderLayer(nn.Module):
    def __init__(self, embed_size, heads, dropout, forward_expansion):
        super(TransformerEncoderLayer, self).__init__()
        self.attention = SelfAttention(embed_size, heads)
        self.norm1 = nn.LayerNorm(embed_size)
        self.norm2 = nn.LayerNorm(embed_size)
        self.feed_forward = nn.Sequential(
            nn.Linear(embed_size, forward_expansion * embed_size),
            nn.ReLU(),
            nn.Linear(forward_expansion * embed_size, embed_size),
        )
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask=None):
        # Self-attention sub-layer with residual connection and layer norm
        x = self.norm1(x + self.dropout(self.attention(x, x, x, mask)))
        # Feed-forward sub-layer with residual connection and layer norm
        x = self.norm2(x + self.dropout(self.feed_forward(x)))
        return x
```
Transformer Decoder Layer
```python
class TransformerDecoderLayer(nn.Module):
    def __init__(self, embed_size, heads, dropout, forward_expansion):
        super(TransformerDecoderLayer, self).__init__()
        self.attention = SelfAttention(embed_size, heads)
        self.cross_attention = SelfAttention(embed_size, heads)
        self.norm1 = nn.LayerNorm(embed_size)
        self.norm2 = nn.LayerNorm(embed_size)
        self.norm3 = nn.LayerNorm(embed_size)
        self.feed_forward = nn.Sequential(
            nn.Linear(embed_size, forward_expansion * embed_size),
            nn.ReLU(),
            nn.Linear(forward_expansion * embed_size, embed_size),
        )
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, enc_out, tgt_mask=None, src_mask=None):
        # Masked self-attention over the target sequence
        x = self.norm1(x + self.dropout(self.attention(x, x, x, tgt_mask)))
        # Cross-attention: queries from the decoder, keys/values from the encoder
        x = self.norm2(x + self.dropout(self.cross_attention(x, enc_out, enc_out, src_mask)))
        # Position-wise feed-forward network
        x = self.norm3(x + self.dropout(self.feed_forward(x)))
        return x
```
Applications: How Transformers Revolutionized NLP
Machine Translation
The original transformer model was designed for machine translation and significantly improved the state of the art on the WMT English-to-German and English-to-French translation tasks.
Qualitative Comparison
The machine translation improvements can be seen in the model comparison tool we used earlier. Switch to the "Model Comparison" mode in any of the TransformerExplorer tools above to compare translation quality across different architectures.
Language Modeling and Text Generation
Transformer-based language models like GPT can generate remarkably coherent and contextually appropriate text.
Code Example: Text Generation with a Simple Transformer
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import math

# Simple GPT-like model for text generation
class SimpleGPT(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_heads, num_layers, max_seq_len, dropout=0.1):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, embed_dim)
        self.pos_emb = nn.Embedding(max_seq_len, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, 4 * embed_dim,
                                           dropout, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers)
        self.lm_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, idx):
        # idx: (batch, seq_len) token ids
        seq_len = idx.size(1)
        pos = torch.arange(seq_len, device=idx.device)
        x = self.token_emb(idx) + self.pos_emb(pos)
        # Causal mask: each position attends only to itself and earlier positions
        causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(idx.device)
        x = self.blocks(x, mask=causal_mask)
        return self.lm_head(x)  # (batch, seq_len, vocab_size) logits
```
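Assuming the model above, a minimal greedy decoding loop might look like this; the model sizes, prompt tokens, and generation length are made up for illustration.

```python
import torch

vocab_size, embed_dim, num_heads, num_layers, max_seq_len = 1000, 128, 4, 2, 64
model = SimpleGPT(vocab_size, embed_dim, num_heads, num_layers, max_seq_len)
model.eval()

tokens = torch.tensor([[1, 5, 42]])            # toy prompt of token ids
with torch.no_grad():
    for _ in range(10):                        # generate 10 more tokens greedily
        logits = model(tokens)                 # (1, seq_len, vocab_size)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_token], dim=1)
print(tokens)
```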
Bidirectional Understanding and Masked Language Modeling
BERT and its variants use transformer encoders with masked language modeling to develop bidirectional understanding of text.
Code Example: Masked Language Modeling with Transformer Encoder
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import random

# BERT-like model for masked language modeling
class SimpleBERT(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_heads, num_layers, max_seq_len, dropout=0.1):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, embed_dim)
        self.pos_emb = nn.Embedding(max_seq_len, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, 4 * embed_dim,
                                           dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.mlm_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, idx, padding_mask=None):
        # idx: (batch, seq_len) token ids; padding_mask: True where a token is padding
        pos = torch.arange(idx.size(1), device=idx.device)
        x = self.token_emb(idx) + self.pos_emb(pos)
        # No causal mask: every token attends to every other token (bidirectional)
        x = self.encoder(x, src_key_padding_mask=padding_mask)
        return self.mlm_head(x)  # logits over the vocabulary at every position
```
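Assuming the encoder above, here is a minimal sketch of the masked-language-modeling objective; the mask probability, the [MASK] token id, and all sizes are made-up example values.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, num_heads, num_layers, max_seq_len = 1000, 128, 4, 2, 64
model = SimpleBERT(vocab_size, embed_dim, num_heads, num_layers, max_seq_len)

MASK_ID, IGNORE = 3, -100                       # hypothetical [MASK] id; -100 is ignored by the loss
tokens = torch.randint(4, vocab_size, (2, 16))  # toy batch of token ids
labels = tokens.clone()

mask = torch.rand(tokens.shape) < 0.15          # choose ~15% of positions
tokens[mask] = MASK_ID                          # replace them with [MASK]
labels[~mask] = IGNORE                          # compute the loss only on masked positions

logits = model(tokens)                          # (2, 16, vocab_size)
loss = nn.CrossEntropyLoss(ignore_index=IGNORE)(logits.view(-1, vocab_size), labels.view(-1))
print(loss.item())
```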
Limitations and Future Directions
Current Limitations
- Quadratic Complexity: Self-attention scales poorly with sequence length
- Context Window: Limited by training and architecture constraints
- Interpretability: Understanding attention patterns isn't straightforward
- Data Hunger: Requires massive amounts of data for best performance
- Compute Resources: Training large models requires significant resources
Efficient Transformer Variants
Researchers are developing efficient transformers to address these limitations:
Model | Innovation | Complexity | Performance | Max Context |
---|---|---|---|---|
Vanilla Transformer | Self-attention | O(n²) | Base | 512-1024 |
Longformer | Local + global attention | O(n) | Similar | 4,096 |
Reformer | LSH attention | O(n log n) | Slightly lower | 2,048 |
Linformer | Linear projections | O(n) | Slightly lower | 2,048 |
Performer | FAVOR+ mechanism | O(n) | Similar | 64,000+ |
Transformer-XL | Recurrence mechanism | O(n²) | Better | 8,192 |
Routing Transformer | Clustered attention | O(n√n) | Better | 16,384 |
Note: Performance is relative to vanilla Transformer of similar size.
The Future of Transformers
Transformers continue to evolve in several exciting directions:
- Multimodal Transformers: Processing text, images, audio, and video together
- Domain-Specific Architectures: Specialized for specific fields (science, medicine)
- Mixture of Experts: Using sparse activation to scale to trillions of parameters
- Retrieval-Augmented Models: Enhancing LLMs with external knowledge access
- More Efficient Attention: Continuing to reduce the quadratic complexity
Conclusion: The Foundation of Modern NLP
The transformer architecture represents one of the most significant breakthroughs in natural language processing. By introducing self-attention, positional encoding, and parallel processing, transformers solved the fundamental limitations of sequential models while enabling the creation of increasingly powerful language models.
Key innovations of the transformer:
- Self-attention: Direct modeling of relationships between all sequence positions
- Parallel processing: Elimination of sequential dependencies for faster training
- Scalability: Architecture that grows effectively with more data and compute
- Versatility: Success across numerous NLP tasks and domains
The transformer has become the foundation for modern language models like BERT, GPT, T5, and their successors. Understanding this architecture is essential for working with contemporary NLP systems.
In our next lessons, we'll explore how transformer architectures evolved into the powerful language models of today, including the deterministic and probabilistic methods for text generation, and then survey the modern landscape of language models from BERT and GPT to the latest innovations like Llama 3 and Claude 3.
Practice Exercises
1. Implement Self-Attention:
   - Write a simplified version of the self-attention mechanism
   - Visualize attention weights for a sample sentence
   - Experiment with different scaling factors
2. Positional Encoding Analysis:
   - Implement sinusoidal positional encoding
   - Analyze how different positions are represented
   - Visualize positional encoding vectors
3. Transformer Architecture Comparison:
   - Compare performance of RNN vs. Transformer on a simple task
   - Measure inference time for both architectures
   - Analyze computational complexity at different sequence lengths
4. Pre-trained Model Exploration:
   - Fine-tune a small pre-trained transformer for a classification task
   - Analyze attention patterns in different heads
   - Experiment with different layer freezing strategies
Additional Resources
- Attention Is All You Need - The original transformer paper
- The Illustrated Transformer by Jay Alammar - Detailed visualization and explanation of the Transformer
- The Annotated Transformer - Implementation walkthrough
- Transformers from Scratch by Peter Bloem
- Hugging Face Transformers - Library with implementations of transformer models
- A Survey of Long-Term Context in Transformers - Overview of efficient transformer variants