NLP with Deep Learning
Orientation
From Author
Lectures Introduction
Prepare Exercise
Environment Setup
Install Anaconda
Install VSCode
Wrap-up
Introduction to Deep Learning
What is Deep Learning
What is Good AI
ML Project Workflow
Basic Math
Wrap-up
PyTorch Tutorials
Why PyTorch
Install PyTorch
Tensor
Basic Operations
Shaping Operations
Slicing Operations
Other Operations
Wrap-up
Linear Layer
Matrix Multiplications
Matrix Multiplication Exercise
What is Linear Layer
Linear Layer Exercise
How to use GPU
Wrap-up
Loss Function
Mean Squared Error
Mean Squared Error Exercise
Wrap-up
Gradient Descent
Introduction to Calculus
Partial Derivative
What is Gradient Descent
Learning Rate
Gradient Descent Exercise
PyTorch AutoGrad
Wrap-up
Linear Regression
What is Linear Regression
Linear Regression Equations
Linear Regression Exercise
Wrap-up
Logistic Regression
Activation Functions
What is Logistic Regression
BCE Loss
Logistic Regression Equations
Logistic Regression Exercise
Wrap-up
Deep Neural Networks I
What is Deep Neural Network
Back Propagation
Back Propagation Equations
Vanishing Gradient Problem
ReLU
Deep Regression Exercise
Wrap-up
Stochastic Gradient Descent
What is SGD
SGD Intuition
Batch Size
SGD Exercise
Wrap-up
Optimizer
What is Hyper-parameter
Tip
Momentum, Adaptive LR and Adam Optimizer
Momentum, Adaptive LR and Adam Optimizer Equations
Adam Optimizer Exercise
Wrap-up
How to Prevent Overfitting
How to Evaluate
Overfitting
Test Dataset
Split Dataset Exercise
Wrap-up
Deep Neural Networks II
Binary Classification
Evaluation Metrics
Binary Classification Exercise
Classification
Softmax and Cross Entropy
Confusion Matrix
Classification Exercise
Wrap-up
Regularizations
What is Regularization
Weight Decay
Data Augmentation
Dropout
Batch Normalization
Regularization Exercise
Wrap-up
Practical Exercises
What we need in practice
Workflow
Exercise Briefing
Implement a Classifier Model
Implement a DataLoader
Implement a Trainer
Implement train.py
Implement predict.ipynb
Wrap-up
Representation Learning
What is Feature
One-hot Encodings
Dimension Reduction
Autoencoders
Wrap-up
Probabilistic Perspective
What is different
Basic Stats
Maximum Likelihood Estimation
MLE and DNNs
MLE Equations
MSE Loss
Wrap-up
Convolutional Neural Networks
Introduction to CNN
Max-pooling and Stride
Tips on using CNNs
CNN Exercise
Wrap-up
Recurrent Neural Networks
Introduction to Recurrent Neural Networks
RNN Step by Step
Applications of RNN
LSTM
Gradient Clipping
LSTM Exercise
Wrap-up
Introduction to NLP
What is NLP
NLP with Deep Learning
NLP vs Others
Why NLP is Difficult
Why Korean NLP is More Difficult
History of Neural NLP
Recent Trend of NLP
Preprocessing
Tokenization Exercise
Characteristics of Tokenization Styles
Pipeline
Subword Segmentation
Subword Segmentation Exercise
Detokenization
Detokenization Exercise
Parallel Corpus Alignment
Parallel Corpus Alignment Exercise
Tips on Preprocessing
Mini-batchify
TorchText
Wrap-up
Crawling
Cleaning
Regular Expressions
Cleaning with RegEx Exercise
Labeling
Tokenization
Word Embedding
Word2Vec
GloVe
Introduction to Word Embedding
FastText
Equations
Dimension Reduction Perspective
Applications
Word Embedding Exercise
Embedding Layers
Overview of Sentence Embedding
Wrap-up
Word Sense
WordNet
WordNet Exercise
Traditional Methods
Similarity of Words
Traditional Method Exercise
Text Classification
Implement Data Loaders
Text Classification Results
Introduction to Text Classification
Appendix - Using BERT
Appendix - Using FastText
Tips
Wrap-up
Using RNN
Exercise Briefing
RNN Exercise
Using CNN
CNN Exercise
Implement Trainer Exercise
Introduction to NLG
What is NLG
Latent Representation
Context Embedding
Language Modeling
Introduction to Language Modeling
LM Equations
n-gram
Smoothing and Discounting
Interpolation and Back-off
Perplexity
n-gram Summary
Appendix - n-gram Exercise
RNN LM
Perplexity and Cross Entropy
Autoregressive and Teacher Forcing
Wrap-up
Self-supervised Learning
Sequence to Sequence
Introduction to Machine Translation
Introduction to Sequence to Sequence
Applications
Encoder
Decoder
Generator
Attention
Masking
Input Feeding
Teacher Forcing
Exercise Briefing
Implement Encoder
Implement Attention
Implement Decoder
Implement Generator
Integrate Modules
Implement Trainer
Implement Data Loaders
Implement train.py
Wrap-up
Inference for NLG
What is different
Greedy vs Sampling
Penalties
Exercise Briefing
Implement Inference
Implement predict.py
Check the results
Wrap-up
Evaluations
Perplexity vs BLEU
Extrinsic and Intrinsic Evaluations
Tips
Wrap-up
Beam Search
Introduction to Beam Search
Exercise Briefing
Beam Search Exercise
Check the Results
Wrap-up
Transformer
Introduction to Transformer
Multi-head Attention
Encoder
Decoder with Masking
Positional Encoding
Appendix - Layer Normalization
Exercise Briefing
Multi-head Attention Exercise
Encoder Block Exercise
Decoder Block Exercise
Transformer Class Exercise
Positional Encoding Exercise
Implement Inference
Wrap-up
Adv. Topics on NLG
Conditional Sequence to Sequence
Language Modeling Ensemble
Back Translation
Motivations
Introduction to RL
Policy Gradients
Minimum Risk Training
Tips
Exercise Briefing
Implement Reward Function
Implement Equations
Check the results
Wrap-up
Adv. Topics on Machine Translation
Introduction to Dual Learning
Dual Supervised Learning
Exercise Briefing
Implement Language Model
Implement LM Trainer
Implement Dual Trainer
Implement Loss Function
Check the results
Dual Learning for Machine Translation
Appendix - Dual Learning
Dual Unsupervised Learning
Back Translation Review
Wrap-up
Introduction to PLM
Transfer Learning
Self-supervised Learning
Introduction
Downstream Tasks
Previous Methods
Word Embedding Review
ELMo
Pretrained Language Models
Introduction
Language Model Review
Subword Segmentation Review
Autoregressive Models - GPT
Autoencoding Models - BERT
Calibrated BERT - RoBERTa
Encoder-Decoder Models - BART
Huggingface Exercise
Introduction
Exercise Briefing
data_loader
BERT Trainer with Ignite
Entry Point
Huggingface Trainer
Inference
Demonstration
Light PLMs
Introduction
ALBERT
Knowledge Distillation
PLM Compression with Knowledge Distillation
Paradigm Shift with GPT-3
Text-to-Text Framework
GPT-3
Prompt Engineering
Few-shot Learning with Smaller PLM
Automatic Prompt Generation in PET