Dive into Deep Learning in 1 Day
================================
Last updated: |today|
Information
-----------
- Speaker: `Alex Smola `__
Overview
--------
Did you ever want to find out about deep learning but didn’t have
months to spend? New to machine learning? Do you want to build image
classifiers and NLP apps, or train models on many GPUs or even on many
machines? If you’re an engineer or data scientist, this course is for
you. It covers roughly the material of a Coursera course, all packed
into one day. The course consists of four segments of 90 minutes each.
1. Deep Learning Basics
2. Convolutional Neural Networks for computer vision
3. Best practices (GPUs, Parallelization, Fine Tuning, Transfer
   Learning)
4. Recurrent Neural Networks for natural language (RNN, LSTM)
Prerequisites
-------------
You should have some basic knowledge of `Linear
Algebra `__,
`Calculus `__,
`Probability `__,
and `Python `__ (here’s `another
book `__ to
learn Python). Moreover, you should have some experience with
`Jupyter `__ or
`SageMaker `__ notebooks. To run things
on (multiple) GPUs you need access to a GPU server, such as the
`P2 `__,
`G3 `__, or
`P3 `__ instances.
Syllabus
--------
- This course relies heavily on the `Dive into Deep
  Learning `__ book. There’s a lot more detail in
  the book (notebooks, examples, math, applications).
- The crash course will get you started. For more information, see also the
  `other courses and tutorials `__ based on the
  book.
- All notebooks below are available at
  `d2l-ai/1day-notebooks `__,
  which contains instructions on how to set up the running environments.

=========== =================================================================================
Time        Topics
=========== =================================================================================
9:00—10:00  `Part 1: Deep learning basics <#part-1-deep-learning-basics>`__
10:00—11:00 `Part 2: Convolutional neural networks <#part-2-convolutional-neural-networks>`__
11:00—12:00 `Part 3: Performance <#part-3-performance>`__
12:00—13:00 `Part 4: Recurrent neural networks <#part-4-recurrent-neural-networks>`__
=========== =================================================================================

Part 1: Deep Learning Basics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Slides**: `[keynote] `__,
`[pdf] `__
**Notebooks**:
1. Data Manipulation with NumPy
`[ipynb] `__
`[slides] `__
2. Automatic Differentiation
`[ipynb] `__
`[slides] `__
3. Linear Regression
`[ipynb] `__
`[slides] `__
4. Image Classification Data (Fashion-MNIST)
`[ipynb] `__
`[slides] `__
5. Softmax Regression
`[ipynb] `__
`[slides] `__
6. Multilayer Perceptrons
`[ipynb] `__
`[slides] `__
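The topics above can be tasted without any framework at all. Below is a minimal sketch of linear regression trained by gradient descent in plain NumPy; it is illustrative only and not one of the course notebooks (the synthetic data, learning rate, and epoch count are arbitrary choices):

```python
import numpy as np

# Illustrative sketch (not course code): batch gradient descent
# for linear regression, written in plain NumPy.
def fit_linear(X, y, lr=0.1, epochs=200):
    """Fit y ≈ X @ w + b by minimizing mean squared error."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        err = X @ w + b - y            # residuals, shape (n,)
        w -= lr * (X.T @ err) / n      # gradient of 0.5 * mean(err**2) w.r.t. w
        b -= lr * err.mean()           # gradient w.r.t. b
    return w, b

# Recover known parameters from synthetic, noise-free data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = X @ np.array([2.0, -3.4]) + 4.2
w, b = fit_linear(X, y)                # w ≈ [2.0, -3.4], b ≈ 4.2
```

The real notebooks do the same thing with a framework's autograd instead of hand-derived gradients, which is exactly the point of the Automatic Differentiation notebook.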
Part 2: Convolutional neural networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Slides**: `[keynote] `__,
`[pdf] `__
**Notebooks**:
1. GPUs
`[ipynb] `__
`[slides] `__
2. Convolutions
`[ipynb] `__
`[slides] `__
3. Pooling
`[ipynb] `__
`[slides] `__
4. Convolutional Neural Networks (LeNet)
`[ipynb] `__
`[slides] `__
5. Deep Convolutional Neural Networks (AlexNet)
`[ipynb] `__
`[slides] `__
6. Inception Networks (GoogLeNet)
`[ipynb] `__
`[slides] `__
7. Residual Networks (ResNet)
`[ipynb] `__
`[slides] `__
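The core operation underlying all of the architectures above is the convolution (strictly speaking, cross-correlation). Here is a minimal plain-NumPy sketch, illustrative only and not taken from the course notebooks:

```python
import numpy as np

# Illustrative sketch (not course code): 2-D cross-correlation,
# the operation inside a convolutional layer.
def corr2d(X, K):
    """Slide kernel K over input X and sum the elementwise products."""
    h, w = K.shape
    out = np.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return out

X = np.arange(9.0).reshape(3, 3)
K = np.array([[0.0, 1.0], [2.0, 3.0]])
Y = corr2d(X, K)   # → [[19., 25.], [37., 43.]]
```

A 3×3 input with a 2×2 kernel yields a 2×2 output; frameworks add padding, stride, and channels on top of this basic loop.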
Part 3: Performance
~~~~~~~~~~~~~~~~~~~
**Slides**: `[keynote] `__,
`[pdf] `__
**Notebooks**:
1. A Hybrid of Imperative and Symbolic Programming
`[ipynb] `__
`[slides] `__
2. Multi-GPU Computation Implementation from Scratch
`[ipynb] `__
`[slides] `__
3. Concise Implementation of Multi-GPU Computation
`[ipynb] `__
`[slides] `__
4. Fine Tuning
`[ipynb] `__
`[slides] `__
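The multi-GPU notebooks are built around data parallelism: each device computes the gradient on its shard of the minibatch, and the gradients are averaged (an allreduce) before the shared parameters are updated. Below is a CPU-only NumPy sketch of that update, illustrative only; real multi-GPU code would place the shards on devices and use a hardware allreduce primitive:

```python
import numpy as np

# Illustrative sketch (not course code): one data-parallel SGD step
# for linear regression, with "devices" simulated on the CPU.
def allreduce_step(w, X, y, lr, num_devices):
    shards_X = np.array_split(X, num_devices)
    shards_y = np.array_split(y, num_devices)
    grads = []
    for Xs, ys in zip(shards_X, shards_y):
        err = Xs @ w - ys
        grads.append(Xs.T @ err / len(ys))   # local gradient on this "device"
    g = np.mean(grads, axis=0)               # allreduce: average the gradients
    return w - lr * g                        # update the shared parameters

# With equal-sized shards this matches the single-device full-batch step.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
y = X @ np.array([1.0, -1.0, 0.5])
w = np.zeros(3)
for _ in range(300):
    w = allreduce_step(w, X, y, lr=0.2, num_devices=4)
```

Because the shards here are equal in size, averaging the shard gradients reproduces the full-batch gradient exactly, so the parallel run converges to the same solution as the serial one.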
Part 4: Recurrent neural networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Slides**: `[keynote] `__,
`[pdf] `__
**Notebooks**:
1. Text Preprocessing
`[ipynb] `__
`[slides] `__
2. Concise Implementation of Recurrent Neural Networks
`[ipynb] `__
`[slides] `__
3. Long Short-Term Memory (LSTM)
`[ipynb] `__
`[slides] `__
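At the heart of the LSTM notebook is a gated cell update. Here is a plain-NumPy sketch of a single LSTM step using the standard gate equations; it is illustrative only, and the weight layout (the four gates stacked into one matrix) is a common convention rather than the course's notebook code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sketch (not course code): one forward step of an LSTM cell.
def lstm_step(x, h, c, W, U, b):
    """x: input (d,); h, c: hidden and cell state (n,); W: (4n, d); U: (4n, n); b: (4n,)."""
    z = W @ x + U @ h + b
    n = h.shape[0]
    i = sigmoid(z[:n])            # input gate: how much new information to admit
    f = sigmoid(z[n:2 * n])       # forget gate: how much old cell state to keep
    o = sigmoid(z[2 * n:3 * n])   # output gate: how much cell state to expose
    g = np.tanh(z[3 * n:])        # candidate cell values
    c_new = f * c + i * g         # gated cell-state update
    h_new = o * np.tanh(c_new)    # new hidden state
    return h_new, c_new

# Run a short random sequence through the cell.
rng = np.random.default_rng(2)
d, n = 4, 3
W, U = rng.normal(size=(4 * n, d)), rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for t in range(5):
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
```

Since the hidden state is an output gate times a tanh, every component of ``h`` stays strictly inside (-1, 1), while the cell state ``c`` can grow, which is what lets LSTMs carry information across long sequences.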