Sequence Modeling | Deep Learning Specialization | Coursera

Course planning Week 1: Recurrent neural networks Learn about recurrent neural networks. This type of model has been proven to perform extremely well on temporal data. It has several variants including LSTMs, GRUs and Bidirectional RNNs, which you are going to learn about in this section. Recurrent neural networks C4W1L01 Why sequence models C4W1L02 Notation […]

Neural Networks and Deep Learning | Deep Learning Specialization | Coursera

Lecture Planning Week 1: Introduction to Deep Learning Welcome to the Deep Learning Specialization C1W1L01 Welcome Introduction to Deep Learning C1W1L02 Welcome C1W1L03 What is a neural network? C1W1L04 Supervised Learning with Neural Networks C1W1L05 Why is Deep Learning taking off? C1W1L06 About this Course C1W1R1 Frequently Asked Questions C1W1L07 Course Resources C1W1R2 How to use […]

Curriculum Learning | Bengio et al. | ICML 2009 | 2009

Brief information Authors: Yoshua Bengio, Jérôme Louradour, Ronan Collobert, Jason Weston Published year: 2009 Publication: ICML 2009 Abstract Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones. We formalize such training strategies in the context of […]
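The training strategy the abstract alludes to can be made concrete with a toy sketch: score each training example with a difficulty measure and admit harder examples only in later stages. The norm-based difficulty score and the three-stage schedule below are illustrative assumptions, not the paper's formalization.

```python
# Toy curriculum-learning sketch (difficulty score and staging are assumptions).
import numpy as np

rng = np.random.default_rng(0)
# 100 synthetic (input, label) pairs standing in for a real dataset
data = [(rng.normal(size=8), int(rng.integers(2))) for _ in range(100)]

def difficulty(example):
    x, _ = example
    return float(np.linalg.norm(x))        # e.g. treat large-norm inputs as "harder"

curriculum = sorted(data, key=difficulty)   # order examples from simple to complex

# Train in stages, gradually admitting more complex examples.
for stage in (0.25, 0.5, 1.0):
    subset = curriculum[: int(stage * len(curriculum))]
    # a real implementation would run a training pass here; we only report the stage
    print(f"stage {stage}: {len(subset)} examples, "
          f"max difficulty {difficulty(subset[-1]):.2f}")
```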

One-Shot Imitation Learning | Yan Duan et al. | 2017

Summary Abstract Ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Task examples: to stack all blocks […]

Conditional Generative Adversarial Nets | M. Mirza, S. Osindero | 2014

Introduction Conditional version of Generative Adversarial Nets (GAN) in which both the generator and the discriminator are conditioned on some extra data y (a class label or data from another modality). Architecture Feed y into both the generator and the discriminator as an additional input layer, so that y and the usual input are combined in a joint hidden representation.
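A minimal sketch of that conditioning, assuming simple MLP networks and a one-hot label y; the layer sizes and data dimensions below are placeholders, not the paper's exact architecture.

```python
# Conditional GAN sketch: concatenate the condition y with each network's usual input.
import torch
import torch.nn as nn

NOISE_DIM, LABEL_DIM, DATA_DIM = 100, 10, 784   # assumed sizes (e.g. MNIST-like data)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + LABEL_DIM, 256),  # z and y meet in a joint hidden layer
            nn.ReLU(),
            nn.Linear(256, DATA_DIM),
            nn.Tanh(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))   # y fed as an additional input

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DATA_DIM + LABEL_DIM, 256),    # x and y combined the same way
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

# Usage: generate and score a sample conditioned on class 3.
z = torch.randn(1, NOISE_DIM)
y = torch.zeros(1, LABEL_DIM); y[0, 3] = 1.0
fake = Generator()(z, y)
score = Discriminator()(fake, y)
```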

Studying Generative Adversarial Networks (GANs)

References Lecture 13: Generative Models. CS231n: Convolutional Neural Networks for Visual Recognition. Spring 2017. [SLIDE][VIDEO] Generative Adversarial Nets. Goodfellow et al. NIPS 2014. [LINK][arXiv] How to Train a GAN? Tips and tricks to make GANs work. Soumith Chintala. GitHub. [LINK] The GAN Zoo. Avinash Hindupur. GitHub. [LINK]

Lecture 2: Markov Decision Processes | Reinforcement Learning | David Silver | Course

1. Markov Process / Markov chain 1.1. Markov process A Markov process (or Markov chain) is a tuple ⟨S, P⟩ such that S is a finite set of states and P is a state transition probability matrix. In a Markov process the initial state must be given; how that initial state is chosen is not part of the Markov process itself. 1.2. State […]
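As a small illustration of the definition above, the sketch below builds a three-state chain with a made-up transition matrix and samples a trajectory from a supplied initial state; the state names and probabilities are arbitrary assumptions.

```python
# Markov chain sketch: a finite state set S plus a transition matrix P whose rows sum to 1.
import numpy as np

states = ["Sleep", "Study", "Play"]            # finite set of states S
P = np.array([[0.2, 0.6, 0.2],                 # P[s, s'] = Pr(S_{t+1} = s' | S_t = s)
              [0.3, 0.4, 0.3],
              [0.5, 0.3, 0.2]])
assert np.allclose(P.sum(axis=1), 1.0)         # each row is a probability distribution

rng = np.random.default_rng(0)
s = 0                                          # the initial state is supplied from outside the chain
trajectory = [states[s]]
for _ in range(5):
    s = rng.choice(len(states), p=P[s])        # next state depends only on the current state
    trajectory.append(states[s])
print(trajectory)
```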

Reinforcement Learning | David Silver | Course

Brief information Instructor: David Silver Course homepage: [LINK] Video lecture list: [LINK] Lecture schedule Lecture 1: Introduction to Reinforcement Learning Lecture 2: Markov Decision Processes Lecture 3: Planning by Dynamic Programming Lecture 4: Model-Free Prediction Lecture 5: Model-Free Control Lecture 6: Value Function Approximation Lecture 7: Policy Gradient Methods Lecture 8: Integrating Learning and Planning […]

Batch Normalization | Summary

References Sergey Ioffe, Christian Szegedy (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. ICML 2015. [ICML][arXiv] Lecture 6: Training Neural Networks, Part 1. CS231n: Convolutional Neural Networks for Visual Recognition. 48:52~1:04:39. [YouTube] Choung young jae (2017. 7. 2.). PR-021: Batch Normalization. YouTube. [YouTube] tf.nn.batch_normalization. TensorFlow. [LINK] Rui Shu (27 DEC 2016). A GENTLE […]