Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization | Deep Learning Specialization | Coursera

Brief information Course name: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization Instructor: Andrew Ng Institution: deeplearning.ai Media: Coursera Specialization: Deep Learning Duration: 3 weeks About this Course This course will teach you the “magic” of getting deep learning to work well. Rather than the deep learning process being a black box, you will understand […]

One-Shot Imitation Learning | Yan Duan et al. | 2017

Summary Abstract Ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Task examples: to stack all blocks […]

Structuring Machine Learning Projects | Deep Learning Specialization | Coursera

Brief information Course name: Structuring Machine Learning Projects Instructor: Andrew Ng Institution: deeplearning.ai Media: Coursera Specialization: Deep Learning Duration: 2 weeks About this Course You will learn how to build a successful machine learning project. If you aspire to be a technical leader in AI, and know how to set direction for your team’s work, this […]

Conditional Generative Adversarial Nets | M. Mirza, S. Osindero | 2014

Introduction A conditional version of Generative Adversarial Nets (GAN) in which both the generator and the discriminator are conditioned on some data y (a class label or data from some other modality). Architecture Feed y into both the generator and the discriminator as an additional input layer, so that y and the original input are combined in a joint hidden representation.
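A minimal sketch of this conditioning idea, assuming a one-hot label y and simple fully connected networks; the framework (PyTorch) and the layer sizes are illustrative assumptions, not the paper’s exact architecture:

```python
# Sketch of conditional GAN conditioning: the label y is concatenated with the
# generator's noise vector and with the discriminator's input, so each network
# sees (input, y) as a joint representation. Sizes are assumptions (e.g. MNIST).
import torch
import torch.nn as nn

NOISE_DIM, NUM_CLASSES, DATA_DIM = 100, 10, 784

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 256),
            nn.ReLU(),
            nn.Linear(256, DATA_DIM),
            nn.Tanh(),
        )

    def forward(self, z, y_onehot):
        # Combine noise and condition into one joint input.
        return self.net(torch.cat([z, y_onehot], dim=1))

class ConditionalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DATA_DIM + NUM_CLASSES, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, x, y_onehot):
        # The discriminator also receives the condition alongside the data.
        return self.net(torch.cat([x, y_onehot], dim=1))

# Example: generate 8 fake samples conditioned on random labels.
labels = torch.eye(NUM_CLASSES)[torch.randint(0, NUM_CLASSES, (8,))]
fake = ConditionalGenerator()(torch.randn(8, NOISE_DIM), labels)
```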

Studying Generative Adversarial Networks (GANs)

References Lecture 13: Generative Models. CS231n: Convolutional Neural Networks for Visual Recognition. Spring 2017. [SLIDE][VIDEO] Generative Adversarial Nets. Goodfellow et al. NIPS 2014. [LINK][arXiv] How to Train a GAN? Tips and tricks to make GANs work. Soumith Chintala. GitHub. [LINK] The GAN Zoo. Avinash Hindupur. GitHub. [LINK]

Lecture 2: Markov Decision Processes | Reinforcement Learning | David Silver | Course

1. Markov Process / Markov chain 1.1. Markov process A Markov process or Markov chain is a tuple $\langle S,P \rangle$ such that $S$ is a finite set of states, and $P$ is a transition probability matrix. In a Markov process, the initial state should be given. How we choose the initial state is not a role of […]
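As a toy illustration of the definition $\langle S,P \rangle$, the sketch below samples a trajectory from a hypothetical 3-state chain; the states and the transition matrix are made-up values, not from the lecture:

```python
# Toy Markov chain <S, P>: S is a finite state set, P[i][j] = Pr(next=j | current=i),
# and the initial state is supplied externally, as noted above.
import numpy as np

S = ["sunny", "cloudy", "rainy"]            # finite state set (hypothetical)
P = np.array([[0.7, 0.2, 0.1],              # each row sums to 1
              [0.3, 0.4, 0.3],
              [0.2, 0.5, 0.3]])

def sample_chain(initial_state, steps, rng=np.random.default_rng(0)):
    """Sample a trajectory of the chain, starting from a given initial state."""
    trajectory = [initial_state]
    state = S.index(initial_state)
    for _ in range(steps):
        state = rng.choice(len(S), p=P[state])   # draw next state from row P[state]
        trajectory.append(S[state])
    return trajectory

print(sample_chain("sunny", steps=5))
```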

Reinforcement Learning | David Silver | Course

Brief information Instructor: David Silver Course homepage: [LINK] Video lecture list: [LINK] Lecture schedule Lecture 1: Introduction to Reinforcement Learning Lecture 2: Markov Decision Processes Lecture 3: Planning by Dynamic Programming Lecture 4: Model-Free Prediction Lecture 5: Model-Free Control Lecture 6: Value Function Approximation Lecture 7: Policy Gradient Methods Lecture 8: Integrating Learning and Planning […]

Inception Module | Summary

References Udacity (2016. 6. 6.). Inception Module. YouTube. [LINK] Udacity (2016. 6. 6.). 1×1 Convolutions. YouTube. [LINK] Tommy Mulc (2016. 9. 25.). Inception modules: explained and implemented. [LINK] Szegedy et al. (2015). Going Deeper with Convolutions. CVPR 2015. [arXiv] Summary History The inception module was first introduced in GoogLeNet for the ILSVRC’14 competition. Key concept Let a convolutional network decide […]
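A minimal sketch of such a module, assuming the GoogLeNet-style layout of parallel 1×1, 3×3 and 5×5 branches with 1×1 reductions and channel-wise concatenation; the channel sizes here are illustrative, not the ones from the paper:

```python
# Sketch of an inception module: parallel 1x1, 3x3, 5x5 convolutions plus a
# pooled branch, concatenated along the channel axis. 1x1 convolutions reduce
# channel depth before the larger filters. Channel counts are assumptions.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, 64, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=1),             # 1x1 reduction
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=1),             # 1x1 reduction
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 32, kernel_size=1),
        )

    def forward(self, x):
        # All branches preserve spatial size, so outputs concatenate on channels.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Example: a 3-channel input yields 64 + 64 + 32 + 32 = 192 output channels.
out = InceptionModule(3)(torch.randn(1, 3, 28, 28))
```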

Batch Normalization | Summary

References Sergey Ioffe, Christian Szegedy (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. ICML 2015. [ICML][arXiv] Lecture 6: Training Neural Networks, Part 1. CS231n: Convolutional Neural Networks for Visual Recognition. 48:52~1:04:39 [YouTube] Choung young jae (2017. 7. 2.). PR-021: Batch Normalization. YouTube. [YouTube] tf.nn.batch_normalization. TensorFlow. [LINK] Rui Shu (27 DEC 2016). A GENTLE […]