Descriptions
Data Science: Modern Deep Learning in Python. This course picks up where my first course, Deep Learning in Python, left off. You already know how to build an artificial neural network in Python, and you have a working TensorFlow script ready to use. Neural networks are one of the core techniques of machine learning and are always a top contender in Kaggle competitions. If you want to improve your skills with neural networks and deep learning, this course is for you. You've already learned about backpropagation, but there were still many unanswered questions: how can you modify it to make training faster and more effective? In this course, you'll learn about batch and stochastic gradient descent, two commonly used techniques that let you train on only a small sample of the data at each iteration, significantly speeding up training. You'll also learn about momentum, which can help you escape local minima and means you don't have to be so conservative with your learning rate.
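To give a taste of the mini-batch and momentum ideas mentioned above, here is a minimal NumPy sketch of mini-batch gradient descent with momentum on a toy one-weight regression problem. The data, learning rate, and momentum coefficient are illustrative assumptions, not the course's actual code:

```python
import numpy as np

# Toy data for y = 3x + noise (illustrative; not from the course)
rng = np.random.default_rng(0)
X = rng.normal(size=(200,))
y = 3.0 * X + 0.1 * rng.normal(size=200)

w = 0.0                  # the single weight we want to learn
v = 0.0                  # momentum "velocity"
lr, mu = 0.01, 0.9       # learning rate and momentum coefficient
batch_size = 32

for epoch in range(50):
    idx = rng.permutation(len(X))          # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]  # one mini-batch of indices
        grad = 2 * np.mean((w * X[b] - y[b]) * X[b])  # dL/dw on the batch
        v = mu * v - lr * grad             # accumulate velocity
        w += v                             # momentum update

print(round(w, 1))  # w ends up close to the true slope of 3
```

Each update sees only 32 of the 200 examples, yet the weight still converges; the velocity term smooths out the noise of the small batches.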
You'll also learn about adaptive learning rate techniques such as AdaGrad, RMSprop, and Adam, which can likewise speed up training. Since you already know the basics of neural networks, we'll cover more modern techniques like dropout regularization and batch normalization, which we implement in both TensorFlow and Theano. The course is continuously updated, and more advanced regularization techniques will be added in the near future. My previous course only briefly introduced TensorFlow; in this course we start with the basics so you understand exactly what's going on: what are TensorFlow variables and expressions, and how can you use these building blocks to create a neural network? We will also look at a library that has been around much longer and remains very popular for deep learning: Theano. Here too we'll cover the basic building blocks (variables, expressions, and functions) so you can build neural networks in Theano with confidence.
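To make the adaptive learning rate idea concrete, here is a minimal sketch of the Adam update rule on a toy one-dimensional loss. The loss function and hyperparameters are illustrative assumptions, not the course's code:

```python
import numpy as np

# Minimize f(w) = (w - 5)^2 with Adam (illustrative toy example)
w = 0.0
m, v = 0.0, 0.0                    # first and second moment estimates
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 501):
    g = 2 * (w - 5.0)              # gradient of the loss at w
    m = b1 * m + (1 - b1) * g      # running mean of gradients
    v = b2 * v + (1 - b2) * g**2   # running mean of squared gradients
    m_hat = m / (1 - b1**t)        # bias correction for the warm-up phase
    v_hat = v / (1 - b2**t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step

print(round(w, 1))  # w converges toward the minimum at 5
```

Dividing by the running root-mean-square of the gradients gives each parameter its own effective step size, which is what AdaGrad, RMSprop, and Adam all have in common.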
What will you learn
- Apply momentum to backpropagation to train neural networks.
- Apply adaptive learning rate techniques such as AdaGrad, RMSprop, and Adam to backpropagation to train neural networks.
- Understand the basic building blocks of TensorFlow.
- Build a neural network in TensorFlow.
- Write a neural network using Keras.
- Write a neural network using PyTorch.
- Understand the difference between full gradient descent, batch gradient descent, and stochastic gradient descent.
- Understand and implement dropout regularization.
- Understand and implement batch normalization.
- Understand the basic building blocks of Theano.
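As a small illustration of the dropout regularization listed above, here is a sketch of the common "inverted dropout" forward pass in NumPy. The function name and keep probability are hypothetical, not taken from the course:

```python
import numpy as np

def dropout_forward(a, p_keep, rng):
    """Inverted dropout: zero out each unit with probability 1 - p_keep,
    then rescale the survivors so the expected activation is unchanged."""
    mask = (rng.random(a.shape) < p_keep) / p_keep
    return a * mask

rng = np.random.default_rng(42)
a = np.ones((1000, 100))                    # fake layer activations
out = dropout_forward(a, p_keep=0.8, rng=rng)

print(round(out.mean(), 2))  # mean stays close to 1.0 despite the dropped units
```

Because the surviving units are scaled up by 1/p_keep during training, no rescaling is needed at test time; the network simply runs without the mask.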
Who is this course for?
- Students and professionals who want to deepen their knowledge in the field of machine learning
- Data scientists who want to learn more about deep learning
- Data scientists who already know about backpropagation and gradient descent and want to extend them with stochastic batch learning, momentum, and adaptive learning rate techniques such as RMSprop.
- Those who don't already know about backpropagation or softmax should first take my previous course, Deep Learning in Python.
Specifications of Data Science: Modern Deep Learning in Python
- Publisher: Udemy
- Teacher: Lazy Programmer Inc.
- Language: English
- Level: All levels
- Number of lessons: 90
- Duration: 11 hours and 22 minutes
Content of Data Science: Modern Deep Learning in Python
Requirements
- Be able to work with Python, NumPy, and Matplotlib.
- If you don’t already know about gradient descent, backpropagation, and softmax, take my previous course, Deep Learning in Python, and then come back to this course.
Installation instructions
Extract the files and watch them with your preferred video player.
Subtitles: English
Quality: 720p
Download links
Password file(s): free download software
File size
2.92 GB