Like the course just released on Hidden Markov Models, Recurrent Neural Networks are all about learning from sequences. But whereas Markov Models are limited by the Markov assumption, Recurrent Neural Networks are not. As a result, they are more expressive and more powerful than anything we've seen on tasks that we haven't made progress on in at least 20 years.

So what's going to be covered in this online course, and how will it build on the previous neural network courses and Hidden Markov Models?

In the first section of this course we are going to add the idea of time to our neural networks.

You'll learn about the Simple Recurrent Unit, also known as the Elman unit.
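To make the idea concrete, here is a minimal numpy sketch of an Elman-style forward pass. The variable names and sizes are illustrative, not taken from the course code; the essential point is that each hidden state depends on both the current input and the previous hidden state.

```python
import numpy as np

def elman_forward(X, Wx, Wh, bh, h0):
    """Forward pass of a simple (Elman) recurrent unit.

    X:  (T, D) input sequence
    Wx: (D, M) input-to-hidden weights
    Wh: (M, M) hidden-to-hidden weights
    bh: (M,)   hidden bias
    h0: (M,)   initial hidden state
    Returns the (T, M) array of hidden states.
    """
    T = X.shape[0]
    M = Wh.shape[0]
    H = np.zeros((T, M))
    h = h0
    for t in range(T):
        # the new hidden state mixes the current input with the previous state
        h = np.tanh(X[t].dot(Wx) + h.dot(Wh) + bh)
        H[t] = h
    return H

# toy usage: a length-5 sequence of 3-dimensional inputs, 4 hidden units
np.random.seed(0)
X = np.random.randn(5, 3)
Wx = np.random.randn(3, 4) * 0.1
Wh = np.random.randn(4, 4) * 0.1
H = elman_forward(X, Wx, Wh, np.zeros(4), np.zeros(4))
print(H.shape)  # (5, 4)
```

Setting `Wh` to zero would reduce this to an ordinary feedforward layer applied independently at each time step, which is a useful way to see exactly what the recurrence adds.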

We are going to revisit the XOR problem, but we're going to extend it so that it becomes the parity problem – you'll see that regular feedforward neural networks have trouble solving this problem, but recurrent networks work because the key is to treat the input as a sequence.
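The reason a sequential view helps is easy to see if we write parity (one natural many-bit extension of XOR) as a recurrence: a single bit of state, XORed with each incoming bit, solves the problem for any sequence length. This is exactly the kind of state update a recurrent network can learn, while a feedforward net must handle all bits at once. A tiny sketch:

```python
def parity_sequential(bits):
    """Compute the parity of a bit sequence with one bit of running state.

    state <- state XOR next_bit is the recurrence an RNN would have to learn.
    Returns 1 if the number of ones is odd, else 0.
    """
    state = 0
    for b in bits:
        state = state ^ b  # update the state from (previous state, current input)
    return state

print(parity_sequential([0, 1]))        # 1 -- agrees with XOR on 2 bits
print(parity_sequential([1, 0, 1, 1]))  # 1 -- three ones, so odd parity
```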

In the next section of the course, we are going to revisit one of the most popular applications of recurrent neural networks – language modeling.

You saw when we studied Markov Models that we could do things like generate poetry, and it didn't look too bad. We could even discriminate between 2 different poets just from the sequences of parts-of-speech tags they used.

In this course, we are going to extend our language model so that it no longer makes the Markov assumption.

Another important application of neural networks for language is word vectors, or word embeddings. The most common technique for this is called Word2Vec, but I'll show you how recurrent neural networks can also be used for creating word vectors.
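Whichever algorithm trains them, word embeddings end up as the rows of a matrix indexed by word ID; in an RNN language model, that matrix is simply the first layer's weights, and looking up a sentence's vectors is a single indexing operation. A minimal sketch (the sizes and names here are illustrative, not from the course):

```python
import numpy as np

V, D = 10, 4                       # toy vocabulary size and embedding dimensionality
We = np.random.randn(V, D) * 0.1   # embedding matrix: row i is the vector for word i

sentence = [3, 7, 1]               # a sentence encoded as word indices
X = We[sentence]                   # the (3, 4) sequence of word vectors fed to the RNN
print(X.shape)  # (3, 4)
```

Training the language model updates the rows of `We` by backpropagation, so words that appear in similar contexts are pushed toward similar vectors.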

In the section after, we'll look at the very important LSTM, or long short-term memory unit, and the more modern and efficient GRU, or gated recurrent unit, which has been shown to achieve comparable performance.
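As a preview of what the gating looks like, here is a hedged numpy sketch of one GRU step. Parameter names are mine, not the course's, and note that references differ on which of `z` and `1 - z` multiplies the old state; the structure (a reset gate, an update gate, and a candidate state) is what matters.

```python
import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

def gru_step(x, h_prev, p):
    """One GRU step. p maps names like 'Wxr' (input-to-reset-gate) to arrays."""
    r = sigmoid(x.dot(p['Wxr']) + h_prev.dot(p['Whr']) + p['br'])  # reset gate
    z = sigmoid(x.dot(p['Wxz']) + h_prev.dot(p['Whz']) + p['bz'])  # update gate
    # candidate state: the reset gate decides how much of the past to consult
    h_hat = np.tanh(x.dot(p['Wxh']) + (r * h_prev).dot(p['Whh']) + p['bh'])
    # the update gate blends the old state with the candidate
    return (1 - z) * h_prev + z * h_hat

# toy usage: 3-dimensional input, 5 hidden units
D, M = 3, 5
rng = np.random.default_rng(0)
p = {}
for gate in ('r', 'z', 'h'):
    p['Wx' + gate] = rng.standard_normal((D, M)) * 0.1
    p['Wh' + gate] = rng.standard_normal((M, M)) * 0.1
    p['b' + gate] = np.zeros(M)
h = gru_step(rng.standard_normal(D), np.zeros(M), p)
print(h.shape)  # (5,)
```

Compared with the Elman unit, each step costs three matrix products instead of one, but the gates let gradients flow across many more time steps.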

We will apply these to some more practical problems like learning a language model from Wikipedia data and visualizing the word embeddings that we get as a result.

All of the materials required for this online course can be easily downloaded and installed for FREE. We will do most of our work in Numpy, Matplotlib, and Theano. Moreover, I am always available to answer your questions and help you in your data science journey.

This course focuses on how to build and understand, not just how to use. Anyone can learn to use an API in 15 minutes after reading some documentation. It's not about memorizing facts, it's about seeing for yourself via experimentation. It will teach you how to visualize what's happening inside the model. If you want more than just a superficial look at machine learning models, then this course is for you.

**NOTES:**

Download all the code for the course from my GitHub: /lazyprogrammer/machine_learning_examples

**In the directory**: rnn_class

Make sure you always “git pull” so you can have the latest version!

**HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:**

- Calculus
- Linear algebra
- Probability (conditional and joint distributions)
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations, loading a CSV file
- Deep learning: backpropagation, XOR problem
- Can write a neural network in Theano and Tensorflow

**TIPS:**

- Watch it at 2x.
- Take handwritten notes. This will drastically increase your ability to retain the information.
- Write down the equations. If you don’t, I guarantee it will just look like gibberish.
- Ask lots of questions on the discussion board. The more the better!
- Realize that most exercises will take you days or weeks to complete.
- Write code yourself, don’t just sit there and look at my code.

For more details about this course, see Recurrent Neural Networks in Python.
