In the field of machine learning, algorithms are primarily categorized into two fundamental types: batch learning and online learning. This classification is based on whether the system can learn incrementally from a continuous flow of incoming data.
Note: Machine learning can also be categorized as supervised, unsupervised, and reinforcement learning, based on the amount and type of supervision received during training.

In this tutorial, you will learn about:
- Batch Learning / Offline Learning
- Online Learning
Batch Learning / Offline Learning
In batch learning, the system is unable to learn incrementally after training. The model must be trained once on the complete dataset, which can take a long time and require substantial computing resources.

So if we get some new data, how can we add it to this model? We have to retrain the model from scratch using the whole dataset (old data + new data).

That again costs more time and computing resources. We can solve this problem using algorithms that are capable of learning incrementally. This is called online learning.
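The retrain-from-scratch cost described above can be sketched with scikit-learn. This is a minimal illustration on synthetic data, assuming scikit-learn is installed; the model and dataset here are placeholders, not a specific recommendation.

```python
# Batch (offline) learning: any new data forces a full retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Initial training on the complete dataset.
X_old = rng.normal(size=(1000, 5))
y_old = (X_old[:, 0] + X_old[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_old, y_old)

# New data arrives. A batch model cannot absorb it incrementally,
# so we must retrain from scratch on old + new data combined.
X_new = rng.normal(size=(200, 5))
y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
X_all = np.vstack([X_old, X_new])
y_all = np.concatenate([y_old, y_new])
model = LogisticRegression().fit(X_all, y_all)  # full retrain, full cost
```

Note that the second `fit` call discards everything learned in the first one; the cost of each retrain grows with the accumulated dataset.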
Online Learning
In online learning, the model can continue to learn from new data as it arrives.

Instead of training the model all at once on the complete dataset, we can feed the data in small groups, also known as mini-batches. Or we can train the model on individual data points from the stream.

The advantage is that the model can be trained in less time and with fewer computing resources.
One important parameter here is the learning rate, which controls how much the model learns from new data. A high learning rate means the system adapts to new data quickly, but at the same time it forgets the old data quickly as well. A low learning rate means it learns more slowly and is less sensitive to noise in the new data.
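The trade-off above can be seen by training two online regressors that differ only in their learning rate. This is a sketch on synthetic data; the specific `eta0` values are arbitrary assumptions chosen to make the contrast visible.

```python
# Effect of the learning rate on online updates: after one pass over
# the stream, the fast learner has adapted far more than the slow one.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(500, 3))
y = X @ true_w + rng.normal(scale=0.1, size=500)

fast = SGDRegressor(learning_rate="constant", eta0=0.1, random_state=0)
slow = SGDRegressor(learning_rate="constant", eta0=0.001, random_state=0)

# One pass over the stream, one data point at a time.
for xi, yi in zip(X, y):
    fast.partial_fit(xi.reshape(1, -1), [yi])
    slow.partial_fit(xi.reshape(1, -1), [yi])

# fast.coef_ ends up much closer to true_w than slow.coef_ does;
# the price is that a high rate also tracks noise (and forgets old
# patterns) just as quickly.
```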
Also, one of the problems with online learning is that we have to continuously monitor the data coming into the model, because new data may be bad data. If bad data is fed in, the performance of the model may degrade, so we have to pay attention to that.