Optimizers, parameters and hyper-parameters

When you're starting out in machine learning, some things can be confusing, like the difference between a model parameter and a model hyperparameter, and all the different kinds of optimizers.

Let me demystify them for you...

A model has internal variables, called parameters, that determine how well it predicts. The model learns these values itself during training; we as developers do not set them directly. Hyperparameters, on the other hand, are variables that we developers tune, mostly by trial and error. It is the parameters, not the hyperparameters, that the optimizer will later adjust for us.
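To make the distinction concrete, here is a toy linear regression sketch in plain Python. The weight `w` and bias `b` are parameters the model learns; the learning rate and epoch count are hyperparameters we pick ourselves (the specific values here are just illustrative choices):

```python
# Hyperparameters: chosen by us, often by trial and error.
learning_rate = 0.05
epochs = 1000

# Parameters: learned by the model itself during training.
w, b = 0.0, 0.0

# Tiny dataset following y = 2x + 1
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

for _ in range(epochs):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)  # w approaches 2, b approaches 1
```

Notice that we never touch `w` or `b` by hand; we only decide the hyperparameters, and the training loop does the rest.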

Speaking of optimizers, there are a ton of them in the world of machine learning, and they can confuse beginners. Let me simplify them for you.

Optimizers work together with a loss function to tune the parameters of the model. Their fundamental task is to adjust the parameters so that the value of the loss is minimal, i.e. the error between the prediction and the real value is as small as possible.
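The core idea can be sketched with the simplest optimizer of all, gradient descent, on a one-parameter loss (the quadratic loss and the starting values here are made up for illustration). The optimizer's only job is to nudge the parameter in the direction that makes the loss shrink:

```python
# Minimize loss(p) = (p - 3)**2 with plain gradient descent.
def loss(p):
    return (p - 3) ** 2

def grad(p):
    return 2 * (p - 3)

p = 0.0              # parameter, starting far from the minimum
learning_rate = 0.1  # hyperparameter, chosen by us

history = []
for _ in range(50):
    history.append(loss(p))
    p -= learning_rate * grad(p)  # step against the gradient

print(history[0], history[-1])  # loss falls from 9.0 toward 0
```

Every step moves `p` against the gradient of the loss, so the recorded loss values decrease toward zero as `p` approaches the minimum at 3.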

There are a ton of optimizers in machine learning to choose from. But one general-purpose optimizer that works well in pretty much every project is the "Adam" optimizer. It is a safe default if you're starting out in machine learning.
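For the curious, here is a minimal sketch of the Adam update rule in plain Python, applied to the same toy quadratic loss as before. This is an illustrative hand-rolled version, not a library implementation; the hyperparameter values are the commonly used defaults:

```python
import math

def grad(p):
    return 2 * (p - 3)  # gradient of loss(p) = (p - 3)**2

# Adam hyperparameters (commonly used default values)
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

p = 0.0      # parameter to optimize
m = v = 0.0  # first and second moment estimates

for t in range(1, 501):
    g = grad(p)
    m = beta1 * m + (1 - beta1) * g       # momentum-like running average
    v = beta2 * v + (1 - beta2) * g * g   # running average of squared gradients
    m_hat = m / (1 - beta1 ** t)          # bias correction
    v_hat = v / (1 - beta2 ** t)
    p -= lr * m_hat / (math.sqrt(v_hat) + eps)

print(p)  # p ends up close to the minimum at 3
```

In practice you would not write this by hand; deep learning frameworks ship Adam ready to use, and you simply select it when configuring your model.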