
How to Optimize Hyperparameters for Machine Learning Models

With video explanation | Data Series | Episode 12.1

Mazen Ahmed
5 min read · Sep 18, 2022

This article looks at the various methods used to optimize hyperparameters for a machine learning model; a short code sketch of two of them follows the list below. These are:

  1. Manual Search
  2. Grid Search
  3. Random Search
  4. Bayesian Optimization

We also look at

5. Advantages and Disadvantages of Optimization Methods
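To get a quick sense of what two of these methods look like in practice, here is a minimal sketch using scikit-learn's GridSearchCV and RandomizedSearchCV. The random forest estimator, the dataset, and the hyperparameter ranges are illustrative choices of mine, not ones taken from this article.

# A minimal sketch (assumes scikit-learn is installed) of grid search vs random search.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0)

# Grid search: evaluates every combination of the listed hyperparameter values.
grid = GridSearchCV(model,
                    param_grid={"n_estimators": [50, 100, 200],
                                "max_depth": [3, 5, None]},
                    cv=5)
grid.fit(X, y)
print("Grid search best:", grid.best_params_)

# Random search: samples a fixed number of combinations at random.
rand = RandomizedSearchCV(model,
                          param_distributions={"n_estimators": [50, 100, 200],
                                               "max_depth": [3, 5, None]},
                          n_iter=5, cv=5, random_state=0)
rand.fit(X, y)
print("Random search best:", rand.best_params_)

Grid search tries every combination in the grid, while random search evaluates only a fixed budget of sampled combinations, which is why it scales better when there are many hyperparameters.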

Before looking into these methods, it is important to have a good understanding of what hyperparameters are:

What are Hyperparameters?

A parameter is a value that the machine learning algorithm learns from the data. For example, we may choose a simple linear model of the form:

y = θ₀ + θ₁x

After training the model using gradient descent, we obtain θ₀ = 1.5 and θ₁ = 2. θ₀ and θ₁ are learnt by gradient descent and are therefore just regular parameters.

Hyperparameters are the parameters that control the learning process and the structure of the model. In the above example, to find the values of θ₀ and θ₁ we must first set the learning rate α for gradient descent; α is therefore a hyperparameter.
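To make the distinction concrete, here is a minimal sketch (plain NumPy, with toy data made up purely for illustration) of fitting y = θ₀ + θ₁x with gradient descent. The learning rate α is fixed by us before training, while θ₀ and θ₁ are updated by the algorithm.

import numpy as np

# Toy data roughly following y = 1.5 + 2x (values chosen for illustration only)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.5 + 2.0 * x

alpha = 0.05               # hyperparameter: the learning rate, set by hand before training
theta0, theta1 = 0.0, 0.0  # parameters: learnt during training

for _ in range(5000):
    pred = theta0 + theta1 * x   # current model predictions
    error = pred - y
    grad0 = error.mean()         # gradient of the (halved) mean squared error w.r.t. θ₀
    grad1 = (error * x).mean()   # gradient w.r.t. θ₁
    theta0 -= alpha * grad0      # gradient descent update, scaled by α
    theta1 -= alpha * grad1

print(theta0, theta1)  # approaches 1.5 and 2.0

Changing α does not change what the model is, only how it is trained: too large a value makes the updates diverge, too small a value makes training painfully slow. Trying out different values of α by hand is itself the simplest form of hyperparameter optimization, namely manual search.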
