Python implementation and visualisation of Particle Swarm Optimisation, compared with Gradient Descent. Experiments, observations and conclusions of PSO are discussed. Benchmark functions - Rosenbrock, Rastrigin.


Particle Swarm Optimization vs Gradient Descent

Results -

  • PSO on Rosenbrock
  • PSO on Rastrigin
  • GD on Rosenbrock
  • GD on Rastrigin
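
For reference, the two benchmark functions follow their standard definitions; a minimal NumPy sketch is shown below (function names are illustrative and may differ from the repository code) -

```python
import numpy as np

def rosenbrock(x, y):
    # Standard 2-D Rosenbrock; global minimum 0 at (1, 1).
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def rastrigin(x):
    # Rastrigin for a point x of any dimension; global minimum 0 at the origin.
    x = np.asarray(x)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

print(rosenbrock(1.0, 1.0))      # 0.0
print(rastrigin([0.0, 0.0]))     # 0.0
```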

Experiment 1 -

Running ~50 independent runs of PSO and GD to generate the probability distribution of the final error -

[Figure - PSO vs GD error distribution]
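
The repository's exact experiment code may differ; below is a minimal sketch of the protocol, where `random_search` is only a hypothetical stand-in for a single PSO or GD run, and the error distribution is estimated with a kernel density estimate -

```python
import numpy as np
from scipy.stats import gaussian_kde

def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def random_search(f, dim=2, iters=200, seed=0):
    # Placeholder optimiser; in the actual experiment this would be one PSO or GD run.
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(iters):
        best = min(best, f(rng.uniform(-5.12, 5.12, dim)))
    return best

# ~50 independent runs, then estimate the density of the final error.
errors = np.array([random_search(rastrigin, seed=s) for s in range(50)])
density = gaussian_kde(errors)
xs = np.linspace(errors.min(), errors.max(), 200)
print(xs[np.argmax(density(xs))])    # error value where the density peaks
```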

Observation 1 -

While PSO consistently reaches the global minimum or achieves very low error, Gradient Descent proves ineffective on the benchmarks mentioned.
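
This is not the repository's GD implementation, but a minimal sketch of why plain gradient descent struggles on Rastrigin - started away from the origin, it settles into the nearest local minimum rather than the global one -

```python
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def rastrigin_grad(x):
    # Analytic gradient of the Rastrigin function.
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

x = np.array([3.3, -2.7])            # arbitrary starting point
lr = 1e-3
for _ in range(5000):
    x = x - lr * rastrigin_grad(x)

# GD ends near a local minimum close to integer coordinates, not at the origin.
print(x, rastrigin(x))
```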

Experiment 2 -

Effect of the number of particles on the error in PSO -

[Figure - error vs number of particles]
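
A compact, self-contained PSO sketch (with fixed inertia; the parameter values are assumptions, not the repository's settings) illustrating how the swarm size could be swept on Rastrigin -

```python
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def pso(f, n_particles, dim=2, iters=200, a=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), f(pos)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = a * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = f(pos)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)]
    return pbest_val.min()           # final error (global minimum is 0)

# Sweep the swarm size and report the best error found.
for n in (5, 10, 20, 40):
    print(n, pso(rastrigin, n))
```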

Experiment 3 -

Effect of the inertia parameter ('a') in the velocity update, as stated in the state of the art - linearly decreasing the parameter from 0.9 to 0.4 over the defined iterations.

[Figure - error with the SOTA inertia schedule]
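
A minimal sketch of the inertia-weighted velocity update with the linear 0.9 to 0.4 schedule (the coefficients c1 = c2 = 2.0 are assumed common defaults, not necessarily the repository's values) -

```python
import numpy as np

def inertia(t, t_max, a_start=0.9, a_end=0.4):
    # Linearly decrease the inertia parameter 'a' from 0.9 to 0.4 over the run.
    return a_start - (a_start - a_end) * t / t_max

# One velocity/position update for a single particle (illustrative values only).
rng = np.random.default_rng(0)
pos = rng.uniform(-5.0, 5.0, 2)
vel = np.zeros(2)
pbest = pos.copy()                   # personal best
gbest = np.zeros(2)                  # swarm's global best
c1 = c2 = 2.0                        # assumed acceleration coefficients

t, t_max = 10, 100
a = inertia(t, t_max)
r1, r2 = rng.random(2), rng.random(2)
vel = a * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
pos = pos + vel
print(a, pos)
```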

Therefore, we observe that while the error reduces with the SOTA parameters, the difference is not drastic.

Created by -

  1. Dhruv Rathi
  2. Justus Erker
  3. Caio Guirado
