
Perceptron implementation using scikit-learn and NumPy

  1. Using Scikit-learn

Scikit-learn provides a built-in Perceptron model as sklearn.linear_model.Perceptron. It is an efficient implementation of Rosenblatt's original perceptron algorithm with modern additions such as a configurable learning rate, optional regularization, and a convergence tolerance. You use it through the same fit/predict interface as logistic regression or an SVM from sklearn; a minimal usage sketch follows the feature list below.

👉 Features:

Supports online learning (with partial_fit).

Works with dense and sparse feature matrices.

Useful for quick prototyping and baseline models.
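A minimal sketch of the scikit-learn route, assuming a small synthetic, linearly separable dataset; the toy data, eta0, tol, and batch size below are illustrative choices, not taken from this repository:

    import numpy as np
    from sklearn.linear_model import Perceptron
    from sklearn.model_selection import train_test_split

    # Illustrative, linearly separable toy data: the class is decided by x0 + x1 > 1.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # eta0 is the learning rate, tol the stopping tolerance; both values are illustrative.
    clf = Perceptron(eta0=1.0, tol=1e-3, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))

    # Online learning: feed the data in mini-batches via partial_fit.
    clf_online = Perceptron(eta0=1.0, random_state=0)
    classes = np.unique(y)
    for start in range(0, len(X_train), 50):
        batch = slice(start, start + 50)
        clf_online.partial_fit(X_train[batch], y_train[batch], classes=classes)
    print("online test accuracy:", clf_online.score(X_test, y_test))

Note that partial_fit needs the full set of class labels on its first call, since the model never sees the whole dataset at once.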

  2. Using NumPy (from scratch)

Building a perceptron manually with NumPy makes the math and the learning process explicit (a from-scratch sketch follows the steps below):

Initialize weights (random or zeros).

Compute weighted sum (z = w·x + b).

Apply step function (activation) to decide output (0/1).

Update the weights with the perceptron learning rule: w ← w + η (y − ŷ) x and b ← b + η (y − ŷ), where η is the learning rate, y the true label, and ŷ the predicted label.

Iterate over dataset for multiple epochs until convergence.
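A from-scratch sketch of those steps, assuming binary labels in {0, 1}; the class name, hyperparameters, and toy data are my own illustrative choices, not code from this repository:

    import numpy as np

    class NumpyPerceptron:
        """Rosenblatt perceptron for binary labels in {0, 1}."""

        def __init__(self, learning_rate=0.1, epochs=50):
            self.lr = learning_rate
            self.epochs = epochs
            self.w = None
            self.b = 0.0

        def _step(self, z):
            # Step activation: 1 if the weighted sum is non-negative, else 0.
            return np.where(z >= 0.0, 1, 0)

        def fit(self, X, y):
            n_samples, n_features = X.shape
            self.w = np.zeros(n_features)  # zero initialization; random also works
            self.b = 0.0
            for _ in range(self.epochs):
                errors = 0
                for xi, yi in zip(X, y):
                    y_hat = self._step(np.dot(self.w, xi) + self.b)
                    update = self.lr * (yi - y_hat)   # perceptron learning rule
                    self.w += update * xi
                    self.b += update
                    errors += int(update != 0.0)
                if errors == 0:  # converged: every sample classified correctly
                    break
            return self

        def predict(self, X):
            return self._step(X @ self.w + self.b)

    # Illustrative usage on the same kind of toy data as above.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

    model = NumpyPerceptron(learning_rate=0.1, epochs=50).fit(X, y)
    print("training accuracy:", (model.predict(X) == y).mean())

The early stop when an epoch produces no updates mirrors the convergence check described in the steps above: on linearly separable data the algorithm is guaranteed to reach that point.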

This version is slower than scikit-learn's, but it shows exactly how a perceptron learns to classify linearly separable data.

In short: use scikit-learn if you want speed, efficiency, and easy integration into ML pipelines; use the NumPy version if you want to learn the fundamentals and watch the weights evolve step by step.
