# comp380-perceptron

## Description of Implementation

The perceptron algorithm consists of two main phases: learning/training and testing/deployment. The program prompts the user for the name of the network training set and handles pattern classification problems; after training, it conducts experiments by deploying the trained network on multiple data sets with matching input dimensions.

First, the program sets the weights to either zero or random values in [-0.5, 0.5], based on user specification, through the initWeights() method, and writes them to a file using the writeWeights() method. It then gets the maximum number of epochs, the learning rate alpha, the activation threshold theta, and the threshold for measuring weight changes from the user.

The learning algorithm iterates through the training inputs and adjusts the weights until the weighted sum of each input pattern yields the expected output. If one pass does not adjust the weights enough to achieve that, it iterates over all the training data again (i.e., further epochs). When the weights change by less than the threshold entered by the user, the model is considered trained.

After training has converged, testing begins. The user specifies the deployment data file name and a file name for saving the classification results. The program prints the number of samples that were classified correctly and the overall accuracy for the testing set, and writes the actual and predicted classification of each sample to the results file. During deployment, the network computes outputs as weighted sums of the inputs passed through the threshold activation; with a well-chosen learning rate and theta, it produces the expected outputs and even tolerates some degree of noise in the data. Both phases are illustrated in the sketches below.
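
The following is a minimal Java sketch of the training phase just described. It is illustrative only: the class and method names (PerceptronSketch, activate, train) are hypothetical stand-ins rather than the repository's actual API, and it assumes bipolar (+1/-1) inputs and targets with a single output unit.

```java
import java.util.Random;

// Hypothetical stand-in for the trainer described above; names and
// signatures are illustrative, not the repository's actual API.
public class PerceptronSketch {
    double[] weights;  // one weight per input pixel
    double bias;
    double alpha;      // learning rate
    double theta;      // activation threshold

    PerceptronSketch(int inputs, boolean randomInit, double alpha, double theta) {
        this.alpha = alpha;
        this.theta = theta;
        weights = new double[inputs];
        if (randomInit) {  // user-specified choice, mirroring initWeights()
            Random rng = new Random();
            for (int i = 0; i < inputs; i++) weights[i] = rng.nextDouble() - 0.5;
            bias = rng.nextDouble() - 0.5;
        }  // otherwise weights and bias stay at zero
    }

    // Bipolar activation with a dead zone of width 2*theta around zero.
    int activate(double net) {
        if (net > theta) return 1;
        if (net < -theta) return -1;
        return 0;
    }

    // Weighted sum of the inputs plus the bias.
    double net(double[] x) {
        double sum = bias;
        for (int i = 0; i < weights.length; i++) sum += weights[i] * x[i];
        return sum;
    }

    // Train until the largest weight change in an epoch falls below
    // weightChangeThreshold or maxEpochs is reached; returns epochs used.
    int train(double[][] patterns, int[] targets, int maxEpochs, double weightChangeThreshold) {
        for (int epoch = 1; epoch <= maxEpochs; epoch++) {
            double maxChange = 0.0;
            for (int p = 0; p < patterns.length; p++) {
                if (activate(net(patterns[p])) != targets[p]) {
                    for (int i = 0; i < weights.length; i++) {
                        double delta = alpha * targets[p] * patterns[p][i];
                        weights[i] += delta;
                        maxChange = Math.max(maxChange, Math.abs(delta));
                    }
                    bias += alpha * targets[p];
                    maxChange = Math.max(maxChange, Math.abs(alpha * targets[p]));
                }
            }
            if (maxChange < weightChangeThreshold) return epoch;  // converged
        }
        return maxEpochs;
    }
}
```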

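A deployment pass in the same spirit might look like the sketch below. It reuses the hypothetical PerceptronSketch class above, and the results-file line format shown is an assumption rather than the program's actual output format.

```java
import java.io.IOException;
import java.io.PrintWriter;

// Illustrative deployment pass: classify each test pattern, record
// actual vs. predicted labels in a results file, and return accuracy.
public class DeploySketch {
    static double deploy(PerceptronSketch model, double[][] patterns,
                         int[] actual, String resultsFile) throws IOException {
        int correct = 0;
        try (PrintWriter out = new PrintWriter(resultsFile)) {
            for (int p = 0; p < patterns.length; p++) {
                int predicted = model.activate(model.net(patterns[p]));
                if (predicted == actual[p]) correct++;
                // Hypothetical line format for the user-specified results file.
                out.printf("sample %d: actual=%d output=%d%n", p, actual[p], predicted);
            }
        }
        return (double) correct / patterns.length;  // overall accuracy
    }
}
```
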
## Report on Experiments

We created three testing sets for our experiments: low (L1), medium (M1), and high (H1) noise-interference input patterns. The medium noise-interference file has three additional incorrect pixels for each letter relative to the low noise-interference file, and the high noise-interference file has six. Instead of testing cases individually, we wrote a driver, Tester.java, that runs the program over every permutation of the user-input variables and prints the result of each of the more than 800 combinations. (See Appendix A to view a sample of 100 cases of this printout.) We wanted to analyze how much each factor influenced the number of epochs needed to converge as well as the number of samples classified correctly, with the goal of optimizing each variable so the perceptron program is as effective as possible at pattern classification. A sketch of such a sweep appears below.
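
As a rough illustration of what such a sweep looks like, the sketch below iterates over a grid of candidate values for each user-input variable and reports epochs-to-convergence and accuracy for every combination. The value grids and the tiny bipolar AND data set are placeholders, not the letter patterns or the actual variable ranges used in our experiments.

```java
// Illustrative parameter sweep in the spirit of Tester.java. The grids and
// the toy bipolar AND data below are placeholders for the real letter
// patterns and the variable ranges used in the experiments.
public class SweepSketch {
    public static void main(String[] args) throws Exception {
        double[] alphas = {0.1, 0.5, 1.0};
        double[] thetas = {0.0, 0.1, 0.5};
        double[] changeThresholds = {0.001, 0.01};
        boolean[] randomInit = {false, true};

        // Toy stand-in data: bipolar AND (four 2-pixel "patterns").
        double[][] x = {{1, 1}, {1, -1}, {-1, 1}, {-1, -1}};
        int[] t = {1, -1, -1, -1};

        for (double a : alphas)
            for (double th : thetas)
                for (double c : changeThresholds)
                    for (boolean r : randomInit) {
                        PerceptronSketch model = new PerceptronSketch(x[0].length, r, a, th);
                        int epochs = model.train(x, t, 100, c);
                        // Reuses the toy data for testing; each run overwrites results.txt.
                        double acc = DeploySketch.deploy(model, x, t, "results.txt");
                        System.out.printf("alpha=%.2f theta=%.2f thr=%.3f rand=%b -> epochs=%d acc=%.2f%n",
                                a, th, c, r, epochs, acc);
                    }
    }
}
```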
