
Smart_ATM

We won the grand prize at the AI hub competition!🥇

Description

We propose a Smart ATM model that can prevent damage from face-to-face voice-phishing fraud at ATMs.

  • First, we compute a risk score from the user's personal withdrawal history, bank loans, insurance loans, and card-loan information.

  • Second, a YOLOv5-based detector finds masks, hands, and faces to determine whether the user is wearing a mask and whether they are making a phone call.

  • Third, an EfficientNet-b4-based classifier recognizes facial expressions to determine whether the user appears embarrassed or anxious.

  • Finally, the risk scores from each stage are summed to decide whether voice phishing is taking place.
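The final aggregation step above can be sketched as a weighted sum with a decision threshold. This is only an illustrative sketch: the weights, threshold, and function names below are assumptions, not the values or code used in this repository.

```python
# Illustrative sketch of summing per-stage risk scores into a final
# voice-phishing decision. Weights and threshold are assumed values.

def total_risk(account_risk: float, behavior_risk: float, expression_risk: float,
               weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three per-stage risk scores (each assumed in [0, 1])."""
    w1, w2, w3 = weights
    return w1 * account_risk + w2 * behavior_risk + w3 * expression_risk

def is_voice_phishing(risk: float, threshold: float = 1.5) -> bool:
    """Flag the transaction when the summed risk exceeds the threshold."""
    return risk > threshold

score = total_risk(0.8, 0.6, 0.4)
print(round(score, 2), is_voice_phishing(score))  # 1.8 True
```

In practice the weights and threshold would be tuned on labeled incident data rather than fixed by hand.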

A more detailed description can be found through this link

Our short paper: link

Members (Pishing Hunter)


김영민

곽윤경

여지민

양동재

Environment

  • Ubuntu 18.04.5 LTS, Tesla V100-SXM2 32GB

Dataset

Model

We use YOLOv5 + EfficientNet-b4. YOLOv5 is a real-time object-detection model that detects objects quickly; EfficientNet-b4 is a classification model that achieves high accuracy with a relatively small number of parameters.
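In a two-stage pipeline like this, the detector's face box is typically cropped out of the video frame and handed to the expression classifier. A minimal NumPy sketch of that handoff; the frame size and box coordinates are made up for illustration, and the actual repository code may crop differently:

```python
import numpy as np

def crop_box(frame: np.ndarray, xyxy) -> np.ndarray:
    """Crop a detected bounding box (x1, y1, x2, y2) from an H x W x 3 frame,
    clamping coordinates to the image bounds, as one would do before feeding
    a face crop to a classifier such as EfficientNet-b4."""
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = (int(v) for v in xyxy)
    x1, y1 = max(0, x1), max(0, y1)
    x2, y2 = min(w, x2), min(h, y2)
    return frame[y1:y2, x1:x2]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy video frame
face = crop_box(frame, (100, 50, 300, 250))      # hypothetical detection box
print(face.shape)  # (200, 200, 3)
```

The crop would then be resized to the classifier's input resolution before inference.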

Our Model Flow chart

How to run (Demo)

$ git clone https://github.com/winston1214/Smart_ATM.git && cd Smart_ATM
$ pip install -r requirements.txt
$ python detect.py --source ${VIDEO_PATH} --weights weights/detection_best.pt --facial-weights-file weights/facial_best.pt --id ${user number}

Training

Object Detection Training

This follows the standard YOLOv5 training procedure.

Facial Recognition

  1. Set up the dataset (images) → crop the face region from each person's image.
  2. Set up the dataset (labels) → make a CSV file with two columns (image name, label). ※ Labels are set as [normal: 0, danger: 1]
  3. Change directory: $ cd facial_recognition
  4. Run:
$ python facial_train.py --root ${image root} --csv ${label csv} --batch ${batch size} --epochs ${number of epochs} --lr ${learning rate}
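The label CSV from step 2 can be generated with a short script. The directory layout assumed below (a `danger` subfolder for label 1, everything else label 0) and the function name are illustrative assumptions; adapt them to however your cropped face images are organized.

```python
import csv
from pathlib import Path

def write_label_csv(image_root: str, csv_path: str, danger_dir: str = "danger"):
    """Write a two-column CSV (image name, label) for facial_train.py.
    Images under a subfolder named `danger_dir` get label 1, all others 0
    (an assumed layout, not mandated by the repository)."""
    rows = []
    for img in sorted(Path(image_root).rglob("*.jpg")):
        label = 1 if img.parent.name == danger_dir else 0
        rows.append((img.name, label))
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "label"])  # header: image name, label
        writer.writerows(rows)
    return rows
```

The resulting file is then passed to `facial_train.py` via the `--csv` flag.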
  • Train loss

  • Validation Accuracy

Output

Public._.mp4

If the output video does not play, click here to watch it.

About

NIA Idea Challenge - Pishing Hunter
