
Commit 92d1043

Merge pull request #1 from avinashkranjan/master
Updating the repository
2 parents e41b2c3 + 9847298 commit 92d1043

File tree

28 files changed: +2153 -2 lines changed

Air pollution prediction/CodeAP.py

Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
import requests
import matplotlib.pyplot as plt

city = input("Enter your city: ")
url = 'http://api.waqi.info/feed/' + city + '/?token='
api_key = input("Enter your API key: ")

main_url = url + api_key
r = requests.get(main_url)
data = r.json()['data']
aqi = data['aqi']
iaqi = data['iaqi']


# Print every pollutant reported for the city
for i in iaqi.items():
    print(i[0], ':', i[1]['v'])

dew = iaqi.get('dew', 'Nil')
no2 = iaqi.get('no2', 'Nil')
o3 = iaqi.get('o3', 'Nil')
so2 = iaqi.get('so2', 'Nil')
pm10 = iaqi.get('pm10', 'Nil')
pm25 = iaqi.get('pm25', 'Nil')

print(f'{city} AQI :', aqi, '\n')
print('Individual Air quality')
print('Dew :', dew)
print('no2 :', no2)
print('Ozone :', o3)
print('sulphur :', so2)
print('pm10 :', pm10)
print('pm25 :', pm25)

pollutants = [i for i in iaqi]
values = [i['v'] for i in iaqi.values()]

# Explode the slice with the largest value
explode = [0 for i in pollutants]
mx = values.index(max(values))
explode[mx] = 0.1

# Plot a pie chart
plt.figure(figsize=(8, 6))
plt.pie(values, labels=pollutants, explode=explode, autopct='%1.1f%%', shadow=True)

plt.title(f'Air pollutants and their probable amount in atmosphere [{city}]')

plt.axis('equal')
plt.show()

Air pollution prediction/README.md

Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
# Air pollution Prediction
_______________________________________________________________________

## What is it?

This script fetches live air-quality data for a given city from the WAQI API and plots the pollutant levels graphically.

## TechStack

1. Modules used:

- requests
- matplotlib

## How to use?

You can clone the repository directly from

```https://github.com/Pranjal-2001/Amazing-Python-Scripts.git```

## To get your API

[Click here to get your API](https://waqi.info/)

[Click here to get your API key](https://aqicn.org/data-platform/token/#/)
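
For reference, a minimal sketch of the request that `CodeAP.py` builds (the city and token below are placeholders; the fields read are the ones the script uses):

```python
import requests

city = "kanpur"            # placeholder city
token = "YOUR_API_TOKEN"   # obtain from https://aqicn.org/data-platform/token/#/

# Same endpoint CodeAP.py assembles: city feed, authenticated by token
r = requests.get(f"http://api.waqi.info/feed/{city}/?token={token}")
data = r.json()["data"]

print(data["aqi"])                                   # overall AQI for the city
print({k: v["v"] for k, v in data["iaqi"].items()})  # per-pollutant values
```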

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ $ git add .
- To commit, give a descriptive message for the convenience of the reviewer by:
```
# This message gets associated with all files you have changed
- $ git commit -m 'message
+ $ git commit -m "message"
```
- **NOTE**: A PR should have only one commit. Multiple commits should be squashed.

## Step 6 : Work Remotely

Num-Plate-Detector/README.md

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
# Number plate detection

## Methodology:

The vehicle's license plate is detected using the OpenCV image processing library, and the text on the plate is read with the Tesseract OCR tool through Python.

To locate the license plate we rely on the fact that a vehicle's plate is rectangular. So, after processing the image, we look for a contour that can be approximated by four points and treat it as the license plate of the vehicle.

</br>

## Dependencies
- OpenCV
- Numpy and Pandas
- [Tesseract 4](https://github.com/tesseract-ocr/tesseract/wiki)

</br>

## Preprocessing

Grayscale images are much easier to work with: in many morphological operations and image segmentation problems it is simpler to process a single-layered (grayscale) image than a three-layered (RGB colour) image.

</br>

After grayscaling we blur the image to reduce background noise. A bilateral filter is used because it removes noise well while preserving, and even strengthening, the edges when working with a single-layered image.

</br>

After blurring we perform edge detection. Canny edge detection follows a series of steps and is a very powerful method; we use it to extract the edges from the blurred image because of its optimal results, well-defined edges, and accurate detection.

</br>

After finding edges we extract contours from the edged image. There are two commonly used contour retrieval modes: `cv2.RETR_LIST`, which retrieves all the contours in an image, and `cv2.RETR_EXTERNAL`, which retrieves only the external or outer contours. There are also two approximation methods. `cv2.CHAIN_APPROX_NONE` stores all the boundary points, but we don't necessarily need all of them: if the points form a straight line, only its start and end points matter. `cv2.CHAIN_APPROX_SIMPLE` instead stores just those start and end points, resulting in much more efficient storage of contour information.

</br>

After finding contours we sort them. Sorting contours by area is quite useful in image processing: it lets us discard small, noisy contours and keep the large ones that are likely to contain the vehicle's number plate (see the sketch below).

</br>
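
A condensed sketch of this preprocessing pipeline, mirroring what `number_plate.py` in this commit does (the bilateral-filter and Canny thresholds are the script's own values, and `car.jpeg` is the sample image shipped with it):

```python
import cv2
import imutils

# Load and resize the sample image so contour sizes are predictable
image = imutils.resize(cv2.imread("car.jpeg"), width=500)

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # single channel is easier to segment
gray = cv2.bilateralFilter(gray, 11, 17, 17)     # reduce noise but keep edges sharp
edged = cv2.Canny(gray, 170, 200)                # extract well-defined edges

# Keep only the 30 largest contours; the plate should be among them
cnts, _ = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:30]
```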

## Detecting Plate

After sorting the contours we create a variable (`NumberPlateCnt`) initialised to `None`, meaning no plate has been found yet. We then iterate over the sorted contours, from largest to smallest, since the number plate should be among the larger ones. For each contour we compute its perimeter and use `cv2.approxPolyDP()` to approximate it and count its sides.

</br>

`cv2.approxPolyDP()` takes three parameters. The first is the individual contour we wish to approximate. The second is the approximation accuracy, which determines how closely the approximation follows the contour: small values give precise approximations, large values give more generic ones, and a good rule of thumb is less than 5% of the contour perimeter. The third is a Boolean value stating whether the approximated contour should be closed or open.

</br>

Contour approximation replaces a contour with a simpler shape that has fewer vertices, within the tolerance specified; a precision of 0.02 times the perimeter is what worked here.

</br>
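
A short sketch of that detection loop, again mirroring `number_plate.py` and continuing from the `cnts` list in the preprocessing sketch above:

```python
NumberPlateCnt = None  # no plate found yet

for c in cnts:                                        # largest contours first
    peri = cv2.arcLength(c, True)                     # perimeter of the closed contour
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)   # simplify within 2% of the perimeter
    if len(approx) == 4:                              # four corners -> treat it as the plate
        NumberPlateCnt = approx
        break
```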

## Usage

- `data.csv` contains the date, time and the recognised vehicle license number.
- Tesseract is the library that provides the OCR engine.
- Tesseract supports more than a hundred languages; if you want to change the language (see the snippet after the run command below):
  - Open `number_plate.py` and edit the `config` variable.
  - In the `config` variable, change **'eng'** to your preferred language.
- Use Tesseract 4, as this is the latest version and uses an LSTM network.

</br>

Run the following:

    python number_plate.py
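
If you change the OCR language, only the `-l` part of the `config` string in `number_plate.py` changes; a hypothetical example (assuming the corresponding Tesseract language data, here German `deu`, is installed):

```python
# Default used by number_plate.py: English
config = ('-l eng --oem 1 --psm 3')

# Hypothetical: German, provided Tesseract's 'deu' traineddata is installed
config = ('-l deu --oem 1 --psm 3')
```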

Num-Plate-Detector/car.jpeg

7.74 KB

Num-Plate-Detector/number_plate.py

Lines changed: 84 additions & 0 deletions
@@ -0,0 +1,84 @@
# importing libraries
import numpy as np
import cv2
import imutils
import sys
import pytesseract
import pandas as pd
import time

image = cv2.imread(input("Enter the path of image: "))

image = imutils.resize(image, width=500)

pytesseract.pytesseract.tesseract_cmd = input("Enter the path of tesseract in your local system : ")

# displaying the original image
cv2.imshow("Original Image", image)

# converting it into gray scale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# cv2.imshow("1 - Grayscale Conversion", gray)

# bilateral filter: reduce noise while preserving edges
gray = cv2.bilateralFilter(gray, 11, 17, 17)
# cv2.imshow("2 - Bilateral Filter", gray)

# canny edge detector
edged = cv2.Canny(gray, 170, 200)
# cv2.imshow("4 - Canny Edges", edged)

"""
There are three arguments in the cv2.findContours() function: the first is the source image,
the second is the contour retrieval mode, the third is the contour approximation method.

If you pass cv2.CHAIN_APPROX_NONE, all the boundary points are stored.
But do we actually need all the points? For example, you found the contour of a straight line.
Do you need all the points on the line to represent that line?
No, we need just the two end points of that line.
This is what cv2.CHAIN_APPROX_SIMPLE does.
It removes all redundant points and compresses the contour, thereby saving memory.
"""

cnts, _ = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# contour area is given by the function cv2.contourArea()
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:30]
NumberPlateCnt = None

for c in cnts:
    """
    The contour perimeter is also called arc length. It can be found using the cv2.arcLength()
    function. The second argument specifies whether the shape is a closed contour (if passed
    True) or just a curve.
    """
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        NumberPlateCnt = approx
        break

# Bail out cleanly if no four-sided contour was found
if NumberPlateCnt is None:
    sys.exit("Could not detect a number plate in the image.")

# Masking the part other than the number plate
mask = np.zeros(gray.shape, np.uint8)
new_image = cv2.drawContours(mask, [NumberPlateCnt], 0, 255, -1)
new_image = cv2.bitwise_and(image, image, mask=mask)
cv2.namedWindow("Final_image", cv2.WINDOW_NORMAL)
cv2.imshow("Final_image", new_image)

# Configuration for tesseract
config = ('-l eng --oem 1 --psm 3')

# Run tesseract OCR on the masked image
text = pytesseract.image_to_string(new_image, config=config)

# Data is stored in a CSV file
raw_data = {'date': [time.asctime(time.localtime(time.time()))],
            'v_number': [text]}

df = pd.DataFrame(raw_data, columns=['date', 'v_number'])
df.to_csv('data.csv')

# Print recognized text
print(text)

cv2.waitKey(0)

PDF2Text/Readme.md

Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@
# <b>PDF2Text</b>

[![forthebadge](https://forthebadge.com/images/badges/made-with-python.svg)](https://forthebadge.com)

## PDF2Text Functionalities : 🚀

- Converts a PDF file to a text file

## PDF2Text Instructions: 👨🏻‍💻

### Step 1:

Open Terminal 💻

### Step 2:

Navigate to the directory where the Python file is located 📂

### Step 3:

Run the command: `python script.py` / `python3 script.py` 🧐

### Step 4:

Sit back and relax. Let the script do the job. ☕

## Requirements

- PyPDF2
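
For reference, the core of `script.py` comes down to the following PyPDF2 calls (this is the legacy `PdfFileReader`/`extractText` API the script was written against; `sample.pdf` and the page count are placeholders):

```python
import PyPDF2

reader = PyPDF2.PdfFileReader("sample.pdf")   # placeholder path

# Write each requested page to its own text file
for i in range(2):                            # e.g. the first two pages
    text = reader.getPage(i).extractText()
    with open(f"text{i}.txt", "w") as f:
        f.write(text)
```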

## DEMO

1) Select the PDF file

![Screenshot (127)](https://user-images.githubusercontent.com/60662775/112711916-ff837580-8ef1-11eb-998b-1c96fec1de2f.png)

2) Place the PDF file in the script folder

![Screenshot (128)](https://user-images.githubusercontent.com/60662775/112711924-12964580-8ef2-11eb-8aec-ef33fb3d19e1.png)

3) Now open cmd

![Screenshot (129)](https://user-images.githubusercontent.com/60662775/112711947-41142080-8ef2-11eb-80bb-71539b301b4e.png)

4) Enter the inputs: the PDF file path and the number of pages

![Screenshot (131)](https://user-images.githubusercontent.com/60662775/112711986-846e8f00-8ef2-11eb-9cbd-cc6dc204b6b3.png)

5) The PDF file will be converted to a text file (OUTPUT)

![Screenshot (132)](https://user-images.githubusercontent.com/60662775/112712000-92bcab00-8ef2-11eb-9191-252d6e6c526d.png)

## Author

Amit Kumar Mishra

PDF2Text/script.py

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
import PyPDF2

# Ask for the source PDF and how many pages to convert
pdf = input("Enter the path of PDF file: ")
n = int(input("Enter number of pages: "))

page = PyPDF2.PdfFileReader(pdf)
for i in range(n):
    # Extract the text of page i and write it to its own .txt file
    st = page.getPage(i).extractText()

    with open(f'./PDF2Text/text{i}.txt', 'w') as f:
        f.write(st)

Paint Application/README.md

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
# Paint Application
Running this script opens a Paint application GUI on which the user can draw using the brush tool. The brush colour and size can be changed, and the paint canvas can be saved to the file system as well.

## Setup instructions
In order to run this script, you need to have Python and pip installed on your system. After installing Python and pip, run the following command from your terminal in the project folder (directory) to install the requirements.
```
pip install -r requirements.txt
```

After satisfying all the requirements for the project, open the terminal in the project folder and run
```
python paint.py
```
or
```
python3 paint.py
```
depending on your Python version. Make sure that you run the command from the same virtual environment in which the required modules are installed.

## Output
![Sample pic of the Paint Application](https://i.postimg.cc/htft5W7N/paint.png)

## Author
[Ayush Jain](https://github.com/Ayushjain2205)
