crawl

Make a Document out of it.

virtualenv : a tool that keeps the dependencies required by different projects separate by creating an isolated Python environment for each of them. It is one of the most widely used tools among Python developers.

pyenv : a Python version manager that installs multiple interpreter versions (e.g. 2.7, 3.6, 3.7) side by side and lets you switch between them.
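For example, a typical pyenv session (the version numbers here are only illustrative) looks like:

$ pyenv install 3.7.4      # download and build that interpreter
$ pyenv versions           # list the interpreters pyenv manages
$ pyenv global 3.7.4       # make 3.7.4 the default python
$ pyenv local 2.7.18       # pin the current project directory to 2.7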

Instructions to create the project directory layout:

$ mkdir crawl
$ cd crawl
crawl $ vi requirements.txt
crawl $ mkdir lib
crawl $ cd lib
lib $ vi settings.yaml
lib $ vi handler.py
lib $ cd ..
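The repository does not show what goes into settings.yaml, so the following is only a minimal sketch of what a crawler config like this might hold; the key names (base_url, output_file, request) are assumptions, not the project's actual schema:

# settings.yaml -- hypothetical example, key names are assumptions
base_url: "https://nclt.gov.in/exposed-pdf-cause-list-page"
output_file: "home.html"
request:
  verify_ssl: false
  timeout: 30

handler.py could then load it with yaml.safe_load(open('settings.yaml')) once pyyaml is installed (next step).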

crawl $ virtualenv ENV
crawl $ source ENV/bin/activate
crawl $ pip install requests
crawl $ pip install pyyaml
crawl $ pip freeze > requirements.txt

pip install -r requirements.txt installs every package listed in requirements.txt. Once pip freeze has written requests and pyyaml (with their versions) into that file, there is no need to run pip install requests or pip install pyyaml again on a fresh setup; installing from requirements.txt alone is enough.
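After pip freeze, requirements.txt will look roughly like this; the exact version numbers depend on when you installed the packages, so treat these as placeholders (requests also pulls in certifi, chardet, idna and urllib3):

PyYAML==5.1.2
certifi==2019.9.11
chardet==3.0.4
idna==2.8
requests==2.22.0
urllib3==1.25.6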

pip list : shows the installed packages and their versions.
In the website page: open the browser's web developer tools and use the Network tab to inspect the requests the page makes.

Requests : with Requests you don't have to manually add query strings to URLs or form-encode your POST data (don't worry if that makes no sense yet; it will in due time). Requests lets you send HTTP/1.1 requests from Python: you can attach headers, form data, multipart files, and query parameters using simple Python data structures, and you access the response data just as easily.
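A minimal sketch of those features (the URL is the same NCLT page used later; the header and parameter values are placeholders):

import requests

# headers and query-string parameters are passed as plain dicts;
# requests encodes them and builds the final URL for you
response = requests.get(
    'https://nclt.gov.in/exposed-pdf-cause-list-page',
    params={'page': 1},                      # placeholder query parameter
    headers={'User-Agent': 'crawl-example'}, # placeholder header
    verify=False,                            # same flag used in handler.py below
)
print(response.status_code)                  # e.g. 200
print(response.headers['Content-Type'])      # response metadata
html = response.content                      # raw bytes of the page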

So, to request a response from the server, there are mainly two methods:

GET : requests data from the server.
    The data travels in the URL's query string, so its length is limited (on the order of 1,024 characters, depending on the browser and server).
    The URL changes whenever you change values on the website.

POST : submits data to be processed by the server.
    The more secure option, because the data travels in the request body instead of the URL.
    No practical limit on the payload size.
    The URL stays the same while the submitted values change the page.
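A short sketch of that difference, using requests' Request/prepare so nothing is actually sent over the network (the example.com URLs and the q field are placeholders):

from requests import Request

# GET: the data rides in the URL's query string, visible and length-limited
get_req = Request('GET', 'https://example.com/search', params={'q': 'cause list'}).prepare()
print(get_req.url)    # https://example.com/search?q=cause+list

# POST: the data rides in the request body, so the URL does not change
post_req = Request('POST', 'https://example.com/submit', data={'q': 'cause list'}).prepare()
print(post_req.url)   # https://example.com/submit  (no query string added)
print(post_req.body)  # q=cause+list  (form-encoded body)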

Scraping with BeautifulSoup
import requests
from bs4 import BeautifulSoup

## Target listing page:
## https://nclt.gov.in/pdf-cause-list?field_bench_target_id=5366&field_bench_court_target_id_entityreference_filter=5386

## Website GET request (run once to download the page)
#html = requests.get('https://nclt.gov.in/exposed-pdf-cause-list-page', verify=False).content

""" FILE WRITE (save the downloaded page locally)
file_write = open('home.html', 'wb')
file_write.write(html)
file_write.close()
"""

## File read operation: parse the cached copy instead of re-fetching
content = open('home.html', 'r').read()

## File parser
soup = BeautifulSoup(content, 'html.parser')

## Class of both
#print(type(soup))
#print(type(content))

## Pick the bench <select> element and walk its <option> entries
select = soup.find('select', {"name": "field_bench_target_id"})
for record in select.find_all('option'):
    if 'Any' not in str(record):        # skip the "- Any -" placeholder option
        branch_name = record.text.strip()
        branch_value = record.get('value')
        print(branch_name)
        print(branch_value)
        #print(record)
print('')
print('**' * 20)
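Note that both the requests.get call and the file-write block are commented out: the idea is to download home.html once, then keep parsing the cached local copy so repeated runs do not hit the NCLT server. Uncomment those two blocks on the first run to create home.html, then comment them out again.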
