
Genexus Server 16 web crawler that collects commit data, enabling an automated tool to list the changes between the development branch and the production branch for version generation.


johnnessantos/gxcrawler


In CMD, run these commands:

python -m venv env
env\Scripts\activate
pip install -r requirements.txt

SQLite is used to store the crawl results, so the storage structure must be created first by running:

python gxcrawler/database.py
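
For reference, here is a minimal sketch of the kind of schema-creation script gxcrawler/database.py might contain. The commits table and its columns are illustrative assumptions, not the project's actual schema:

import sqlite3

# Create the SQLite database file with a table for captured commit data.
# Table and column names here are assumptions for illustration only.
connection = sqlite3.connect('database.db')
connection.execute(
    '''CREATE TABLE IF NOT EXISTS commits (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           author TEXT,
           comment TEXT,
           commit_date TEXT
       )'''
)
connection.commit()
connection.close()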

Getting Started

To use environment variables on Windows, execute these commands in CMD:

set GX_USER=user
set GX_PASSWORD=password
set GX_URL=url
set GX_KBNAME=kbname

Alternatively, use a configuration file at .\gxcrawler\.env containing your credentials:

GX_USER=user
GX_PASSWORD=password
GX_URL=url
GX_KBNAME=kbname

Attention: be careful not to expose these credentials in code repositories such as GitLab or GitHub; add the .env file to .gitignore (or its equivalent).
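
As a sketch of how the crawler could read this configuration, assuming python-dotenv is among the dependencies in requirements.txt (the GX_* variable names come from this README; the loading code itself is an assumption, not the project's actual implementation):

import os

from dotenv import load_dotenv  # python-dotenv, assumed to be in requirements.txt

# Load .\gxcrawler\.env if present; variables already set in the
# environment take precedence over values in the file.
load_dotenv(os.path.join('gxcrawler', '.env'))

GX_USER = os.environ['GX_USER']
GX_PASSWORD = os.environ['GX_PASSWORD']
GX_URL = os.environ['GX_URL']
GX_KBNAME = os.environ['GX_KBNAME']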

With Python installed, run the following script to start the crawler; the results will be written to database.db, the SQLite database file in the project directory (a virtual environment is recommended):

import datetime

from process import Process

process = Process()

# Capture commit data one day at a time, from 1 November 2020 until today.
date = datetime.datetime(2020, 11, 1, 0, 0, 0, 0)
while date <= datetime.datetime.today():
    process.capture_data(date, date)
    date += datetime.timedelta(days=1)
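
After a run completes, the captured rows can be inspected with Python's built-in sqlite3 module. The commits table and its columns are the same illustrative assumptions used in the schema sketch above:

import sqlite3

# Print every captured commit, oldest first. Table and column names
# are assumptions; check gxcrawler/database.py for the real schema.
connection = sqlite3.connect('database.db')
for row in connection.execute(
        'SELECT commit_date, author, comment FROM commits ORDER BY commit_date'):
    print(row)
connection.close()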
