Python client library for scraping the GmodStore Job Market
gmodstore-py is a Python SDK for scraping job listings from the GmodStore Job Market. Since GmodStore doesn't provide a public API, this library uses Selenium WebDriver to extract job data from the website.
- 🔍 Job Scraping - Fetch job listings from GmodStore Job Market
- 📊 Pagination Support - Automatically fetch jobs from multiple pages
- 🎯 Job Details - Get complete information about specific jobs
- 🤖 Headless Browser - Uses headless Chrome for efficient scraping
- 📝 Type Hints - Full typing support for better IDE integration
- ⚡ Performance Optimized - Disables images and CSS for faster scraping
- 🐳 Docker Ready - Works in containerized environments
```bash
pip install gmodstore-py
```

This package requires Chrome/Chromium and ChromeDriver:

Ubuntu/Debian:

```bash
sudo apt install chromium chromium-driver
```

macOS:

```bash
brew install chromium chromedriver
```

Docker:

```dockerfile
RUN apt-get update && apt-get install -y chromium chromium-driver
```
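If you want a quick sanity check that the browser and driver are actually visible to your environment, a short snippet like the one below can help. This is a generic sketch: the binary names `chromium` and `chromedriver` simply mirror the install commands above and may differ on your platform.

```python
import shutil

# Sanity check: confirm the browser and driver are on PATH.
# Adjust the names for your platform (e.g. "google-chrome",
# or a custom location exposed via CHROME_BINARY below).
for binary in ("chromium", "chromedriver"):
    path = shutil.which(binary)
    print(f"{binary}: {path or 'NOT FOUND'}")
```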
```python
from gmodstore import GmodStoreClient

# Initialize client
client = GmodStoreClient()

# Fetch job list (basic info)
jobs = client.get_jobs(limit=10)
for job in jobs:
    print(f"{job.title} - {job.url}")

# Get detailed information
job_details = client.get_job_details(jobs[0].id)
print(f"Budget: {job_details.budget}")
print(f"Description: {job_details.description}")
```
```python
from gmodstore import GmodStoreClient

client = GmodStoreClient()

# Get first 20 jobs
jobs = client.get_jobs(limit=20)

# With pagination (fetch from multiple pages)
jobs = client.get_jobs(limit=50, max_pages=3)

for job in jobs:
    print(f"[{job.id}] {job.title}")
    print(f"  URL: {job.url}")
```
```python
# Get detailed information about a specific job
job = client.get_job_details(job_id="12345")

if job:
    print(f"Title: {job.title}")
    print(f"Budget: {job.budget}")
    print(f"Category: {job.category}")
    print(f"Description: {job.description}")
    print(f"Applications: {job.applications}")
```
```python
client = GmodStoreClient(
    headless=True,           # Run browser in headless mode
    timeout=10,              # Page load timeout in seconds
    user_agent="MyBot/1.0"   # Custom user agent
)
```
```python
with GmodStoreClient() as client:
    jobs = client.get_jobs(limit=10)
    for job in jobs:
        details = client.get_job_details(job.id)
        print(f"{details.title}: {details.budget}")
# Driver automatically closed
```
`GmodStoreClient` — main client class for interacting with the GmodStore Job Market.

`get_jobs(limit, max_pages)` — fetch a list of jobs from the job market.

Parameters:

- `limit` (int): Maximum number of jobs to return
- `max_pages` (int): Maximum number of pages to scrape

Returns: list of Job objects with basic information

`get_job_details(job_id)` — fetch detailed information about a specific job.

Parameters:

- `job_id` (str): The job ID, with or without the 'gmodstore_' prefix

Returns: Job object with complete information, or None if the job could not be retrieved

The client can also fetch job details using a direct URL.

Parameters:

- `job_url` (str): Full URL to the job page

Returns: Job object with complete information, or None if the job could not be retrieved
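For illustration, the snippet below shows the two accepted job ID forms and the None return described above; the ID `12345` is just the placeholder used earlier in this README.

```python
from gmodstore import GmodStoreClient

client = GmodStoreClient()

# Both ID forms are accepted, per the parameter description above
job = client.get_job_details("gmodstore_12345")  # prefixed form (placeholder ID)
same_job = client.get_job_details("12345")       # same job, prefix omitted

# A None return means the job could not be retrieved
if job is None:
    print("Job not found or could not be scraped")
```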
Job data model.

```python
job.id            # str: Unique job ID (with 'gmodstore_' prefix)
job.title         # str: Job title
job.url           # str: Full URL to job posting
job.source        # str: Always 'gmodstore'
job.description   # Optional[str]: Job description
job.budget        # Optional[str]: Budget information
job.category      # Optional[str]: Job category
job.applications  # Optional[str]: Number of applications
job.created_at    # Optional[datetime]: Creation timestamp
```
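Because these fields are plain attributes, a Job can be flattened into a dictionary for logging or storage. The helper below is a hypothetical sketch built only on the attributes listed above; it is not part of the library.

```python
from typing import Any

def job_to_dict(job) -> dict[str, Any]:
    """Hypothetical helper: flatten a Job's documented attributes into a dict."""
    return {
        "id": job.id,
        "title": job.title,
        "url": job.url,
        "source": job.source,
        "description": job.description,
        "budget": job.budget,
        "category": job.category,
        "applications": job.applications,
        # Convert the optional datetime to ISO 8601 for JSON-friendliness
        "created_at": job.created_at.isoformat() if job.created_at else None,
    }
```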
```python
from gmodstore import (
    GmodStoreException,  # Base exception
    ScrapingError,       # Scraping failed
    DriverError,         # Chrome driver error
)
```
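A usage sketch for these exceptions: catch the specific errors before the base class. Exactly which operations raise which exception depends on the library; this only shows the general pattern.

```python
from gmodstore import GmodStoreClient, GmodStoreException, ScrapingError, DriverError

client = GmodStoreClient()
try:
    jobs = client.get_jobs(limit=10)
except DriverError as exc:
    print(f"Chrome/ChromeDriver problem: {exc}")
except ScrapingError as exc:
    print(f"Scraping failed: {exc}")
except GmodStoreException as exc:
    print(f"Unexpected gmodstore-py error: {exc}")
```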
```bash
# Chrome binary location (optional)
CHROME_BINARY=/usr/bin/chromium

# ChromeDriver location (optional)
CHROMEDRIVER_PATH=/usr/bin/chromedriver
```
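These variables can also be set from Python via os.environ. The assumption here (not verified against the library's internals) is that they are read when the client is constructed, so set them first.

```python
import os

# Assumed: set these before creating GmodStoreClient so the library picks them up.
os.environ["CHROME_BINARY"] = "/usr/bin/chromium"
os.environ["CHROMEDRIVER_PATH"] = "/usr/bin/chromedriver"
```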
The client automatically optimizes performance by:

- ✅ Disabling images
- ✅ Disabling CSS
- ✅ Using the `eager` page load strategy
- ✅ Minimizing wait times
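For context, the optimizations above correspond to standard Selenium Chrome options roughly like the following. This is a generic sketch of the technique, not the library's actual implementation.

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")   # headless mode
options.page_load_strategy = "eager"     # don't wait for every subresource
options.add_experimental_option(
    "prefs",
    {
        # 2 = block; skipping images and stylesheets speeds up page loads
        "profile.managed_default_content_settings.images": 2,
        "profile.managed_default_content_settings.stylesheets": 2,
    },
)

driver = webdriver.Chrome(options=options)
```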
```dockerfile
FROM python:3.12-slim

# Install Chrome and ChromeDriver
RUN apt-get update && apt-get install -y \
    chromium \
    chromium-driver \
    && rm -rf /var/lib/apt/lists/*

# Install package
RUN pip install gmodstore-py

# Your application
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]
```
```bash
# Run tests
pytest

# Run with coverage
pytest --cov=gmodstore

# Run specific test
pytest tests/test_client.py
```

Be respectful of GmodStore's servers:
- Add delays between requests
- Don't scrape too frequently
- Use reasonable page limits
```python
import time

from gmodstore import GmodStoreClient

client = GmodStoreClient()
jobs = client.get_jobs(limit=20)

for job in jobs:
    details = client.get_job_details(job.id)
    time.sleep(1)  # Wait 1 second between requests
```

- ⚖️ Web scraping may violate GmodStore's Terms of Service
- 🤝 Use responsibly and ethically
- 💼 Consider contacting GmodStore for official API access
- 🚫 Do not use for commercial purposes without permission
See the examples/ directory for more usage examples:
- basic_scraping.py - Simple job fetching
- detailed_scraping.py - Get full job details
- batch_scraping.py - Fetch multiple pages
- monitoring.py - Job monitoring script (a minimal sketch of the idea follows below)
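As a rough idea of what such a monitoring script can look like, here is a minimal sketch built only on the documented `get_jobs` API and the rate-limiting advice above. The polling interval is an assumed value, and the actual monitoring.py in the repository may work differently.

```python
import time

from gmodstore import GmodStoreClient

POLL_INTERVAL = 300  # seconds between polls; assumed value, tune to taste

seen_ids: set[str] = set()

with GmodStoreClient() as client:
    while True:
        # Print any job IDs we have not seen on previous polls
        for job in client.get_jobs(limit=20):
            if job.id not in seen_ids:
                seen_ids.add(job.id)
                print(f"New job: [{job.id}] {job.title} - {job.url}")
        time.sleep(POLL_INTERVAL)
```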
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Sycatle
- GitHub: @Sycatle
This project is licensed under the MIT License - see the LICENSE file for details.
This is an unofficial scraper and is not affiliated with or endorsed by GmodStore. Use at your own risk and in compliance with GmodStore's Terms of Service.
- GmodStore Job Market
- Documentation
- PyPI Package (Coming soon)
- Issue Tracker
Made with ❤️ for the Garry's Mod community