This project is a full-stack e-commerce web application, a clone of the Nobero website, built using React for the frontend and Django with Django Rest Framework for the backend. It also includes a Scrapy spider to scrape product data from the original Nobero website.
- Product Listing: Browse a wide range of products categorized for easy navigation.
- Product Details: View detailed information about each product, including price, description, and available SKUs (colors and sizes).
- Category-based Filtering: Filter products based on categories clicked on the homepage.
- Responsive Design: A user-friendly interface that adapts to various screen sizes.
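The category-based filtering described above can be pictured with a small, framework-free sketch. In the real app the filtering may happen server-side via a query parameter, and the `category` field name is an assumption about the product schema:

```python
def filter_by_category(products, category):
    """Return only the products in the given category (case-insensitive).

    A minimal sketch of the homepage category filter; the `category`
    field name is an assumption about the scraped product schema.
    """
    wanted = category.strip().lower()
    return [p for p in products if p.get("category", "").lower() == wanted]


# Hypothetical sample data for illustration.
catalog = [
    {"title": "Classic Tee", "category": "T-Shirts"},
    {"title": "Track Pants", "category": "Joggers"},
]
```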
Frontend:
- React
- React Router DOM
- Axios
- Tailwind CSS
Backend:
- Django
- Django Rest Framework
- SQLite3 (for development)
Scraping:
- Scrapy
Before you begin, ensure you have the following installed:
- Python 3.8+
- Node.js and npm (or yarn)
- pip (Python package installer)
- Navigate to the backend directory:
  ```bash
  cd nobero_api
  ```
- Create and activate a virtual environment:
  ```bash
  python -m venv venv
  # On Windows
  .\venv\Scripts\activate
  # On macOS/Linux
  source venv/bin/activate
  ```
- Apply migrations:
  ```bash
  python manage.py makemigrations products
  python manage.py migrate
  ```
- Load initial data (optional, but recommended after scraping): if you have scraped data into `data/products.json`, you can load it into the database:
  ```bash
  python manage.py load_products data/products.json
  ```
  (Note: `load_products` is a custom management command; see `nobero_api/products/management/commands/load_products.py`.)
- Run the development server:
  ```bash
  python manage.py runserver
  ```
  The API will be running at `http://127.0.0.1:8000/`.
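The `load_products` command referenced above is project-specific (see `nobero_api/products/management/commands/load_products.py`). A rough, framework-free sketch of the JSON-normalization step such a command might perform is shown below; the field names (`title`, `price`, `skus`) are assumptions, and the real command would save rows through the Django ORM rather than return a list:

```python
import json


def load_products(path):
    """Read scraped product records from a JSON file and normalize them.

    Simplified sketch of what a load_products management command might
    do before persisting rows; field names are assumptions about the
    scraped schema, not the project's actual models.
    """
    with open(path, encoding="utf-8") as fh:
        records = json.load(fh)

    products = []
    for rec in records:
        products.append({
            "title": rec.get("title", "").strip(),
            "price": int(rec.get("price", 0)),
            # e.g. [{"color": "Black", "size": "M"}]
            "skus": rec.get("skus", []),
        })
    return products
```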
- Navigate to the frontend directory:
  ```bash
  cd frontend
  ```
- Install dependencies:
  ```bash
  npm install
  # or
  yarn install
  ```
- Start the development server:
  ```bash
  npm start
  # or
  yarn start
  ```
  The React app will be running at `http://localhost:3000/` (or another available port).
- Navigate to the scraper directory:
  ```bash
  cd nobero_scraper
  ```
- Install Scrapy:
  ```bash
  pip install scrapy
  ```
- Run the spider to scrape data:
  ```bash
  scrapy crawl nobero_spider -o ../nobero_api/data/products.json
  ```
  This command runs `nobero_spider` and saves the scraped data to `products.json` within the `nobero_api/data/` directory.
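Scraped price strings usually need cleaning before they can be stored as numbers. Below is a small helper of the kind a spider or its item pipeline might use; the input formats shown are assumptions about how the site renders prices, not confirmed markup:

```python
import re


def clean_price(raw):
    """Extract an integer rupee amount from a scraped price string.

    Handles assumed formats like "Rs. 499" or "1,499"; returns None
    when no digits are present.
    """
    digits = re.sub(r"[^\d]", "", raw or "")
    return int(digits) if digits else None
```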
- Start the Django backend: Follow the "Backend Setup" steps and ensure the Django server is running.
- Start the React frontend: Follow the "Frontend Setup" steps and ensure the React development server is running.
- Access the application: Open your web browser and navigate to `http://localhost:3000/`.
- Browse categories: Click the images on the homepage to filter products by category.
- View product details: Click on a product to see its detailed information.
Report bugs and issues or propose improvements through our GitHub repository.
- Star the repository.
- Fork the repository and create a new branch for any update, change, or issue you are working on.
- Make your changes.
- Commit your changes.
- Create a pull request, which will be reviewed; suggestions may be added to improve it.
- Add screenshots and updated website links to help us understand what the change is about.
- Check CONTRIBUTING.md for detailed steps.
Enhance this project with your contributions! Whether you have innovative suggestions, spot bugs, or wish to introduce new features, contributions of any kind from anyone are always welcome.
Your insights are invaluable to us. Reach out to the team with any inquiries, feedback, or concerns.
This project is open-source and available under the MIT License.
Developed by Eccentric Explorer
Feel free to reach out with any questions or feedback!

