This project is a React Native fashion application designed to help users upload outfit images, organize wardrobe pieces, and receive personalized outfit recommendations. The app combines image detection, data storage, and machine learning to create a unique user experience.
## Table of Contents

- Features
- Tech Stack
- Setup
- Architecture
- Image Processing
- Data Storage
- Background Removal
- Outfit Recommendations
- Profile Navigation
- Future Development
- License
## Features

- Outfit Detection: Users upload an image of their outfit, and the app automatically detects, categorizes, and tags individual clothing items through image processing and deep scraping. The primary objective is to extract rich JSON data with detailed product information for everyday clothing items. Tagging leverages OpenAI-based processing and scraping to assign attributes such as color, material, and brand, which are stored in a comprehensive metadata structure.
- Background Removal: To enhance item presentation, especially in the Closet view, product images retrieved from Google are processed to remove backgrounds, isolating the items for a cleaner look.
- Wardrobe and Closet Organization: Users can store and organize wardrobe items, creating a virtual closet. The Closet page allows users to experiment with outfit combinations, using saved and liked items to curate looks from existing wardrobe pieces or items saved to their wish list.
- Nested Navigation and Social Features: With nested navigation, users can explore other profiles, view outfits, upload their looks, and engage with community content. The Profile page catalogs outfits, individual items, liked outfits, and liked items, offering a detailed view of each user’s style.
- Outfit Recommendation Engine: Powered by a custom-built recommendation API, the app suggests outfit combinations based on learned compatibility scores between items. This API, which uses a Siamese neural network for compatibility learning, makes recommendations by comparing items’ deep-tag attributes and user preferences. (The API has its own documentation repository for setup and usage, detailed below.)
### Outfit Recommendation API

The Outfit Recommendation API provides the intelligence behind the personalized outfit suggestions. It is a machine-learning service that analyzes item compatibility and user preferences to deliver tailored outfit recommendations. Below is an overview of its structure and key components:
- Data Preprocessing: Loads and processes JSON and CSV data, encoding categorical attributes for compatibility learning.
- Siamese Neural Network: Uses a neural network architecture to assess compatibility between items based on user interactions.
- API Service: A Flask API allows external access to the recommendation engine, providing outfit suggestions based on user input.
To retrieve outfit recommendations, users submit item identifiers, and the API returns items with high compatibility scores based on trained embeddings and similarity analysis.
```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"user_id": "u1", "item_ids": ["i1", "i2"]}' \
  http://localhost:5000/recommend
```
For full setup and usage instructions, please refer to the separate README.md in the recommendation API repository.
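On the app side, the same request can be issued with a small client helper. This is an illustrative sketch: the endpoint path and payload shape follow the curl example above, but the helper names and error handling are hypothetical, not part of the API's documented client.

```javascript
// Hypothetical client helper for the recommendation API.
// The endpoint and payload shape mirror the curl example above.
const RECOMMEND_URL = 'http://localhost:5000/recommend';

// Build the request object separately so it can be inspected or tested
// without a live server.
function buildRecommendRequest(userId, itemIds) {
  return {
    url: RECOMMEND_URL,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ user_id: userId, item_ids: itemIds }),
    },
  };
}

// Send the request (assumes a fetch implementation, e.g. Node 18+ or React Native).
async function getRecommendations(userId, itemIds) {
  const { url, options } = buildRecommendRequest(userId, itemIds);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Recommendation API error: ${res.status}`);
  return res.json();
}
```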
## Tech Stack

- Frontend: React Native (Expo)
- Backend: Supabase (Database and Authentication), Node.js
- APIs: Cloudinary, Lykdat, Serpapi (Google Lens)
- Machine Learning: Custom-trained compatibility recommendation engine using TensorFlow or similar frameworks.
## Setup

Prerequisites:

- Node.js and npm installed.
- Expo CLI for running React Native.
- Accounts on Cloudinary, Lykdat, Serpapi, and Supabase with relevant API keys.
- Clone the Repository:

  ```bash
  git clone <repo-url>
  cd <repo-directory>
  ```

- Install Dependencies:

  ```bash
  npm install
  ```

- Set up Environment Variables: Create a `.env` file in the root directory and add your API keys for Cloudinary, Lykdat, Serpapi, and Supabase:

  ```
  CLOUDINARY_API_KEY=<your-cloudinary-api-key>
  LYKDAT_API_KEY=<your-lykdat-api-key>
  SERPAPI_API_KEY=<your-serpapi-api-key>
  SUPABASE_URL=<your-supabase-url>
  SUPABASE_API_KEY=<your-supabase-api-key>
  ```

- Start the Expo App:

  ```bash
  expo start
  ```
## Architecture

- React Native with Expo: The frontend UI is built using React Native, leveraging Expo for easy deployment and testing.
- Profile Navigation and Routing: Uses React Navigation to handle user profiles, allowing users to view and interact with other profiles.
- Image Processing Interface: Provides an intuitive way for users to upload, view, and manage their outfit images.
- Supabase: A PostgreSQL database handles user authentication and stores wardrobe data, including tags and metadata for each clothing item.
- Cloudinary for Media Storage: All uploaded images are stored in Cloudinary, optimizing image delivery and allowing easy retrieval.
## Image Processing

- Image Upload: Users upload an outfit image, which is sent to Cloudinary for initial storage.
- Item Detection: The image URL is passed to Lykdat’s item detection API, which analyzes the image, identifies individual clothing items, and assigns tags.
- Product Recommendations: The tags are then used to query Serpapi’s Google Lens API, retrieving links to similar products.
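The three-step pipeline above can be sketched as a single function. The API clients are passed in as parameters so the flow is visible without network access; the response shapes shown in comments are assumptions for illustration, not Lykdat's or Serpapi's actual schemas.

```javascript
// Sketch of the upload → detect → recommend pipeline. The three API calls
// are injected as functions; response shapes here are assumed, not the
// real Lykdat/Serpapi schemas.
async function processOutfitImage(localImage, { uploadToCloudinary, detectItems, findSimilarProducts }) {
  // 1. Upload the outfit image; Cloudinary returns a hosted URL.
  const imageUrl = await uploadToCloudinary(localImage);

  // 2. Detect individual clothing items and their tags from the hosted image.
  const detectedItems = await detectItems(imageUrl); // e.g. [{ category, tags }]

  // 3. For each item, query for similar products via Google Lens (Serpapi).
  return Promise.all(
    detectedItems.map(async (item) => ({
      ...item,
      imageUrl,
      similarProducts: await findSimilarProducts(item.tags),
    }))
  );
}
```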
Each detected item is tagged with attributes such as:
- Category: Type of clothing (e.g., shirt, pants, shoes)
- Brand: Recognized brand, if detectable
- Material Composition: Fabric type (e.g., cotton, polyester)
- Color and Pattern: Dominant colors and patterns
- Fit: Style and fit of the item (e.g., slim, loose)
The tags and metadata are stored in Supabase as JSON objects for each user’s wardrobe item, enabling future retrieval and outfit matching.
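A stored item's tag metadata might look like the following. The field names and the validation guard are illustrative assumptions; the app's actual schema is whatever the tagging pipeline emits.

```javascript
// Illustrative shape of the tags JSON stored per wardrobe item in Supabase.
// Field names are examples, not the app's exact schema.
const exampleItemTags = {
  category: 'shirt',
  brand: 'ExampleBrand', // null if no brand was detected
  material: ['cotton'],
  colors: ['navy', 'white'],
  pattern: 'striped',
  fit: 'slim',
};

// Hypothetical guard run before persisting tags, so malformed results from
// the detection step don't reach the database.
function isValidItemTags(tags) {
  return (
    typeof tags === 'object' && tags !== null &&
    typeof tags.category === 'string' &&
    Array.isArray(tags.colors)
  );
}
```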
## Data Storage

All data is organized and stored using Supabase with the following structure:
- Users Table:
  - `id`: Primary user identifier
  - `username`: Unique username for each user
  - `avatar_url`: URL to profile picture (stored in Cloudinary)
- Wardrobe Table:
  - `id`: Unique identifier for each wardrobe item
  - `user_id`: Foreign key linking to the user
  - `image_url`: Cloudinary URL of the wardrobe item image
  - `tags`: JSON object containing tags and metadata for each item
  - `created_at`: Timestamp of item addition
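In Supabase (PostgreSQL), the two tables above could be declared roughly as follows. This is a sketch derived from the field lists, not the project's actual migration; the column types are assumptions.

```sql
-- Sketch of the schema described above; column types are assumptions.
create table users (
  id uuid primary key,
  username text unique not null,
  avatar_url text
);

create table wardrobe (
  id uuid primary key,
  user_id uuid references users (id),
  image_url text not null,
  tags jsonb,                               -- tags and metadata per item
  created_at timestamptz default now()
);
```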
## Background Removal

A background removal API is used to create clean images of each item. The workflow is as follows:
- Retrieve Image URL from Cloudinary.
- Send Image to API for Processing: API removes the background and returns the background-less image.
- Display Processed Image: Displayed in the app with a transparent background for a polished look.
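Since the images already live in Cloudinary, one way to implement this step is Cloudinary's AI Background Removal add-on, which applies an `e_background_removal` effect directly in the delivery URL. The helper below inserts that effect into a standard delivery URL; treat the exact transformation string and URL shape as assumptions to verify against Cloudinary's documentation and your plan's add-ons.

```javascript
// Insert a background-removal transformation into a Cloudinary delivery URL.
// Assumes the standard .../image/upload/<rest> URL shape and the
// e_background_removal effect from Cloudinary's add-on (availability
// depends on your Cloudinary plan).
function withBackgroundRemoved(deliveryUrl) {
  const marker = '/image/upload/';
  const i = deliveryUrl.indexOf(marker);
  if (i === -1) throw new Error('Not a Cloudinary image delivery URL');
  return (
    deliveryUrl.slice(0, i + marker.length) +
    'e_background_removal/' +
    deliveryUrl.slice(i + marker.length)
  );
}
```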
## Outfit Recommendations

The recommendation engine uses compatibility learning to generate outfit suggestions:
- Tag-Based Learning: Based on deep tagging, the model learns compatibility patterns between various items in a user’s wardrobe.
- Compatibility Scoring: Items are matched based on scores calculated through trained machine learning models.
- Machine Learning Model (TensorFlow/PyTorch): The model is trained on JSON data of outfits, breaking down items by tags such as color, material, pattern, and fit to learn optimal combinations.
- Data Preprocessing: JSON data of tagged outfits is cleaned and used to train the model.
- Model Training: Using TensorFlow or a similar library, the model is trained to understand compatibility.
- Deployment in App: The model is deployed to the app, where it suggests combinations based on items in the user’s wardrobe.
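As a simplified illustration of compatibility scoring, items can be represented as embedding vectors and compared with cosine similarity; in the real engine, the trained Siamese network produces such embeddings from interaction data. The vectors below are made up for the example.

```javascript
// Toy compatibility scoring: cosine similarity between item embeddings.
// In the real engine these vectors come from the trained model; here they
// are hard-coded for illustration.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank candidate items against an anchor item by compatibility score.
function rankByCompatibility(anchor, candidates) {
  return candidates
    .map((c) => ({ id: c.id, score: cosineSimilarity(anchor.embedding, c.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```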
## Profile Navigation

The app includes profile navigation allowing users to view other users’ profiles and outfits:
- React Navigation: Handles routing to other user profiles when a username or avatar is selected.
- Profile Information: Displays the user’s public wardrobe, allowing for social engagement and inspiration.
## Future Development

- 3D Visualization: Incorporate 3D rendering and garment generation to allow users to preview outfits on a customizable avatar, enhancing the Closet screen experience.
- Wishlist Integration: Enable users to add items to a wishlist, allowing the recommendation engine to suggest combinations that blend wishlist items with wardrobe pieces.
- Marketplace and Item Tracking: Introduce a buying and selling platform where users can list items in their wardrobe, complete with trend tracking for each item to monitor popularity, wear frequency, and community engagement.
- API Optimization: Enhance the recommendation API to handle a broader range of tags and item combinations, including compatibility between multiple items for a seamless user experience.
## License

This project is licensed under the MIT License. See LICENSE for more information.