
parksandrecfan/bignet-phone


BIGNet - Phone

This repository contains the implementation of BIGNet for the phone case study. The car case study implementation can be found here.

Project summary

Identifying and codifying brand-related aesthetic features for product redesign is essential yet challenging, even for humans. This project demonstrates a deep learning, data-driven way to learn brand-related features automatically through SVG-based supervised learning, using the brand identification graph neural network (BIGNet), a hierarchical graph neural network.

Our approach to the phone case study can be summarized in this flow chart:

System requirements

This project runs on a MacBook Pro.

More hardware information can be found in Hardware Overview.txt.

Windows users may experience difficulties installing CairoSVG; solutions can be found here.

The required (Python) packages can be found in requirements.txt.

In addition, potrace has to be downloaded to create the dataset (step 1).

Instructions

The simplest process is to run the Jupyter notebooks in order from 0 to 7. However, we also provide all the staged results to save implementers time: downloading the GNN dataset lets you skip step 1, and downloading the trained model lets you skip step 2; both steps can be time-consuming to run. More details are described below.

Code structure

All utility functions are in the util directory. The individual notebooks call these utility functions and demonstrate each step in sections.

Dataset

The synthetic SVG dataset is generated by parameter interpolation. The parameters are measured manually and recorded in dim.xlsx.
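As a minimal sketch of what parameter interpolation looks like (the parameter names and values below are hypothetical; the real ones are measured in dim.xlsx):

```python
def interpolate_params(p0, p1, t):
    """Linearly interpolate between two measured parameter sets, 0 <= t <= 1."""
    return {k: (1 - t) * p0[k] + t * p1[k] for k in p0}

# Hypothetical measured endpoints for two phone designs of the same brand.
base = {"body_width": 70.0, "fillet_radius": 6.0}
variant = {"body_width": 76.0, "fillet_radius": 10.0}

# Sample 5 synthetic designs along the interpolation path between them.
synthetic = [interpolate_params(base, variant, i / 4) for i in range(5)]
```

Each interpolated parameter set then drives the rendering of one synthetic SVG phone.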

Running "0.phone parameters.ipynb" initializes the measured parameters from dim.xlsx. Alternatively, you can download iphone_par.pkl and samsung_par.pkl and place them in the current directory to skip this step.
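The .pkl files are ordinary Python pickles, so a round trip looks like this (the dictionary fields shown are hypothetical stand-ins for the measured values):

```python
import pickle

# Hypothetical measured parameters for one phone model; the real values are
# read from dim.xlsx in "0.phone parameters.ipynb".
iphone_par = {
    "body_width": 71.5,    # assumed field name
    "body_height": 146.7,  # assumed field name
    "fillet_radius": 7.0,  # assumed field name
}

# Save to the current directory, as the pipeline expects.
with open("iphone_par.pkl", "wb") as f:
    pickle.dump(iphone_par, f)

# Downstream notebooks can then reload the parameters without the spreadsheet.
with open("iphone_par.pkl", "rb") as f:
    loaded = pickle.load(f)
```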

Next, the synthesis code is in "1.create_dataset.ipynb". The generated results are placed in the pkl directory. The relationship between parameter names and phone features is shown in the following figure:

iPhone:

Samsung:

Instead of generating the synthetic SVG data with "1.create_dataset.ipynb", you can also download the SVG dataset directly from here.
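As a rough illustration of how a parameter set becomes an SVG, here is a toy renderer that draws only the phone body as a rounded rectangle (the real synthesis also draws the screen, lenses, buttons, and so on; the function and argument names are hypothetical):

```python
def phone_svg(width, height, fillet):
    """Render a minimal phone-body outline as an SVG string."""
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<rect width="{width}" height="{height}" rx="{fillet}" '
        f'fill="none" stroke="black"/>'
        f"</svg>"
    )

svg = phone_svg(71.5, 146.7, 7.0)
```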

The SVG data are later reprocessed in "1.create_dataset.ipynb" into a BIGNet-friendly format. You can also download the pickled format directly here. Note that to continue down the pipeline, the decompressed data must be placed in a "pkl" directory.
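The exact BIGNet format is defined in the utility code; as a rough sketch of the idea, the curves extracted from an SVG become graph nodes, and a pairwise distance matrix between them encodes spatial relationships (the anchor points below are hypothetical):

```python
import math

# Hypothetical curve anchor points extracted from one SVG. In the real
# pipeline, the utility code parses SVG paths into curve segments.
curves = [(0.0, 0.0), (10.0, 0.0), (10.0, 20.0), (0.0, 20.0)]

def distance_matrix(points):
    """Pairwise Euclidean distances between curve anchor points."""
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)]
            for i in range(n)]

D = distance_matrix(curves)
```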

Training

Training takes 8 formatted pickle files (train/test, data/curve label/brand label/distance matrix) and is performed in "2.training.ipynb". The trained model can also be downloaded here.
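The eight staged files are the cross product of {train, test} and {data, curve label, brand label, distance matrix}. A loader might look like the following sketch (the file-name pattern is hypothetical; check the notebook for the exact names, and this demo uses empty dummy pickles in a temporary directory):

```python
import os
import pickle
import tempfile

SPLITS = ["train", "test"]
PARTS = ["data", "curve_label", "brand_label", "dist_mat"]

def load_staged(pkl_dir):
    """Load the eight staged pickle files into a dict keyed by (split, part)."""
    staged = {}
    for split in SPLITS:
        for part in PARTS:
            with open(os.path.join(pkl_dir, f"{split}_{part}.pkl"), "rb") as f:
                staged[(split, part)] = pickle.load(f)
    return staged

# Demonstrate the round trip with dummy files in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    for split in SPLITS:
        for part in PARTS:
            with open(os.path.join(d, f"{split}_{part}.pkl"), "wb") as f:
                pickle.dump([], f)
    staged = load_staged(d)
```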

Evaluation

Dimension reduction with 2D/3D PCA/t-SNE is done in "3.dimension reduction.ipynb". The resulting latent-space plots should look like the following. Note that the 2D/3D latent vectors can also be downloaded here.
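The PCA half of this step can be sketched with an SVD on centered latent vectors (the 16-dimensional random latents below are placeholders for the real BIGNet embeddings):

```python
import numpy as np

def pca_project(latent, n_components=2):
    """Project latent vectors onto their top principal components via SVD."""
    X = latent - latent.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 16))  # stand-in for BIGNet latent vectors
coords_2d = pca_project(latent, 2)   # points to scatter-plot, colored by brand
```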

Leave-one-feature-out (LOFO) ablation studies are implemented in "4.ablation study.ipynb". You can find the brand-relevant (true) and irrelevant (false) features in the ablation folders "synthetic_ablation" and "ref_ablation". Below are some examples. A partial (1,000-sample) set of visualization results can also be downloaded here.
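The LOFO idea can be sketched as follows: remove one feature at a time, re-score, and treat a large score drop as evidence of brand relevance. Here `score` is a toy stand-in for the trained BIGNet, and the feature names are hypothetical:

```python
FEATURES = ["lens_x", "fillet_radius", "scr2pl2edge"]  # hypothetical names

def score(features):
    # Toy stand-in: pretend only lens_x and fillet_radius drive the brand score.
    return 0.5 * features.get("lens_x", 0) + 0.5 * features.get("fillet_radius", 0)

sample = {"lens_x": 1.0, "fillet_radius": 1.0, "scr2pl2edge": 1.0}
baseline = score(sample)

# Leave each feature out in turn and record the drop from the baseline score.
drops = {}
for name in FEATURES:
    ablated = {k: v for k, v in sample.items() if k != name}
    drops[name] = baseline - score(ablated)

# Features with a large drop are brand-relevant; near-zero drop means irrelevant.
relevant = [n for n, d in drops.items() if d > 1e-9]
```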

From the LOFO results, we can summarize the following brand-relevant features.

Parameter extrapolation study (Partial dependence plot)

We performed the following three feature-extrapolation experiments based on our observations from the table above:

* Apple's lens horizontal location -> "5.extrapolation iphone width.ipynb"

* Apple's width and fillet radius -> "6.extrapolation lens1p.ipynb"

* Samsung's gap from screen-frame -> "7.extrapolation samsung scr2pl2edge.ipynb"
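The common shape of these experiments is a partial-dependence sweep: vary one parameter over a grid (including values beyond the training range), hold the rest fixed, and record the model's brand score at each point. A sketch with a toy stand-in for the trained classifier and a hypothetical parameter name:

```python
def brand_score(params):
    # Toy stand-in for BIGNet: score peaks when scr2pl2edge is near 2.0.
    return 1.0 / (1.0 + (params["scr2pl2edge"] - 2.0) ** 2)

fixed = {"body_width": 73.0}        # all other parameters held constant
grid = [i * 0.5 for i in range(9)]  # 0.0 .. 4.0, extrapolating past training

# Record (parameter value, brand score) pairs for the dependence plot.
curve = []
for v in grid:
    params = dict(fixed, scr2pl2edge=v)
    curve.append((v, brand_score(params)))
```

Plotting `curve` shows how sensitive the brand prediction is to that single parameter.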

For any questions about the implementation, feel free to email Sean Chen at yuhsuan2@andrew.cmu.edu.
