Fast Style Transfer with Sparse Convolutions

This is part of the final project for Computer Vision at Georgia Tech. Link to Group Project Webpage

-- Project Status: Completed; more features may be added in the future

Project Intro/Objective

The purpose of this project was to reduce the number of model parameters in order to lower the space and runtime costs of computing style transfer.

Main Frameworks Used

  • TensorFlow
  • NumPy

Project Description

The general idea behind using model compression in this context is to reduce the number of parameters in order to save space and speed up inference. There are two main approaches to this task: a small-dense network or a large-sparse network. Based on the results in [2], which show that a large-sparse network achieves higher classification accuracy than a small-dense network with the same number of parameters, we chose to introduce sparsity into the network proposed in [1]. To achieve sparsity, we used TensorFlow's built-in pruning library, which implements the threshold (magnitude) pruning presented in [2]. Unfortunately, TensorFlow does not include a sparse convolution operator, so we used the library in [3], which implements the techniques presented in [4].
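The core of threshold (magnitude) pruning is simple: zero out the smallest-magnitude weights until a target fraction of the tensor is zero. Below is a minimal NumPy sketch of that idea, not the TensorFlow pruning library's actual implementation; the function name and shapes are illustrative only.

```python
import numpy as np

def threshold_prune(weights, target_sparsity):
    """Zero out the smallest-magnitude entries so that roughly
    `target_sparsity` fraction of the weights become zero."""
    flat = np.abs(weights).ravel()
    k = int(target_sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # The threshold is the k-th smallest absolute value;
    # every weight at or below it is pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Illustrative conv kernel of shape (H, W, C_in, C_out)
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3, 16, 32))
pruned = threshold_prune(w, target_sparsity=0.9)
sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size
print(f"achieved sparsity: {sparsity:.2f}")
```

In the real pipeline the sparsity target is typically ramped up gradually over training steps so the network can recover from each pruning round, rather than being applied in one shot as above.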

Experiments and Results

The full report on our experiments and results can be found here: Project Report

References
