diff --git a/README.md b/README.md
index 6f1bf86..8999ea8 100644
--- a/README.md
+++ b/README.md
@@ -27,18 +27,19 @@ You can use command **pip install -r requirements.txt** to install all packages
 
 ## How to use
 1. install [**python3.x**](https://www.python.org/) or [**Anaconda**](https://repo.continuum.io/archive/) and add the path into the environment variable (recommand python3.5).
-2. **GPU** run environment [**configure**](https://blog.csdn.net/yhaolpz/article/details/71375762?locationNum=14&fps=1) if train the network (optional).
+2. [**configure**](https://blog.csdn.net/yhaolpz/article/details/71375762?locationNum=14&fps=1) the **GPU** running environment for network training (**optional**).
 3. install all dependent packages mentioned above (open **[setup](https://github.com/Charleswyt/tf_audio_steganalysis/tree/master/setup)/requirements.txt** and input "**pip install -r requirements**" into your cmd window).
-4. **run** the code as the [**example**](https://github.com/Charleswyt/tf_audio_steganalysis/tree/master/src/config_file) as shows
-5. use [**tensorboard**](http://wiki.jikexueyuan.com/project/tensorflow-zh/how_tos/graph_viz.html) to visualize the train process such as the **accuracy** and **loss curve** of train and validation. The command is "**tensorboard --logdir=/path/to/log-directory**".
+4. **run** the code as shown in the [**example**](https://github.com/Charleswyt/tf_audio_steganalysis/tree/master/src/config_file).
+5. use [**tensorboard**](http://wiki.jikexueyuan.com/project/tensorflow-zh/how_tos/graph_viz.html) to visualize the training process, such as the **accuracy** and **loss** curves of training and validation. The command is "**tensorboard --logdir=/path/of/log**".
 6. If you want to design your own network based on this project, there is an [**instruction**](https://github.com/Charleswyt/tf_audio_steganalysis/tree/master/src/networks/network_design.md) for you.
-7. Our sourcecode is coded with [**Pycharm**](https://github.com/Charleswyt/tf_audio_steganalysis/blob/master/setup/pycharm.md), and the hard wrap is setted as **180**.
+7. Our source code is written with [**PyCharm**](https://github.com/Charleswyt/tf_audio_steganalysis/blob/master/setup/pycharm.md), and the **hard wrap** is set to **180**. If your hard wrap setting is less than 180, PyCharm will show warnings.
 
 ## File description
 ID | File | Function
 :- | :- | :-
-1 | src | source code
-2 | paper | the PPT and brief introduction of our recent work
-3 | setup | a **requirements.txt** in this folder, which is used to install all packages in this system
-4 | jupyter | a folder for jupyter debug
-5 | data_processing | tools which are used for QMDCT coefficients extraction and dataset build
+1 | audio_samples | some audio samples for demonstration
+2 | data_processing | tools for QMDCT coefficient extraction and dataset building
+3 | jupyter | a folder for Jupyter debugging
+4 | paper | the paper, PPT, dataset and a brief introduction of our recent work
+5 | setup | a **requirements.txt** in this folder, which is used to install all packages in this system
+6 | src | source code
diff --git a/audio_samples/readme.md b/audio_samples/readme.md
new file mode 100644
index 0000000..def34cf
--- /dev/null
+++ b/audio_samples/readme.md
@@ -0,0 +1,10 @@
+## Audio samples
+
+File | Function
+:- | :-
+cover_128.mp3 | **cover** mp3 audio file with a bitrate of **128 kbps**
+cover_320.mp3 | **cover** mp3 audio file with a bitrate of **320 kbps**
+HCM_B_128_ER_10.mp3 | **stego** mp3 audio file with the stego algorithm **HCM**, a bitrate of **128 kbps** and an RER of **1.0**
+HCM_B_320_ER_10.mp3 | **stego** mp3 audio file with the stego algorithm **HCM**, a bitrate of **320 kbps** and an RER of **1.0**
+EECS_B_128_W_2_H_7_ER_10.mp3 | **stego** mp3 audio file with the stego algorithm **EECS**, a bitrate of **128 kbps**, a width of **2**, a height of **7** and an RER of **1.0**
+EECS_B_320_W_2_H_7_ER_10.mp3 | **stego** mp3 audio file with the stego algorithm **EECS**, a bitrate of **320 kbps**, a width of **2**, a height of **7** and an RER of **1.0**
\ No newline at end of file
diff --git a/readme.md b/readme.md
index 6f1bf86..8999ea8 100644
--- a/readme.md
+++ b/readme.md
@@ -27,18 +27,19 @@ You can use command **pip install -r requirements.txt** to install all packages
 
 ## How to use
 1. install [**python3.x**](https://www.python.org/) or [**Anaconda**](https://repo.continuum.io/archive/) and add the path into the environment variable (recommand python3.5).
-2. **GPU** run environment [**configure**](https://blog.csdn.net/yhaolpz/article/details/71375762?locationNum=14&fps=1) if train the network (optional).
+2. [**configure**](https://blog.csdn.net/yhaolpz/article/details/71375762?locationNum=14&fps=1) the **GPU** running environment for network training (**optional**).
 3. install all dependent packages mentioned above (open **[setup](https://github.com/Charleswyt/tf_audio_steganalysis/tree/master/setup)/requirements.txt** and input "**pip install -r requirements**" into your cmd window).
-4. **run** the code as the [**example**](https://github.com/Charleswyt/tf_audio_steganalysis/tree/master/src/config_file) as shows
-5. use [**tensorboard**](http://wiki.jikexueyuan.com/project/tensorflow-zh/how_tos/graph_viz.html) to visualize the train process such as the **accuracy** and **loss curve** of train and validation. The command is "**tensorboard --logdir=/path/to/log-directory**".
+4. **run** the code as shown in the [**example**](https://github.com/Charleswyt/tf_audio_steganalysis/tree/master/src/config_file).
+5. use [**tensorboard**](http://wiki.jikexueyuan.com/project/tensorflow-zh/how_tos/graph_viz.html) to visualize the training process, such as the **accuracy** and **loss** curves of training and validation. The command is "**tensorboard --logdir=/path/of/log**".
 6. If you want to design your own network based on this project, there is an [**instruction**](https://github.com/Charleswyt/tf_audio_steganalysis/tree/master/src/networks/network_design.md) for you.
-7. Our sourcecode is coded with [**Pycharm**](https://github.com/Charleswyt/tf_audio_steganalysis/blob/master/setup/pycharm.md), and the hard wrap is setted as **180**.
+7. Our source code is written with [**PyCharm**](https://github.com/Charleswyt/tf_audio_steganalysis/blob/master/setup/pycharm.md), and the **hard wrap** is set to **180**. If your hard wrap setting is less than 180, PyCharm will show warnings.
 
 ## File description
 ID | File | Function
 :- | :- | :-
-1 | src | source code
-2 | paper | the PPT and brief introduction of our recent work
-3 | setup | a **requirements.txt** in this folder, which is used to install all packages in this system
-4 | jupyter | a folder for jupyter debug
-5 | data_processing | tools which are used for QMDCT coefficients extraction and dataset build
+1 | audio_samples | some audio samples for demonstration
+2 | data_processing | tools for QMDCT coefficient extraction and dataset building
+3 | jupyter | a folder for Jupyter debugging
+4 | paper | the paper, PPT, dataset and a brief introduction of our recent work
+5 | setup | a **requirements.txt** in this folder, which is used to install all packages in this system
+6 | src | source code
diff --git a/src/readme.md b/src/readme.md
index 2d2f019..ee4cf1a 100644
--- a/src/readme.md
+++ b/src/readme.md
@@ -1,18 +1,20 @@
 ## File description
 ID | File | Function
 :- | :- | :-
-01 | audio_preprocess.py | include some pre-process methods for **audio**
-02 | text_preprocess.py | include some pre-process methods for **text**
-03 | image_preprocess.py | include some pre-process methods for **image**
-04 | distribution.py | distribution calculation
-05 | config.py | all configuration and parameters setting for the system running
-06 | filters.py | some **filters** used for pre-processing such as kv kernel or other **rich model**
-07 | **main.py** | the main program
-08 | manager.py | **GPU** management (free GPU selection **automatically**)
-09 | dataset.py | tfrecord read and write
-10 | layer.py | basic unit in CNN such as **conv layer**, **pooling layer**, **BN layer** and so on
-11 | utils.py | some useful tools such as **minibatch**, **get_data_batch**,
-12 | run.py | the **train** and **test** of the network **get_weights**, **get_biases** and so on
-13 | dataset.py | some functions of tfrecord read and write
-14 | networks | all designed networks are contained in this folder, audio and image steganalysis, classification
-15 | config_file | three files, config_train, config_test and config_steganalysis, in this folder are uesd to send the paramters into the network, like the usage in Caffe
+01 | config_file | three files, config_train, config_test and config_steganalysis, in this folder are used to send the parameters into the network, similar to the usage in Caffe
+02 | dct_kernels | DCT kernels for high-pass filtering (kernel sizes: 2, 3, 4, 5, 6, 7, 8)
+03 | matlab_scripts | MATLAB scripts for JPEG image reading and writing
+04 | networks | all designed networks (audio and image steganalysis, classification) are contained in this folder
+05 | audio_preprocess.py | some pre-processing methods for **audio**
+06 | config.py | all configuration and parameter settings for running the system
+07 | dataset.py | some functions for tfrecord reading and writing
+08 | distribution.py | distribution calculation
+09 | file_preprocess.py | some pre-processing methods for **files**
+10 | filters.py | some **filters** used for pre-processing, such as the KV kernel or other **rich model** kernels
+11 | image_preprocess.py | some pre-processing methods for **images**
+12 | layer.py | basic units in CNN such as the **conv layer**, **pooling layer**, **BN layer** and so on
+13 | **main.py** | the main program
+14 | manager.py | **GPU** management (**automatic** selection of a free GPU)
+15 | run.py | the **train** and **test** of the network
+16 | text_preprocess.py | some pre-processing methods for **text**
+17 | utils.py | some useful tools such as **minibatch**, **get_data_batch**, **get_weights**, **get_biases** and so on
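
For readers new to the building blocks listed above: `layer.py` is described as providing the basic CNN units (conv, pooling, BN), and step 5 of the usage guide inspects training curves with TensorBoard. The sketch below is a minimal, hypothetical TensorFlow 1.x example of such a unit plus a summary writer; it is not the project's actual `layer.py` API, and the function name, input shape and `./logs` path are illustrative assumptions only.

```python
import tensorflow as tf  # TensorFlow 1.x style API, matching the era of this project


def conv_bn_pool(inputs, filters, is_training, name):
    """Hypothetical basic unit: convolution -> batch norm -> ReLU -> max pooling."""
    with tf.variable_scope(name):
        x = tf.layers.conv2d(inputs, filters=filters, kernel_size=3, padding="same")
        x = tf.layers.batch_normalization(x, training=is_training)
        x = tf.nn.relu(x)
        return tf.layers.max_pooling2d(x, pool_size=2, strides=2)


# Assumed input: a batch of single-channel QMDCT-like feature maps (shape is illustrative).
inputs = tf.placeholder(tf.float32, [None, 200, 576, 1], name="qmdct")
is_training = tf.placeholder(tf.bool, name="is_training")
features = conv_bn_pool(inputs, filters=16, is_training=is_training, name="group1")

# Dump the graph for TensorBoard; during training, scalar summaries (accuracy, loss)
# would be written the same way.  View with: tensorboard --logdir=./logs
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("./logs", sess.graph)
    writer.close()
```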