


StudyToolkitVid

A toolkit for creating studies to evaluate the perceived quality of video material!
Explore the docs »

View Demo (TODO) · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgments

About The Project

Modular concept of the researchers’ toolkit StudyToolkitVid: from the formulated research question, to the media editing to acquire test material, to executing an online study, and finally analysing the resulting data using statistical methods.


There are many ways to edit media and investigate human perception, with great implementations available online; however, we didn't find one that really suited our needs while investigating the importance of lip synchrony, so we created this toolkit. We want it to be so easy and intuitive to use that it enables less tech-savvy people to explore their scientific itches, while keeping it as modular as possible to allow adjustments and partial use.

Our reasoning:

  • Everyone loves freely available and easily accessible software.
  • Your time should be focused on creating something amazing while exploring different research questions, not on rudimentary issues: trying to solve problems already solved by others.
  • You also shouldn't be doing the same tasks over and over, especially by hand:
    • Creating data sets for investigating perceived quality of video material
    • Setting up and executing a user study based on these (or other) data sets
    • Running and reporting a statistical analysis following the results from a study

Of course, no one toolkit will serve all projects since your needs may be different. While our focus is especially on the needs of investigating lip synchrony in video material, we will also try to add more diverse functionalities in the future. You may also suggest changes by forking this repo and creating a pull request or opening an issue. We appreciate all contributions and want to thank everyone who helps out in any way possible!

(back to top)

Built With

List of major frameworks/libraries used to bootstrap this project:

  • PyQt
    • "PyQt is one of the most popular Python bindings for the Qt cross-platform C++ framework. PyQt was developed by Riverbank Computing Limited."
    • With the use of PyQt we were able to create a pipeline that bridges the following utilities in an easy-to-use and neatly packaged toolkit.
  • WebMAUS
    • "This web service inputs a media file with a speech signal and a text file with a corresponding orthographic transcript, and computes a word segmentation and a phonetic segmentation and labeling."
    • The output of WebMAUS is used in Part 1 - Media Editing of the StudyToolkitVid pipeline.
  • beaqlejs
    • "BeaqleJS (browser based evaluation of audio quality and comparative listening environment) provides a framework to create browser based listening tests and is purely based on open web standards like HTML5 and Javascript."
    • A modified version, to also enable use of video files, is used to create the studies in Part 2 - Study Setup.
  • R
    • "R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS."
    • R is the foundation for the scripts used in Part 3 - Statistical Analysis of this toolkit.

(back to top)

Getting Started

To get a local copy up and running follow these simple steps.

Prerequisites

  • You need a Python installation (tested with 3.11.5 on macOS and 3.10.9 on Ubuntu).
  • You need to use a terminal (at least once ;) ). For more information about how to work with a terminal, refer to Microsoft's guide for Windows, Apple's guide for macOS, and Ubuntu's guide for Linux systems.

Installation

Create a directory in which the toolkit and all your projects will be saved. For this description we will call it "MyAwesomeDirectory". Then navigate into this directory and open a terminal from within it.

  1. Clone this repository to get a local copy of the toolkit on your system by executing the following line inside your terminal:
    git clone git@github.com:christianschuler8989/StudyToolkitVid.git
  2. (Optional, but recommended) Create a virtual environment for the toolkit:
    1. (If not yet installed) Install virtualenv:
    python3 -m pip install virtualenv
    2. Create an environment named "venvToolkit":
    python3 -m venv venvToolkit
    3. Activate the virtual environment every time before starting the toolkit:
    source venvToolkit/bin/activate
  3. Navigate into the cloned toolkit directory named "StudyToolkitVid":
    cd StudyToolkitVid
    Assuming you cloned the repository into "/Home/Download/MyAwesomeDirectory/", the full path would be
    cd /Home/Download/MyAwesomeDirectory/StudyToolkitVid
  4. Install the requirements:
    python3 -m pip install -r requirements.txt
  5. Start the toolkit (continue in the Usage section below):
    python3 main.py --run

Data Structure

The structure of the pipeline's directories. Arrows indicate file movement between the different parts of the toolkit.

Aligning with best practices in science, files placed in an "input" directory are only read by the toolkit, never modified directly. Any modification of data takes place inside the "temp" directories, which can then automatically be cleaned up to free space, since all results are to be found in the "output" directories. The output of one step can serve as input for the next, or be used in other ways, separate from the toolkit.
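The input/temp/output convention above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the toolkit's actual code: the part names in `PARTS` are assumptions based on the three parts described in this README.

```python
import shutil
from pathlib import Path

# Assumed names for the three parts of the pipeline (hypothetical).
PARTS = ["1_media_editing", "2_study_setup", "3_statistical_analysis"]

def create_project_layout(root: Path) -> None:
    """Create the directory skeleton for one toolkit project:
    "input" holds read-only source files, "temp" holds intermediate
    files, and "output" collects the results of each part."""
    for part in PARTS:
        for sub in ("input", "temp", "output"):
            (root / part / sub).mkdir(parents=True, exist_ok=True)

def clean_temp(root: Path) -> None:
    """Delete and recreate the "temp" directories to free space;
    "input" and "output" stay untouched."""
    for part in PARTS:
        tmp = root / part / "temp"
        if tmp.exists():
            shutil.rmtree(tmp)
            tmp.mkdir()

create_project_layout(Path("MyAwesomeDirectory/example_project"))
```

Because "temp" only ever holds intermediate data, the output of one part can be handed to the next simply by copying (or moving) files from one part's "output" into the next part's "input".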

(back to top)

Usage

File naming as part of the pipeline for an automated workflow.

In any project that works with and modifies data in any shape or form, a decision has to be made regarding the naming of files. There is a trade-off between human readability and preventing impractical clutter. For example: if a media file is modified in numerous different ways and we want the file name to record all applied modifications, we have to be aware of the maximum length a file name may have (commonly 255 bytes per name on most file systems) before encountering errors.
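One way to handle that trade-off is to keep the human-readable name while it fits and fall back to a short hash of the modification list once it would exceed the limit. The sketch below is a hypothetical illustration of this idea, not the toolkit's actual naming scheme; the function name and the example modification labels are invented for demonstration.

```python
import hashlib

MAX_NAME_BYTES = 255  # common per-filename limit on ext4, APFS, and NTFS

def build_name(base: str, modifications: list[str], ext: str = ".mp4") -> str:
    """Encode the applied modifications in the file name; fall back to a
    short hash when the readable form would exceed the file-system limit."""
    name = base + "_" + "_".join(modifications) + ext
    if len(name.encode("utf-8")) <= MAX_NAME_BYTES:
        return name
    digest = hashlib.sha1("_".join(modifications).encode("utf-8")).hexdigest()[:12]
    return f"{base}_{digest}{ext}"

print(build_name("clip01", ["shift120ms", "phoneme-a"]))
# → clip01_shift120ms_phoneme-a.mp4
```

When the hash fallback is used, the full modification list can be stored in a sidecar file (e.g. a CSV mapping hashes to modifications) so no information is lost.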

Part 1 - Media Editing

Creating data sets for investigating perceived quality of video material. [TODO]

Part 2 - Study Setup

Setting up and executing a user study based on these (or other) data sets. [TODO]

Part 3 - Statistical Analysis

Running and reporting a statistical analysis following the results from a study. [TODO]

For more information, please refer to the Documentation.

(back to top)

Roadmap

  • Finally "git-it-up"
  • Media Editing
    • Core Functionalities
    • Smooth Lip-Asynchrony Introduction
    • Automated Lip Recognition
    • Testing
    • Beginner-Friendly UI
  • Add Study Creation
    • Core Functionalities
    • More User-Customization
    • Testing
    • Beginner-Friendly UI
  • Add Statistical Analysis
    • Core Functionalities
    • More User-Customization
    • Result Exploration
    • Automated Visualizations
    • Testing
  • Automated Installation/Setup
  • General Testing
    • Core functionalities
    • Advanced functionalities
    • Different Operating Systems
      • Ubuntu 22.04
      • Windows 10
      • macOS Monterey 12.3
  • Quality of Life
    • Example Media Files
    • Tool-Tip Pop-Ups
    • Documentation
    • User Guide
  • Multi-Language Support
    • English
    • German
    • Chinese
    • Spanish
  • Expand File Format Support
    • .mp4
    • Other Video Formats
    • Text (Literature & Translation Studies)
    • Image (Art & Computer Vision Studies)

Go to the open issues section to propose new features or simply report encountered bugs.

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the GNU License. See LICENSE.txt for more information.

(back to top)

Contact

Christian Schuler - GitHub Page - christianschuler8989(4T)gmail.com

Dominik Hauser - do_340(4T)hotmail.de

Anran Wang - @AnranW - echowanng1996(thesymbolforemail)hotmail.com

(back to top)

Acknowledgments

A list of helpful resources we would like to give credit to:

(back to top)
