

Fabien Spindler edited this page Jan 8, 2019 · 7 revisions

Project page for ViSP Google Summer of Code 2018 (GSoC 2018)

General Information

  • GSoC 2018 site
  • Timeline and important dates:
    • January 23 Mentoring organization application deadline
    • February 12 List of accepted mentoring organizations published
    • March 12 Student application period opens
    • April 23 Accepted student proposals announced
    • May 14 Coding officially begins
    • Work Period Students work on their project with guidance from Mentors
    • June 11 Mentors and students can begin submitting Phase 1 evaluations
    • June 15 Phase 1 Evaluation deadline; Google begins issuing student payments
    • Work Period Students work on their project with guidance from Mentors
    • July 9 Mentors and students can begin submitting Phase 2 evaluations
    • Work Period Students continue working on their project with guidance from Mentors
    • July 13 Phase 2 Evaluation deadline
    • August 6 Students submit their final work product and their final mentor evaluation
    • August 14 Mentors submit final student evaluations
    • August 22 Final results of Google Summer of Code 2018 announced


How you will be evaluated if you are an accepted student

Students will be paid for their project only if:

  • 1st phase (May 14 - June 8, 2018): The student submits a pull request that...
    • Builds and passes the build bot (Travis CI)
    • Has, at minimum, the new functionality stubbed out
  • 2nd phase (June 18 - July 6, 2018): The student submits a pull request with...
    • Same as above, plus:
    • Code has appropriate Doxygen documentation
    • Has a stubbed out example/tutorial that builds/runs without errors
    • Observes the recommendations in "How to contribute", which include C++ style suggestions.
  • Final phase (July 16 - August 6, 2018):
    • A full pull request
      • Fulfilling all required objectives described in the corresponding project
      • Full Doxygen documentation
      • A tutorial if appropriate
      • A working example or demo
    • Create a video (e.g. on YouTube) that demonstrates your results

For students interested in applying

  1. For software development skills, please refer to the project description
  2. Take your time to learn about ViSP: watch some YouTube videos, read the tutorials, then download it and run the tutorials or examples.
  3. Ask to join the ViSP GSoC 2018 Forum List
    • Discuss projects below or other ideas with us between now and March
  4. In March, go to the GSoC site and sign up to be a student with ViSP
  5. Post the title of the project (from the list below or a new one if we have agreed) on the mailing list
    • Include name, email, age
    • Include how you think you are qualified to accomplish this project (skills, courses, relevant background)
    • Include country of origin, school you are enrolled in, Professor you work with (if any)
    • Include a projected timeline and milestones for the project
    • Specify which third-party libraries you plan to use
  6. If ViSP is accepted as a GSoC organisation this year and you’ve signed up for a project with us in March
    • We will assign students and projects to the appropriate mentors
    • Accepted students will be posted on the GSoC site in May (and we will also notify the accepted students ourselves).

2018 project ideas

List of potential mentors (pairing of projects to mentors will be done when Google decides the number of slots assigned to ViSP):

List of potential backup mentors:

Project #1: Augmented reality demonstration with ViSP and Unity

This project was proposed last year. The work done last year is available on GitHub. Since the results were not significant enough, we propose this project again. Compared to last year's proposal, where the student struggled to introduce the user interaction needed to initialise the blob-tracking algorithm, this year we will focus on AprilTag detection, which requires no user initialisation.

  • Brief description:

    • ViSP offers several methods to track and estimate the pose of the camera. Basic methods estimate the camera pose from 2D/3D point correspondences. 2D point coordinates are obtained by detecting and tracking fiducial markers, for instance by tracking blobs (corresponding tutorial) or detecting AprilTag corners (corresponding tutorial).
    • Advanced methods rely on knowledge of the CAD model of the object to track. The model-based tracker (see tutorial) makes it possible to track a markerless object and estimate the camera pose using multiple types of primitives (edges, texture, or both).
    • The following images illustrate the use of these localization methods in an augmented reality application. In the left image, the camera pose is estimated from the tracking of 4 blobs. This pose is then exploited to project a virtual Ninja into the image (middle image). In the right image, the model-based tracker is used to localize the castle, and a virtual car is then introduced into the scene. These results are obtained with the existing ViSP AR module based on Ogre3D.
      (Figures: original CAD model, ViSP CAD model, CAD model tracking.)
  • Getting started: Students interested in this project could

    • Become familiar with the work that was done last year
    • Use the plugin from last year's project to test image communication between C++ and Unity: read an image through the ViSP plugin and display it in Unity
    • Write a plugin, based on ViSP, to perform homogeneous-matrix operations from Unity. Since the final goal is to perform augmented reality based on homogeneous-matrix multiplications, a good starting point is to move a GameObject in the scene given the desired displacement and the object's current position. This task can be described by the following equation involving 3 homogeneous matrices: Mfinal = Mcurrent * Mtransform, where Mfinal contains the coordinates of the GameObject after the transformation, Mcurrent contains its coordinates before the transformation, and Mtransform is the displacement (expressed in the frame of the current GameObject position). Warning: Unity uses a left-handed coordinate system while ViSP uses a right-handed one; this has to be taken into account when converting a Unity vector/matrix to a vpHomogeneousMatrix in ViSP.
  • Expected results:

    • Adapt ViSP for Unity. We expect here a getting-started tutorial that shows how to use ViSP C++ code in Unity to do a matrix computation. The goal of the project is to perform augmented reality based on homogeneous-matrix multiplications and to adapt the result to Unity's left-handed coordinate system. A good starting point is to look at the Unity classes used to place objects in scenes, and at Unity's native plugin mechanism (first evaluation)
    • Provide interconversion methods between Unity's Texture2D and ViSP's image. This is already available in the repository of last year's GSoC project, but has to be integrated and well understood by this year's student (first evaluation)
    • Develop the functionality to apply the camera intrinsic parameters to the Unity scene so that the camera field of view matches the Unity scene
    • Using Unity's WebCamTexture and ViSP Apriltag detection capabilities create a demonstration and tutorial.
    • Create an augmented reality demonstration from Apriltag detection.
    • Extend to the markerless model-based tracker and run a complete AR demo on a mobile platform. Initialization could first be done by user interaction, then, if the device is powerful enough, replaced by an initial localization using ViSP's detection and localization tool based on OpenCV.
  • Application instructions: Students should specify which platform they will use for development (Windows, macOS, Linux) and which Unity version.

  • Knowledge prerequisites: C++, Unity (C#), and some knowledge of image processing/augmented reality.

  • Difficulty level: Medium

Project #2: Porting ViSP to Android devices

  • Brief description:

    • ViSP is already packaged for iOS. The community is asking for an Android version
    • The objective of this project is to provide the community with an SDK for Android and a set of tutorials for beginners
  • Getting started: Since ViSP's architecture is very similar to the one adopted by OpenCV, students interested in this project could

    • Follow all the tutorials proposed by OpenCV in order to become familiar with Android and the tools
    • Understand how the OpenCV Android SDK is built from the source code using an Android toolchain
  • Expected results:

    • All the tools and Python scripts, inspired from OpenCV, required to build ViSP for Android from source code (CMake toolchain file, Python script to build the SDK, ...)
    • If needed, adapt ViSP source code and fix potential issues to make it as compatible with Android as possible
    • A tutorial showing how to build ViSP Android SDK from source code.
    • A tutorial that shows how to use an Android live camera, convert images to ViSP vpImage data, and apply simple image-processing algorithms from the imgproc module
    • A tutorial that shows how to interact with the user to initialise an algorithm. A good example would be selecting a region of interest on the touch screen to start blob tracking, then displaying the tracking result (position of the centre of gravity, ROI, ...)
    • A tutorial showing how to detect one or more AprilTags using the vpDetectorAprilTag class and display the tag position in overlay
    • All the tutorials should be added to existing tutorials in a dedicated section called "ViSP for Android"
    • All other ideas are welcome...
  • Application instructions: Students should specify which device is used for development and which Android version it runs.

  • Knowledge prerequisites: C++, Android studio, Android NDK

  • Difficulty level: High, since ViSP developers are not familiar with Android development yet.
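For orientation, the SDK build described under "Expected results" would typically be driven by the CMake toolchain file shipped with the Android NDK, much like OpenCV's Android build. The invocation below is a hypothetical configuration fragment: the ANDROID_NDK_HOME path, ABI, and platform level are assumptions, and the exact options ViSP needs are part of the project work itself:

```shell
# Hypothetical out-of-source build of ViSP for Android.
# Assumes the Android NDK is installed and ANDROID_NDK_HOME points to it.
cmake -S visp -B build-android \
  -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK_HOME/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-24
cmake --build build-android
```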
