
AeroCube – Product Requirements Document (PRD)

Team Name: Your Fire Nation

Authors: Alex Thielk, Andrew Tran, Angel Ortega, Eric Swenson, Gustavo Cornejo

As specified by the lecture slides.

Table of Contents

  1. Introduction
  2. System Architecture Overview
  3. Requirements (Functional & Non-Functional)
  4. System Models: Contexts, Sequences, Behavioral/UML, State
  5. Appendices

Introduction

Problem Statement

Picosatellites require the positional information and attitude of neighboring picosatellites to maintain their formation. Ideally, this information should be calculated on-board with low latency and low resource consumption (namely power, which is particularly limited on picosatellites). The proposed solution is therefore to use the AeroCube and its on-board camera to capture photos and process them to identify other AeroCubes. Other AeroCubes will be identifiable via fiducial markers that encode each AeroCube's identification and allow its relative position and attitude to be calculated.

Fiducial markers will be placed on each face of the AeroCubes. These markers will encode the face and ID of the AeroCube they are on. That information, coupled with the rotation, skew, and size of the marker in the image, allows calculation of the AeroCube's quaternion (or more generally, the relative position and attitude of the AeroCube).
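
As a sketch of this detection step, the snippet below shows how marker detection and pose recovery might look using the ArUco API from OpenCV 3.x's contrib module (one plausible fiducial-marker implementation; the PRD does not fix a library). The camera intrinsics and dictionary choice are illustrative placeholders, not calibrated values.

```python
# Sketch: detect ArUco-style fiducial markers in an image and recover each
# marker's pose relative to the camera. Requires OpenCV with the aruco
# contrib module; the intrinsics below are placeholders, not calibrated values.
import cv2
import numpy as np

MARKER_SIZE_M = 0.03  # markers are at most 3 cm x 3 cm
CAMERA_MATRIX = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])  # placeholder intrinsics
DIST_COEFFS = np.zeros(5)                    # assume negligible lens distortion

def scan_image(image):
    """Return (marker_id, rotation_vec, translation_vec) per marker found."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
    corners, ids, _rejected = cv2.aruco.detectMarkers(image, dictionary)
    if ids is None:
        return []
    rvecs, tvecs = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, CAMERA_MATRIX, DIST_COEFFS)[:2]
    return list(zip(ids.flatten().tolist(), rvecs, tvecs))
```

Each rotation vector can be converted to a rotation matrix with `cv2.Rodrigues`, and from there to the quaternion described above.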

In addition, it is necessary to approach the problem with a highly modular design; we have little information about how our deliverables will be incorporated into the actual processes running on an AeroCube, or how those processes might be implemented. As a result, we need to provide effective interfaces for the use of the applications we write.

Technical Goals/Objectives

Base Goal

  • Use a web application and Flask server to simulate the AeroCube's other systems, namely the output of the camera module (a minimal endpoint sketch follows this list)
  • Pass in images to the controller, which then passes on those pictures and any additional parameters to the Image Processing (ImP) component
  • Scan images using the Image Processing component to identify fiducial markers (and consequently, other AeroCubes), and return information on their IDs, position, and attitude
  • Persist information from the Image Processing process by having the controller call the database controller, which will make calls to the database
  • Use the web application to access the database in order to query for and visualize information
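
Below is a minimal sketch of how the simulation server's scan endpoint might look, assuming Flask as stated above; the `controller` module and its `Controller.scan()` interface are hypothetical names used only for illustration.

```python
# Minimal sketch of the simulation server: the web app POSTs an image plus
# parameters, the server hands both to the controller, and the scan results
# come back as JSON. Controller and its scan() method are hypothetical names.
from flask import Flask, request, jsonify
from controller import Controller  # hypothetical module

app = Flask(__name__)

@app.route('/scan', methods=['POST'])
def scan():
    image_bytes = request.files['image'].read()  # image uploaded by the web app
    params = request.form.to_dict()              # any additional ImP parameters
    results = Controller().scan(image_bytes, params)
    return jsonify(results)                      # IDs, positions, attitudes

if __name__ == '__main__':
    app.run()
```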

Stretch Goals

  • Apply TensorFlow machine learning to ImP to improve results
  • Process video of AeroCubes in order to track their movement

Assumptions

  • The on-board camera will be able to take pictures with a 60-degree field of view
  • Fiducial markers may be at most 3 cm by 3 cm
  • Neighboring AeroCubes will be about 1 m away from the camera

Background

Satellites require maintenance for refueling, updates, and repairs. Astronauts undertake missions to service these satellites, either via manual retrieval from a shuttle or via remote communication from a shuttle. The former maintenance method is costly, limited to high-priority space objects in orbit, and does not use autonomous algorithms to relieve the astronaut of labor. Orbital communication from a shuttle to a satellite is often unreachable [6]. To address power consumption and maintenance concerns, a satellite must make optimal use of its power while computing attitude measurements of acceptable quality.

Swarms of picosatellites are now being prototyped to address these maintenance problems [6]. Researchers have proposed using swarms of picosatellites to track and tether space objects in a network [6]. Space tethering has produced solutions for momentum exchange, formation flying, propulsion, electromagnetic and kinematic generators, studying orbital plasma dynamics, and 'tidal stabilization' (i.e., gravity-gradient stabilization) [9]. Space tethering researchers have hypothesized the potential creation of a class of 'space elevators' for moving orbital objects across altitudes [9]. In 2007, a space tethering experiment, Multi-Application Survivable Tether (MAST), deployed a swarm of three tethered CubeSats in space to study tether survivability. Other notable attempts (successful, active, failed, or destroyed) include 'Vegan Flight Maid' for communication research and GeneSat-1 for biological research and technology advances [8]. The CubeSat deployment most applicable to creating an attitude control system with cameras, AAU CubeSat, was launched by Aalborg University in Denmark on June 30, 2003 [1]. The mission was only partially successful: once in orbit, the CubeSat was unable to reliably send and receive communications, and the battery failed after a month.

One approach to improving spacecraft autonomy lies in Computer Vision and Machine Learning algorithms. These algorithms have addressed the generic problems of object recognition, classification, localization, and 3D motion tracking. Computer Vision has already been utilized in various aerospace missions, including allowing a robotic servicer spacecraft to identify a satellite's features under a gamut of lighting conditions in the Robotic Refueling Mission (RRM) Vision Task [7].

Picosatellites have not traditionally used computer vision algorithms for (a) identifying neighboring picosatellites and (b) estimating the attitude and position of neighboring satellites. Because picosatellites are not retrieved from space for maintenance, power consumption is a constant concern, and a picosatellite must make optimal use of its power budget while still computing attitude measurements of acceptable quality. However, given today's computing power, previously impractical algorithms can now run at efficient speeds. Machine Learning algorithms, like Deep Neural Networks (DNNs), are now capable of running on embedded computing devices such as the NVIDIA Jetson TX1. Architectural advances in high-performance CUDA computing over merely the last three or four years have allowed for significant efficiency gains alongside smaller performance gains.

Before the advent of deep learning in Computer Vision, object identification algorithms had high error rates (~20-30%) [5]. Classical Computer Vision algorithms place the burden of constructing a feature set that correctly identifies objects on researchers, developers, and experts [1][3]. However, considering that a computer by default represents an image as a matrix of varying pixel intensities, the developers of a Computer Vision feature set are constrained to features that predict image content from the pixel intensities in a matrix [3]. Hand-crafted object classification algorithms using pixel-intensity features are not intuitive for uniquely shaped or structured objects.

Now, alternative approaches to predicting behavior from pixel intensities exist. Deep Learning, a subfield of Machine Learning, allows a computer to learn to construct its own computer vision features for classifying an object. With successive iterations, Deep Learning algorithms can correctly identify objects and perform classification tasks at human performance levels (~5% error rate) [2][4][5]. Adapting this Deep Neural Network model to picosatellites for optimal pose estimation and picosatellite identification has not yet been performed. This multivariate objective of measuring attitude and identifying picosatellites at low power consumption, with deep learning and computer vision algorithms on a small embedded computing device, is a large step toward solving complex problems on increasingly small and powerful devices.


[1] Alminde, Lars, Morten Bisgaard, Dennis Vinther, Tor Viscor, and Kasper Ostergard. "Educational value and lessons learned from the aau-cubesat project." In Recent Advances in Space Technologies, 2003. RAST'03. International Conference on. Proceedings of, pp. 57-62. IEEE, 2003.

[2] Amodei, Dario, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen et al. "Deep speech 2: End-to-end speech recognition in english and mandarin." arXiv preprint arXiv:1512.02595 (2015).

[3] Graduate Summer School: Deep Learning, Feature Learning "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning (Part 1 Slides1-68; Part 2 Slides 69-109)" http://helper.ipam.ucla.edu/publications/gss2012/gss2012_10595.pdf

[4] Han, Tony. "Around the World in 60 Days: Getting Deep Speech to Work in Mandarin." Around the World in 60 Days: Getting Deep Speech to Work in Mandarin. Baidu Inc, Feb. 2016. Web. 26 Oct. 2016. http://svail.github.io/mandarin/.

[5] Huang, Jen-Hsun, and Andrew Ng. "GTC China: AI, Deep Learning with Jen-Hsun Huang & Baidu's Andrew Ng." YouTube. NVIDIA, 21 Sept. 2016. Web. 26 Oct. 2016. http://www.youtube.com/watch?v=FPM3nmlaN00.

[6] Jasiobedski, Piotr, Michael Greenspan, and Gerhard Roth. Pose determination and tracking for autonomous satellite capture. National Research Council of Canada, 2001.

[7] "RRM Task : Launch Lock Removal and Vision." NASA. NASA, n.d. Web. 28 Oct. 2016.

[8] Wikipedia contributors, "List of CubeSats," Wikipedia, The Free Encyclopedia, https://en.wikipedia.org/w/index.php?title=List_of_CubeSats&oldid=740262725 (accessed September 20, 2016).

[9] Wikipedia contributors, "Space tether," Wikipedia, The Free Encyclopedia, https://en.wikipedia.org/w/index.php?title=Space_tether&oldid=731884778 (accessed July 28, 2016).

System Architecture Overview

High-Level Diagram

LucidChart Components Diagram (link)

Components

Image Processing (ImP) - given image input, responsible for "scanning" the image and returning information about identified AeroCubes and their relative position and attitude

Controller - instantiated whenever a scan occurs; responsible for passing parameters to ImP and for passing ImP's output back to its caller; also responsible for interacting with the database controller to persist the results (a sketch of these interactions follows the component descriptions)

Database Controller (DB Controller) - abstracts calls to the database for the controller

Server - accepts requests (e.g., images and parameters) from the web app and forwards them to the controller; serves in place of other components on the Jetson, which would be interacting with the controller in practice

Web App - serves as a human-friendly, graphical interface to the scanning capabilities of the ImP; queries the database directly instead of going through the database controller
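
As a rough illustration of these responsibilities, the sketch below shows one way the controller might orchestrate ImP and the database controller; the collaborator objects and all method names are hypothetical, since the PRD fixes only the responsibilities, not the interfaces.

```python
# Sketch: the controller is instantiated per scan, forwards parameters to ImP,
# persists the results through the DB controller, and returns the results to
# its caller. The imp/db_controller collaborators and their method names are
# hypothetical stand-ins for illustration.
class Controller:
    def __init__(self, imp, db_controller):
        self.imp = imp           # Image Processing (ImP) component
        self.db = db_controller  # Database Controller

    def scan(self, image, params):
        results = self.imp.scan(image, params)  # identify markers, compute pose
        self.db.save_scan_results(results)      # persist via the DB controller
        return results                          # pass output back to the caller
```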

User Interaction/Experience

Human user interaction occurs through the web application, which provides parameters or image(s) for ImP to run with, as well as configuration specifying the persistent storage target for the resulting information.

Requirements (Functional & Non-Functional)

Functional

  • Given an image, the program should be able to identify fiducial markers (placed on AeroCubes)
  • Having identified a fiducial marker, the program should be able to read the ID of the AeroCube encoded by the marker
  • Having identified a fiducial marker, the program should be able to calculate relative attitude and position
  • The web app interface will allow access to the image scanner, especially for testing and monitoring

Non-Functional

  • The program needs to be able to scan an image and compute information about identified fiducial markers in under 4 seconds (a test sketch follows this list)
  • Image processing should be able to run under a defined threshold of power consumption
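
A simple check of the timing requirement might look like the sketch below; `scan_image` and the fixture path are hypothetical stand-ins for the actual ImP entry point and test data.

```python
# Sketch: assert that a full scan of a sample image stays within the
# 4-second budget. scan_image and the fixture path are hypothetical.
import time
import cv2
from image_processing import scan_image  # hypothetical ImP entry point

def test_scan_under_four_seconds():
    image = cv2.imread('tests/fixtures/aerocube_1m.png')  # hypothetical fixture
    start = time.perf_counter()
    scan_image(image)
    elapsed = time.perf_counter() - start
    assert elapsed < 4.0, "scan took %.2f s, over the 4 s budget" % elapsed
```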

User Types:

  • Operator: a person who engages with the AeroCube Web App, either from a station or a shuttle.
  • External Programmer: a person with administrative access to the AeroCube device and the AeroCube Web App.

User Stories / Use Cases

  • User Story: As an operator, I want to see what items the acting AeroCube has scanned in an image so that I can check for the presence of other AeroCubes
    • Acceptance case: ImP should be able to accept image input and return information about identified AeroCubes and their relative position and attitude
  • User Story: As an operator, I want to view the results of a scan as well as the image input to the scan so that I can visually compare the input and output of the program
    • Acceptance case: interface on web app should include option to view original image and ImP output simultaneously (e.g., either on adjacent displays, or with information overlaid on image)
  • User Story: As an external program, I want to be able to authenticate and gain permissions from the AeroCube
    • Acceptance case: the external program can communicate with the AeroCube
  • User Story: As an external program, given authentication and/or other permissions, I want to be able to initiate a new scan so that I can learn information about scanned/nearby AeroCubes
    • Acceptance case: controller API should allow for other programs to call it with parameters
  • User Story: As an operator, I want to be able to log in to the web app to access its control and information-display UX
    • Acceptance case: the web app should not offer any UI/UX relating to the AeroCube information or controls unless a user is authenticated
  • User Story: As an operator, I want to use the web app to access information and modify settings/parameters so that I have an interface to the AeroCube's scanning capabilities
    • Acceptance case: the web app should allow an operator to modify controller settings/parameters
  • User Story: As an operator, I want to use the web app to access a backlog so that I can view more verbose, detailed output of the AeroCube's activity
    • Acceptance case: the web app should contain a way to view any logs generated by the programs on the AeroCube
  • User Story: As an operator, I want to send an image to the AeroCube via the web app so that I can scan images
    • Acceptance case: the web app should allow an operator to upload an image for ImP scanning
  • User Story: As an operator, I want to use the web app to view the database and visualize statistics through graphs and charts so that I can better understand the AeroCube scan results
    • Acceptance case: the web app should have an interface for the operator to view database information in a visually appealing way
  • User Story: As an operator, I want to store the results of a scan off the AeroCube so that I can access the data even when the AeroCube is hibernating
    • Acceptance case: after a scan, the results should be saved to a database off of the AeroCube

Prototyping Code/Tests

Metrics

  • Speed: pictures processed per second
  • Processing Time: time required to complete processing of a single image, including storing and returning results
  • Attitude Accuracy: degrees of error from the actual relative axis
  • Search Accuracy: percentage of AeroCubes in an image that are correctly found (see the sketch after this list)
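
The first two metrics fall out of simple timing; the accuracy metrics could be computed as in the sketch below, where quaternions are assumed to be unit-length (w, x, y, z) tuples and both helper names are hypothetical.

```python
# Sketch: compute Attitude Accuracy (angular error between two unit
# quaternions, in degrees) and Search Accuracy (percent of AeroCubes in an
# image that were correctly found). Helper names are hypothetical.
import math

def attitude_error_degrees(q_true, q_est):
    dot = abs(sum(a * b for a, b in zip(q_true, q_est)))  # |<q_true, q_est>|
    dot = min(1.0, dot)                                   # guard against rounding
    return math.degrees(2.0 * math.acos(dot))             # angle between the poses

def search_accuracy(found_ids, actual_ids):
    if not actual_ids:
        return 100.0
    hits = len(set(found_ids) & set(actual_ids))          # correctly found IDs
    return 100.0 * hits / len(actual_ids)
```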

System Models: Contexts, Sequences, Behavioral/UML, State

Appendices

Disambiguation

  • None of the components built for this product should directly control the attitude or positioning of either the acting AeroCube or any identified AeroCubes
  • None of the components built for this product should communicate with AeroCubes outside of the acting AeroCube

Technologies Employed

Glossary / Style Guide

A glossary of brief descriptions of important terms used in this project; it also doubles as a style guide. Further documentation (code comments, other wiki pages, or elsewhere) should adhere to any style precedents set in this section.

  • AeroCube - shorthand for the Aerospace Picosatellite; to be used in favor of PicoSat (a generic term for small satellites weighing 0.1 to 1 kg) or CubeSat (the standard form-factor picosatellite)
  • Attitude - the measure of a combination of position and orientation; use in favor of "pose estimation"
  • Fiducial Marker - object/marker in image used as a point of reference or a measure
  • JetPack L4T - (a.k.a. Jetson Development Pack for Linux for Tegra) bundle installer for software tools required for Jetson
  • Jetson TX1 - embedded system-on-module (SoM) useful for computer vision, deep learning (deployable on AeroCubes)
  • Pose Estimation - see "attitude"; not to be confused with position estimation
  • Position Estimation - not to be confused with pose estimation (see "attitude" or "pose estimation")
  • Scan - to process an image in order to obtain useful information from it, namely the presence of other AeroCubes and their pose/attitude
  • Tegra - mobile NVIDIA GPU

Meeting Documents