
Documentation edits.

1 parent e3f13fb commit bb6db60c57da7926a1b3ba32b92f6b014f535161 Matthew Hancher committed Dec 4, 2006
2 AUTHORS
@@ -17,4 +17,4 @@ Patrick Mihelich -- HDR
Aaron Rolett -- Mosaic
Randy Sargent -- Mosaic
Ian Saxton -- GPU
-Joey Gannon -- Camera
+Joey Gannon -- Camera
18 README
@@ -293,6 +293,8 @@ to:
vision-workbench-owner@lists.nasa.gov
+Please do NOT use this second list for technical inquiries, which
+should all be sent to the main vision-workbench list above.
B. Credits
@@ -306,16 +308,6 @@ collaboration with the Adaptive Control and Evolvable Systems (ACES)
group, and draws on their experience developing computer vision
techniques for autonomous vehicle control systems.
-The lead developers are:
-Laurence Edwards (IRG, Project Manager): Core, Math, Mosaic
-Michael Broxton (IRG): Core, Math, GPU, HDR, Camera, Cartography
-Matthew Hancher (ACES): Core, Math, InterestPoint, Mosaic
-
-Additional development by:
-Kerri Cahoy (IRG): testing
-Matthew Deans (IRG): InterestPoint
-Patrick Mihelich (IRG): HDR
-Aaron Rolett (IRG): Mosaic
-Randy Sargent (CMU/IRG): Mosaic
-Ian Saxton (IRG): GPU
-
+The lead developers of the Vision Workbench are Laurence Edwards
+(IRG, Project Manager), Michael Broxton (IRG), and Matthew Hancher
+(ACES). See the AUTHORS file for a complete list of developers.
16 docs/workbook/acknowledgements.tex
@@ -0,0 +1,16 @@
+\chapter*{Acknowledgements}
+
+Many people have contributed to making this first Vision Workbench
+open-source release a reality. Thanks to Terry Fong, Kalamanje
+KrishnaKumar, and Dave Korsmeyer in the Intelligent Systems Division
+at NASA Ames for supporting us in this research and allowing us to
+pursue our crazy dreams. Thanks to Larry Barone, Martha Del Alto,
+Robin Orans, Diana Cox, and everyone else who helped us navigate the
+NASA open source release process. Thanks to Randy Sargent, Matt
+Deans, Liam Pedersen, and the rest of the Intelligent Robotics Group
+at Ames for lending their incredible expertise and being our guinea
+pigs time and again. Thanks to our interns---Kerry Cahoy, Ian Saxton,
+Patrick Mihelich, Joey Gannon, and Aaron Rolett---for bringing many
+exciting features of the Vision Workbench into being. Finally, thanks
+to all our users, past, present and future, for making software
+development enjoyable and worthwhile.
13 docs/workbook/gettingstarted.tex
@@ -1,7 +1,7 @@
\chapter{Getting Started}\label{ch:gettingstarted}
This chapter describes how to set up and start using the Vision
-Workbench. It describes how to obtain the Vision Workbench and its
+Workbench. It explains how to obtain the Vision Workbench and its
prerequisite libraries, how to build and install it, and how to build
a simple example program. This chapter does {\it not} discuss how to
program using the Vision Workbench. If that's what you're looking for
@@ -10,7 +10,7 @@ \chapter{Getting Started}\label{ch:gettingstarted}
\section{Obtaining the Vision Workbench}
Most likely if you are reading this document then you already know
-where to obtain a copy of the Vision Workbench sources, if you haven't
+where to obtain a copy of the Vision Workbench sources if you haven't
obtained them already. However, if not, a link to the most up-to-date
distribution will always be available from the NASA Ames open-source
software website, at \verb#opensource.arc.nasa.gov#.
@@ -26,7 +26,7 @@ \section{Obtaining the Vision Workbench}
\begin{table}[t]\begin{centering}
\begin{tabular}{|l|l|l|} \hline
-Name & Used By & Source \\ \hline \hline
+Name & Used By & Source \\ \hline \hline
Boost & All & \verb#http://www.boost.org/# \\ \hline
PNG & FileIO (opt.) & \verb#http://www.libpng.org/# \\ \hline
JPEG & FileIO (opt.) & \verb#http://www.ijg.org/# \\ \hline
@@ -95,9 +95,10 @@ \section{Building the Vision Workbench}
online from the GnuWin32 project at \verb#gnuwin32.sourceforge.net#.
You will need to configure your project's include file and library
search paths appropriately. Also be sure to configure your project to
-define the preprocessor symbol NOMINMAX to disable the non-portable
-Windows definitions of \verb#min()# and \verb#max()# macros which
-interfere with the standard C++ library functions of the same names.
+define the preprocessor symbol \verb#NOMINMAX# to disable the
+non-portable Windows definitions of \verb#min()# and \verb#max()#
+macros, which interfere with the standard C++ library functions of the
+same names.
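A minimal illustration of the source-level equivalent (defining the symbol in the project's preprocessor settings, as described above, has the same effect):

```cpp
// Define NOMINMAX before any Windows header so the min()/max() macros
// are never created and the standard library names remain usable.
#define NOMINMAX
#include <windows.h>
#include <algorithm>

int smallest = std::min(3, 5);  // would not compile if the min() macro were defined
```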
\section{A Trivial Example Program}
BIN docs/workbook/images/mars_dem.png
BIN docs/workbook/images/moon_sphere.jpg
BIN docs/workbook/images/scout_hdr.jpg
BIN docs/workbook/images/scout_ldr.jpg
85 docs/workbook/introduction.tex
@@ -5,11 +5,11 @@ \chapter{Introduction}
vision library. The Vision Workbench was developed through a joint
effort of the Intelligent Robotics Group (IRG) and the Adaptive
Control and Evolvable Systems Group (ACES) within the Intelligent
-Systems Division at NASA's Ames Research Center. It is distributed
-under the NASA Open Source Agreement (NOSA) version 1.3, which has
-been certified by the Open Source Initiative (OSI). A copy of this
-agreement is included with every distribution of the Vision Workbench
-in a file called {\tt COPYING}.
+Systems Division at the NASA Ames Research Center in Moffett Field,
+California. It is distributed under the NASA Open Source Agreement
+(NOSA) version 1.3, which has been certified by the Open Source
+Initiative (OSI). A copy of this agreement is included with every
+distribution of the Vision Workbench in a file called {\tt COPYING}.
You can think of the Vision Workbench as a ``second-generation'' C/C++
image processing library. It draws on the authors' experiences over
@@ -18,16 +18,35 @@ \chapter{Introduction}
of image processing algorithms in C. We have tried to select and
improve upon the best features of each of these approaches to image
processing, always with an eye toward our particular range of NASA
-research applications.
+research applications. The Vision Workbench has been used within NASA
+for a wide range of image processing tasks, including alignment and
+stitching of panoramic images, high-dynamic-range imaging, texture
+analysis and recognition, lunar and planetary map generation, and the
+production of 3D models from stereo image pairs. A few examples of
+image data that has been processed with the Vision Workbench are shown
+in Figure~\ref{fig:examples}.
+
+\begin{figure}[p]
+\centering
+ \subfigure[]{\hbox{\hspace{0.25in}\includegraphics[width=5in]{images/mars_dem.png}\hspace{0.25in}\ \label{fig:examples.marsdem}}}
+ \\
+ \subfigure[]{\hbox{\hspace{0.25in}\includegraphics[width=2.75in]{images/scout_ldr.jpg}\hspace{0.25in}\ \label{fig:examples.scoutldr}}}
+ \hfil
+ \subfigure[]{\hbox{\hspace{0.25in}\includegraphics[width=2.75in]{images/scout_hdr.jpg}\hspace{0.25in}\ \label{fig:examples.scouthdr}}}
+ \\
+ \subfigure[]{\includegraphics[width=2.5in]{images/moon_sphere.jpg}\label{fig:examples.moon}}
+\caption{Examples of image data processed with the help of the Vision Workbench. (a) A Martian terrain map generated from
+stereo satellite imagery. (b,c) Original and high-dynamic-range image mosaics from a NASA field test. (d) A lunar
+base map generated from the Clementine data set. }
+\label{fig:examples}
+\end{figure}
The Vision Workbench was designed from the ground up to make it quick
-and easy to produce reasonably efficient implemenations of a wide
-range of image processing algorithms. In many cases code written
-using the Vision Workbench is significantly smaller and more readable
-than code written using more traditional approaches. We have found
-that users who begin using the Vision Workbench to solve their image
-processing problems never want to go back. At its core is a rich set
-of template-based image processing data types representing pixels,
+and easy to produce efficient implementations of a wide range of image
+processing algorithms. In many cases code written using the Vision
+Workbench is significantly smaller and more readable than code written
+using more traditional approaches. At its core is a rich set of
+template-based image processing data types representing pixels,
images, and operations on those images, as well as mathematical
entities like vectors and geometric transformations and image file
I/O. On top of this core it also provides a number of higher-level
@@ -44,33 +63,33 @@ \chapter{Introduction}
lacks a number of common features that the authors have simply not yet
had a need for, such as morphological operations. If you encounter one
of these holes while using the Vision Workbench please let us know: if
-it is an easy hole to fill we may be able to do so quickly, though we
-cannot promise anything. Finally, there are many application-level
-algorithms, such as face recognition, that have been implemented using
-other computer vision systems and are not currently provided by the
-Vision Workbench. If one of these meets your needs there is no
-compelling reason to re-implement it using the Vision Workbench
-instead. On the other hand, if no existing high-level tool solves
-your problem then you may well find that the Vision Workbench provides
-the most productive platform for developing something new.
+it is an easy hole to fill we may be able to do so quickly. Finally,
+there are many application-level algorithms, such as face recognition,
+that have been implemented using other computer vision systems and are
+not currently provided by the Vision Workbench. If one of these meets
+your needs there is no compelling reason to re-implement it using the
+Vision Workbench instead. On the other hand, if no existing
+high-level tool solves your problem then you may well find that the
+Vision Workbench provides the most productive platform for developing
+something new.
Since this is the first public release of the Vision Workbench, we
thought we should also provide you with some sense of the direction
the project is headed. It is being actively developed by a small but
growing team at the NASA Ames Research Center. A number of features
-are currently being developed internally and may or may not be
-released in the future, including improved mathematical optimization
-capabilities, a set of Python bindings, and stereo image processing
-tools. Due to peculiarities of the NASA open-source process we cannot
-provide snapshots of code that is under development and not yet
-approved for public release. If you have a specific use for features
-that are under development, or in general if you have suggestions or
-feature requests, please let us know. We cannot promise anything, but
-knowing our users' needs will help us prioritize our development and,
-in particular, our open-source release schedule.
+are currently being developed internally and may be released in the
+future, including improved mathematical optimization capabilities, a
+set of Python bindings, and stereo image processing tools. Due to
+peculiarities of the NASA open-source process we cannot provide
+snapshots of code that is under development and not yet approved for
+public release. If you have a specific use for features that are
+under development, or in general if you have suggestions or feature
+requests, please let us know. Knowing our users' needs will help us
+prioritize our development and, in particular, our open-source release
+schedule.
We hope that you enjoy using the Vision Workbench as much as we have
enjoyed developing it! If you have any questions, suggestions,
compliments or concerns, please let us know. Contact information is
-available at the bottom of the README file included with your
+available at the bottom of the \verb#README# file included with your
distribution.
61 docs/workbook/mosaic_module.tex
@@ -27,14 +27,18 @@ \section{{\tt ImageComposite} and Multi-Band Blending}\label{sec:imagecomposite}
type. In most cases you will want to use a pixel type that has an
alpha channel, and if you want to perform image blending then the
pixel type must be floating-point, so the most common pixel type is
-\verb#PixelRGBA<float32>#. You can then configure whether you would
-like to use multi-band blending to merge your images or if you would
-simply like them overlayed by using the \verb#set_draft_mode()#
-method. It takes a single argument which sould be \verb#true# if
-you simply want to overlay the images and \verb#fales# if you want
-to use the blender. Blending is a significantly more expensive
-operation. If your images do not overlap but are simply tiles then
-draft mode is probably what you want.
+\verb#PixelRGBA<float32>#. You can then configure whether you would
+like to use multi-band blending to merge your images or if you would
+simply like them overlayed by using the \verb#set_draft_mode()#
+method. It takes a single argument which should be \verb#true# if you
+simply want to overlay the images and \verb#false# if you want to use
+the blender. Blending is a significantly more expensive operation.
+If your images are simply tiles and do not overlap then draft mode is
+probably what you want. In blending mode you also have the option of
+asking the blender to attempt to fill in any missing
+(i.e. transparent) data in the composite using information from the
+neighboring pixels. You can enable or disable this behavior by
+calling the \verb#set_fill_holes()# method.
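The configuration just described can be sketched as follows. This is a hedged illustration: the header path and namespace are assumed from this release's source layout rather than stated in the text above.

```cpp
#include <vw/Image.h>
#include <vw/Mosaic/ImageComposite.h>

using namespace vw;

// Floating-point pixels with an alpha channel, as blending requires:
mosaic::ImageComposite<PixelRGBA<float32> > composite;

composite.set_draft_mode( false );  // false: multi-band blending; true: simple overlay
composite.set_fill_holes( true );   // optionally fill transparent holes from neighbors
```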
Once you have created the composite object, you add source images to
it using the \verb#insert()# method, which takes three arguments: the
@@ -48,18 +52,20 @@ \section{{\tt ImageComposite} and Multi-Band Blending}\label{sec:imagecomposite}
want to shift an image by a fractional amount you will first need to
transform it accordingly. In most cases you will need to
pre-transform your source images anyway, so this applies no extra
-cost.
-
-Once you have added all your images, be sure to call the
-\verb#ImageComposite#'s \verb#prepare()# method. This method takes
-no arguments, but it does two things. First, it computes the
-overall bounding box of the images that you have supplied, and
-shifts the coordinate system so that the minimum pixel location
-is $(0,0)$ as usual. Second, if multi-band blending is enabled,
-it generates a series of mask images that are used by the blender.
-Currently these are saved as files in the current working directory.
-This is admittedly inconvenient behavior and will be changed in a
-future release.
+cost.
+
+Once you have added all your images, be sure to call the
+\verb#ImageComposite#'s \verb#prepare()# method. This method takes no
+arguments, but it does two things. First, it computes the overall
+bounding box of the images that you have supplied, and shifts the
+coordinate system so that the minimum pixel location is $(0,0)$ as
+usual. (For advanced users, if you prefer to use a different overall
+bounding box you may compute it yourself and pass it as an optional
+\verb#BBox2i# argument to the \verb#prepare()# method.) Second, if
+multi-band blending is enabled, it generates a series of mask images
+that are used by the blender. Currently these are saved as files in
+the current working directory. This is admittedly inconvenient
+behavior and will be changed in a future release.
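Putting the pieces above together, a typical sequence might look like the sketch below. The filenames and offsets are illustrative, as is the assumption that \verb#insert()# takes the image followed by its integer x and y offsets.

```cpp
ImageView<PixelRGBA<float32> > left, right;
read_image( left, "left.png" );      // illustrative filenames
read_image( right, "right.png" );

mosaic::ImageComposite<PixelRGBA<float32> > composite;
composite.insert( left, 0, 0 );      // image, then integer x and y offsets
composite.insert( right, 900, 0 );   // an overlapping neighbor

composite.prepare();                 // shift origin to (0,0), generate blender masks
// Advanced users may supply their own bounding box instead:
//   composite.prepare( BBox2i(0,0,1924,1080) );

write_image( "mosaic.png", composite );
```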
Now that you've assembled and prepared your composite you can use
it just like an ordinary image, except that per-pixel access is
@@ -108,8 +114,8 @@ \section{{\tt ImageComposite} and Multi-Band Blending}\label{sec:imagecomposite}
\subfigure[Draft mode (simple overlay)]{\includegraphics[width=6.5in]{images/kiabab-draft.jpg}\label{fig:kiabab.draft}}
\\
\subfigure[Multi-band blending]{\includegraphics[width=6.5in]{images/kiabab-blend.jpg}\label{fig:kiabab.blend}}
-\caption{A twelve-image mosaic composited using an {\tt ImageMosaic} object, first in
-draft mode (a) and then musing multi-band blending (b).}
+\caption{A twelve-image mosaic composited using an {\tt ImageMosaic} object, first (a) in
+draft mode and then (b) using multi-band blending.}
\label{fig:blend.kiabab}
\end{figure}
@@ -135,7 +141,7 @@ \section{{\tt ImageQuadTreeGenerator}}\label{sec:quadtreegenerator}
on disk in a hierarchical manner, making it easy to instantly access any
tile at any resolution.
-Like most everything else in the Vision Workbench, a \verb#ImageQuadTreeGenerator#
+Like many things in the Vision Workbench, an \verb#ImageQuadTreeGenerator#
is templatized on its pixel type. The constructor takes two arguments,
the pathname of the tree to be created on disk and the source image.
You can then use several member functions to configure the quad-tree
@@ -210,10 +216,11 @@ \section{{\tt ImageQuadTreeGenerator}}\label{sec:quadtreegenerator}
tile the entire image without overlap. Finally, the last two values
describe the width and height of the entire source image in pixels.
-This file format is obviously quite arbitrary. In a future release
-we hope to provide hooks to allow the user to specify their own
-meta-data formats, and we may change the default format to something
-more flexible.
+This file format is obviously quite arbitrary, and was designed to
+make it easy to write scripts to manipulate the quadtree data later.
+If you prefer, you can subclass the quadtree generator and overload
+the \verb#write_meta_file()# function to generate metadata of a
+different sort.
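That subclassing approach might be sketched as below. The argument type of \verb#write_meta_file()# is hypothetical here; check the header for the actual virtual signature.

```cpp
#include <vw/Mosaic/ImageQuadTreeGenerator.h>

template <class PixelT>
class XmlQuadTreeGenerator : public vw::mosaic::ImageQuadTreeGenerator<PixelT> {
  typedef vw::mosaic::ImageQuadTreeGenerator<PixelT> base_type;
public:
  // The constructor takes the on-disk tree pathname and the source image:
  XmlQuadTreeGenerator( std::string const& tree_name,
                        vw::ImageViewRef<PixelT> const& source )
    : base_type( tree_name, source ) {}
protected:
  // Overloaded to emit metadata of a different sort (argument type assumed):
  virtual void write_meta_file( typename base_type::PatchInfo const& info ) const {
    // ... write info in whatever format your scripts expect ...
  }
};
```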
\begin{figure}[t]
\centering
5 docs/workbook/workbook.tex
@@ -27,15 +27,16 @@
\title{{\Huge The Vision Workbook:}\\A User's Guide to the\\NASA Vision Workbench v1.0}
\author{
Matthew D.~Hancher\\
-Michael Broxton\\
+Michael J.~Broxton\\
Laurence J.~Edwards\\
\\
Intelligent Systems Division\\
NASA Ames Research Center}
\begin{document}
\maketitle
-
+\include{acknowledgements}
+\tableofcontents
\include{introduction}
\include{gettingstarted}
\include{workingwithimages}
2 src/vw/Cartography.h
@@ -29,7 +29,7 @@
#define __VW_CARTOGRAPHY_H__
#include <vw/Cartography/Datum.h>
-#include <vw/Cartography/Projection.h>
+//#include <vw/Cartography/Projection.h>
#include <vw/Cartography/GeoReference.h>
#include <vw/Cartography/GeoTransform.h>
#include <vw/Cartography/DiskImageResourceGDAL.h>
