This repository has been archived by the owner on Jun 15, 2020. It is now read-only.
Commit: Merge pull request #8 from nebbles/motion (Motion)

Showing 25 changed files with 1,695 additions and 170 deletions.
.idea
*.pyc
*/__pycache__

# Ignore build directory in docs dir
/docs/build
/ignore*
#
# Benedict Greenberg, March 2018
import numpy as np


class Calibration:
    """Stores the transformation relationship between original and target reference frames.

    The class should be instantiated with a minimum of 4 frame->frame calibration points.
    """

    def __init__(self, frame_a_points, frame_b_points, debug=False):
        # Validate the data passed into the class
        if type(frame_a_points) is not np.ndarray or type(frame_b_points) is not np.ndarray:
            raise ValueError("Calibration requires data as (n x 3) numpy arrays.")
        if np.shape(frame_a_points)[1] != 3 or np.shape(frame_b_points)[1] != 3:
            raise ValueError("Each point must have exactly three components.")
        if np.shape(frame_a_points)[0] < 4 or np.shape(frame_b_points)[0] < 4:
            raise ValueError("Arrays must contain at least 4 calibration points.")
        if np.shape(frame_a_points) != np.shape(frame_b_points):
            raise ValueError("Array sizes must match.")

        self.points_from = frame_a_points
        self.points_to = frame_b_points

        # Append a column of ones (homogeneous coordinates) so rotation and
        # translation can be captured together in a single 4x4 matrix.
        number_pts = np.shape(frame_a_points)[0]
        ones = np.ones((number_pts, 1))
        mat_a = np.column_stack([frame_a_points, ones])
        mat_b = np.column_stack([frame_b_points, ones])

        # Solve mat_a @ X = mat_b (and the reverse mapping) by least squares
        self.transformation = np.linalg.lstsq(mat_a, mat_b, rcond=None)[0]
        self.transformation_reversed = np.linalg.lstsq(mat_b, mat_a, rcond=None)[0]
        if debug:
            print(self.transformation)

    def transform(self, coordinate):
        """Transforms an x, y, z coordinate to the target reference frame."""
        if np.shape(coordinate) != (3,):
            raise ValueError("Point must be a (3,) vector.")
        point = np.append(coordinate, 1)  # homogeneous form
        return np.dot(point, self.transformation)

    def transform_reversed(self, coordinate):
        """Transforms an x, y, z coordinate from the target reference frame back to the original."""
        if np.shape(coordinate) != (3,):
            raise ValueError("Point must be a (3,) vector.")
        point = np.append(coordinate, 1)  # homogeneous form
        return np.dot(point, self.transformation_reversed)


if __name__ == '__main__':
    from numpy import genfromtxt

    # Example calibration data: columns 0-2 are frame A, columns 3-5 are frame B
    my_data = genfromtxt('cal_data_example.csv', delimiter=',')

    num_pts = 4

    A = my_data[0:num_pts, 0:3]
    B = my_data[0:num_pts, 3:]

    calibrate = Calibration(A, B)

    sample_in = my_data[0, 0:3]
    sample_out = calibrate.transform(sample_in)
    actual_out = my_data[0, 3:]
    print("Pass in coordinates: ", sample_in)
    print("Convert coordinates: ", sample_out)
    print("Correct coordinates: ", actual_out)
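As a quick sanity check, the fitting approach used by the class above can be exercised on synthetic data. The sketch below is self-contained and uses an assumed known transform instead of the CSV file: it fits the forward and reverse matrices by least squares exactly as the class does, then verifies that a round trip returns the original point.

```python
import numpy as np

rng = np.random.default_rng(42)

# A known ground-truth transform in the row-vector convention used above:
# rotation/scale block in the top-left, translation in the bottom row.
true_T = np.eye(4)
true_T[:3, :3] = rng.random((3, 3))
true_T[3, :3] = rng.random(3)

# Six synthetic calibration points in frame A and their images in frame B
frame_a = rng.random((6, 3))
a_h = np.column_stack([frame_a, np.ones(6)])  # homogeneous coordinates
b_h = a_h @ true_T
frame_b = b_h[:, :3]

# Fit forward and reverse transforms by least squares
mat_a = np.column_stack([frame_a, np.ones(6)])
mat_b = np.column_stack([frame_b, np.ones(6)])
forward = np.linalg.lstsq(mat_a, mat_b, rcond=None)[0]
reverse = np.linalg.lstsq(mat_b, mat_a, rcond=None)[0]

# Round trip A -> B -> A should recover the original coordinate
p = np.array([0.2, 0.5, 0.9])
q = np.append(p, 1) @ forward
back = q @ reverse
print(np.allclose(back[:3], p))
```

With noise-free synthetic points the fitted matrices invert each other, so the round trip reproduces the input to floating-point precision.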
***********
Calibration
***********

To use our reference frame conversion code, run the following command in the terminal from the directory you want the file copied to::

    svn export https://github.com/nebbles/DE3-ROB1-CHESS/trunk/calibration.py
Introduction
============

.. todo:: Intro to our calibration procedure...

Procedure
=========

.. todo:: The procedure as follows... move in contained area, detect markers

Reference Frames
================

Overview
--------
To convert camera coordinates, provided by OpenCV tracking tools and other methods, we need to maintain a relationship between multiple reference frames. The key relationship is the one relating the camera reference frame to the robot base reference frame used by the frankalib controller. This relationship is maintained in a 4-by-4 transformation matrix, constructed from the following general formula:

.. math::
   aX = b

This is modelled on the idea that we can take a coordinate in our main frame (e.g. the RGB-D camera provides ``u, v, w`` coordinates) and convert it to the equivalent, corresponding coordinate in the robot's reference frame (e.g. ``x, y, z``) so that the robot can move to that point in the camera's view. ``a`` represents our camera coordinate, and ``b`` represents the output of our function, which multiplies ``a`` by our transformation matrix ``X``; ``b`` is the same point expressed in the robot's reference frame.
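To make the homogeneous form concrete, here is a minimal sketch with hypothetical numbers (an example rotation and translation, not our real calibration data) showing how a single 4-by-4 matrix ``X`` applies both a rotation and a translation to a row vector ``a``:

```python
import numpy as np

# Hypothetical example: a 90 degree rotation about z plus a translation,
# packed into one 4x4 matrix in homogeneous coordinates.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
t = np.array([1.0, 2.0, 3.0])

# Row-vector convention to match aX = b: rotation block transposed,
# translation in the bottom row.
X = np.eye(4)
X[:3, :3] = R.T
X[3, :3] = t

a = np.array([1.0, 0.0, 0.0, 1.0])  # camera point (1, 0, 0) with appended 1
b = a @ X
print(b[:3])  # the same point after rotation and translation
```

Appending the 1 to the coordinate is what lets the translation enter through ordinary matrix multiplication; without it, a single matrix could only express rotations and scalings about the origin.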
.. todo:: add image of reference frames of both robot, camera, board

Creating the transformation matrix
----------------------------------
To create the transformation matrix, we construct a set of linear equations and solve it with a simple least squares algorithm, commonly used in linear regression. This algorithm minimises the sum of squared residuals across the set of linear equations.
This set of linear equations is constructed using *calibration points*. These points (usually a minimum of 4) are a set of known, corresponding coordinates in both the camera's reference frame and the robot's. They can be sourced automatically with a setup program, or manually. To collect them manually, the robot's end effector is moved to a point in the field of view of the camera, and the robot reports its position (``x, y, z``). The camera then detects the end effector in its field of view and reports the location in its own reference frame (``u, v, w``); these two readings describe the same physical point (they correspond) but in different reference frames. We collect a minimum of 4 calibration points, and ideally 8 or 10, because extra points increase the accuracy of the transformation matrix when there are small errors in the values reported by the camera or robot.
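The benefit of extra calibration points can be sketched numerically. The following is a hypothetical experiment with synthetic noise, not our recorded robot data: with noise-free correspondences ``lstsq`` recovers the transform exactly, and with noisy measurements the residual error generally shrinks as more points are supplied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth transform, bottom row fixed as in homogeneous form
T = np.vstack([rng.random((3, 4)), [0, 0, 0, 1]])

def fit_error(n, noise):
    """Fit a 4x4 transform from n calibration points; return the max entry error."""
    pts = rng.random((n, 3))
    A = np.column_stack([pts, np.ones(n)])
    B = A @ T
    B[:, :3] += rng.normal(scale=noise, size=(n, 3))  # simulated sensor noise
    X = np.linalg.lstsq(A, B, rcond=None)[0]
    return np.abs(X - T).max()

print("noise-free, 4 points:", fit_error(4, 0.0))
print("noisy, 4 points:     ", fit_error(4, 1e-3))
print("noisy, 20 points:    ", fit_error(20, 1e-3))
```

With exactly 4 points the system is square and any measurement error is absorbed directly into the solution; additional points let least squares average the noise out.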
.. todo:: add image of linear regression with caption

We now have our calibration equation (of *n* calibration points), and we want to solve for the unknowns in the transformation matrix, *X*.
.. math::
   \begin{bmatrix}
   u_1&v_1&w_1&1\\
   &\vdots&&\\
   u_n&v_n&w_n&1
   \end{bmatrix} X =
   \begin{bmatrix}
   x_1&y_1&z_1&1\\
   &\vdots&&\\
   x_n&y_n&z_n&1
   \end{bmatrix}

where :math:`m_{ij}` denotes an unknown entry of *X*,
.. math::
   X =\begin{bmatrix}
   m_{11}&m_{12}&m_{13}&m_{14}\\
   m_{21}&m_{22}&m_{23}&m_{24}\\
   m_{31}&m_{32}&m_{33}&m_{34}\\
   m_{41}&m_{42}&m_{43}&m_{44}
   \end{bmatrix}

In MATLAB, the function for solving this equation is simply ``X = a\b``, less commonly written as ``X = mldivide(a,b)``. `The mldivide() function`_ in MATLAB is a complex one that selects between many possible algorithms depending on its inputs. To get similar behaviour in Python, we use `numpy's lstsq function`_, whose similarities to and differences from ``mldivide`` have been discussed `{1}`_ `{2}`_; ultimately it provides the same functionality of returning a least squares solution to the equation. We use the function as in the example below::

    import numpy as np
    from numpy import random

    num_pts = 4

    # Random calibration points in homogeneous form (append a column of ones)
    A = random.rand(num_pts, 3)
    one = np.ones((num_pts, 1))
    A = np.column_stack([A, one])
    print("A", A)
    print("\n")

    # A known transformation matrix T to recover
    T = random.rand(3, 4)
    xrow = np.array([0, 0, 0, 1])
    T = np.vstack([T, xrow])
    print("T", T)
    print("\n")

    # Generate the corresponding points B = AT
    B = np.dot(A, T)
    print("B", B)
    print("\n")

    # Solve AX = B for X by least squares; x should recover T
    x = np.linalg.lstsq(A, B, rcond=None)[0]
    print("x", x)
.. _`The mldivide() function`: http://uk.mathworks.com/help/matlab/ref/mldivide.html
.. _`numpy's lstsq function`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html

.. _`{1}`: https://stackoverflow.com/questions/33559946/numpy-vs-mldivide-matlab-operator
.. _`{2}`: https://stackoverflow.com/questions/33614378/how-can-i-obtain-the-same-special-solutions-to-underdetermined-linear-systems?noredirect=1&lq=1
Implementation
--------------

.. automodule:: calibration
   :members:
   :undoc-members: