Commit c1baaf8
Merge branch 'adi' into master
Addalin committed Jan 28, 2020
2 parents 9cda91d + c479690 commit c1baaf8
Showing 3 changed files with 20 additions and 13 deletions.
2 changes: 1 addition & 1 deletion CameraNetwork/controller.py

```diff
@@ -1396,7 +1396,7 @@ def preprocess_array(
     correct_radiometric=True,
     ignore_date_extrinsic=False
 ):
-    """Apply preprocessing to the raw array:
+    """Apply pre-processing to the raw array:
     dark_image subtraction, normalization, vignetting, HDR...

     Args:
```
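The docstring's pre-processing steps (dark-image subtraction, vignetting correction, normalization) can be sketched as follows. This is a hedged illustration with assumed argument names, not the repository's actual `preprocess_array` (which also handles HDR and radiometric correction, elided here):

```python
import numpy as np

def preprocess_sketch(raw, dark_image=None, vignetting=None, normalize=True):
    """Illustrative pre-processing chain: dark-image subtraction,
    flat-field (vignetting) correction, then scaling to [0, 1]."""
    img = np.asarray(raw, dtype=float)
    if dark_image is not None:
        # Remove the sensor's dark offset; clip negatives to zero.
        img = np.clip(img - dark_image, 0.0, None)
    if vignetting is not None:
        # Divide by the per-pixel vignetting map (flat-field correction).
        img = img / vignetting
    if normalize and img.max() > 0:
        img = img / img.max()
    return img
```

The ordering matters: the dark offset must be removed before the multiplicative vignetting correction, since the offset is additive.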
22 changes: 13 additions & 9 deletions CameraNetwork/utils.py

```diff
@@ -606,15 +606,19 @@ def f(angles, calculated_directions, measured_directions):

 def find_camera_orientation(calculated_directions, measured_directions):
     """
-    Find the rotation of the camera based on the coordinates of a celestail object
-    The input is two sets. The first is x,y image coordinates of the object (taken
-    from several images). The second set is the azimuth/altitude coordinates of the
-    celestial object (in Horizontal coordinate system).
-    The output is the rotation matrix of the camera. The rotation matrix is converts
-    between world coordinates to camera coordinates, where the world coordinates
-    are centered in camera, z is in the zenith and the x-axis points to the North.
-    The inner calibration of the camera is given as a function that converts
-    between the image coordinates and the camera coordinates.
+    Find the rotation of the camera based on the coordinates of a celestial object.
+    The input includes two sets:
+    1. `calculated_directions` - the x,y image coordinates of the object (taken from several images).
+    2. `measured_directions` - the azimuth/altitude coordinates of the celestial object (in the Horizontal coordinate system).
+    The output is the rotation matrix of the camera.
+    The rotation matrix converts world coordinates to camera coordinates,
+    where the world coordinates are centered at the camera, z points to the zenith
+    and the x-axis points to the North.
+    The inner calibration of the camera is given as a function that
+    converts between the image coordinates and the camera coordinates.

     Args:
         calculated_directions (array like): The reference direction of the celestial
```
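When both sets are expressed as 3D unit direction vectors, the orientation estimate described in the docstring above can be sketched as an orthogonal-Procrustes (Kabsch) fit. `fit_rotation` and `azalt_to_direction` are illustrative names, not functions from the repository, which instead optimizes rotation angles numerically via the residual function `f` visible in the hunk header:

```python
import numpy as np

def azalt_to_direction(az_deg, alt_deg):
    """Unit direction in a world frame with x = North, z = zenith,
    azimuth measured from North toward East (y)."""
    az, alt = np.radians(az_deg), np.radians(alt_deg)
    return np.array([np.cos(alt) * np.cos(az),
                     np.cos(alt) * np.sin(az),
                     np.sin(alt)])

def fit_rotation(calculated_directions, measured_directions):
    """Least-squares rotation R with R @ calculated_i ≈ measured_i
    (Kabsch / orthogonal Procrustes via SVD). Inputs are Nx3 arrays."""
    A = np.array(calculated_directions, dtype=float)
    B = np.array(measured_directions, dtype=float)
    # Only directions matter, so normalize every row to a unit vector.
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd(B.T @ A)
    # Guard against reflections: force det(R) = +1.
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

The determinant guard is the standard Kabsch correction: without it, noisy or degenerate correspondences can yield an improper rotation (a reflection) instead of a valid camera orientation.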
9 changes: 6 additions & 3 deletions Image_pipeline.md
Which of the objects is being used in the final mask calculation of the image?

3. ***TODO*** Find a way to calculate the ROI automatically based on the grid projection?
### 2. Extrinsic calibration
This process is done according to the sun position and the sunshader.
The process should be applied on a sunny day with clear sky, and requires two steps:
1. [handle_sunshader_scan()](https://github.com/Addalin/cameranetwork/blob/4f6a0b01111725799e6796dbf206f624a99c231b/CameraNetwork/server.py#L1066-L1088)

Graphical Result:

![extrinsic](docs/source/images/snapshot_extrinsic.png)

- For a mathematical description, see pp. 49-50 in Amit's thesis, section: "Self-Calibration in a Camera Network".

### 3. Radiometric calibration:
To perform radiometric calibration with a sunphotometer,
For example:

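Radiometric calibration against a sunphotometer is often reduced to fitting a per-channel linear gain between camera graylevels and the sunphotometer's measured radiances. The sketch below assumes that simple linear model and is not the repository's procedure:

```python
import numpy as np

def fit_radiometric_gain(graylevels, radiances):
    """Least-squares gain g minimizing ||g * graylevels - radiances||,
    assuming a zero-offset linear camera response for one channel."""
    g = np.asarray(graylevels, dtype=float)
    r = np.asarray(radiances, dtype=float)
    # Closed-form solution of the 1-parameter least-squares problem.
    return float(g @ r) / float(g @ g)
```

In practice one such gain would be fitted per color channel, using samples taken at matching times and viewing directions.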
### 4. 3D grid and space curving:
The [geographic coordinate systems](https://en.wikipedia.org/wiki/Geographic_coordinate_system) that are used here are:
1. The ECEF (earth-centered, earth-fixed) frame is the common 3D grid, used for conveniently moving the point-of-view (the observing camera) around the grid according to each camera's location (latitude \phi, longitude \lambda, and X_ecef, Y_ecef, Z_ecef).
2. The NED (North East Down) grid (X, Y, Z) is used as the visualization and reconstruction grid.

See their definitions in the project [here](https://github.com/Addalin/cameranetwork/blob/c69dda2adc041dc2dc98660b34e57769213f23a9/CameraNetwork/gui/main.py#L1393-L1420).
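The ECEF/NED relation used by these two grids can be sketched with the standard WGS84 formulas. This is a minimal numpy illustration, not the project's implementation linked from this section:

```python
import numpy as np

# WGS84 ellipsoid constants
A = 6378137.0                # semi-major axis [m]
F = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Convert latitude/longitude/height to ECEF X, Y, Z [m]."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)  # prime-vertical radius
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def ecef_to_ned_matrix(lat_deg, lon_deg):
    """Rotation taking ECEF offsets to NED axes at the given origin."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon),  np.cos(lat)],
        [-np.sin(lon),                np.cos(lon),                0.0],
        [-np.cos(lat) * np.cos(lon), -np.cos(lat) * np.sin(lon), -np.sin(lat)],
    ])

def ecef_to_ned(p_ecef, origin_lat, origin_lon, origin_h=0.0):
    """Express an ECEF point in the NED frame anchored at the origin."""
    origin = geodetic_to_ecef(origin_lat, origin_lon, origin_h)
    return ecef_to_ned_matrix(origin_lat, origin_lon) @ (p_ecef - origin)
```

As a sanity check, a point directly above the NED origin should map to (0, 0, -h), since the third NED axis points down.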

***TODO*** Add an image of the relation between both coordinate systems, e.g.: https://en.wikipedia.org/wiki/Geographic_coordinate_system#/media/File:ECEF_ENU_Longitude_Latitude_relationships.svg

Several conversion processes are performed:

1. [ProjectGrid()](https://github.com/Addalin/cameranetwork/blob/fa7d2b2f29d5217cdc2b216ae55d147393e9db0d/CameraNetwork/image_utils.py#L615-L645) - projects the 3D grid of the volume of interest onto the image plane, using ecef2ned within [projectECEF()](https://github.com/Addalin/cameranetwork/blob/c69dda2adc041dc2dc98660b34e57769213f23a9/CameraNetwork/gui/main.py#L881-L933).
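Conceptually, a grid-projection step like the one above can be sketched with a simple pinhole model. Here `R`, `t`, and `K` (camera rotation, camera position, intrinsic matrix) are hypothetical placeholders; the repository's cameras may use a different (e.g. fisheye) model calibrated separately:

```python
import numpy as np

def project_points(points_world, R, t, K):
    """Project Nx3 world points to pixel coordinates with a pinhole camera:
    x_cam = R @ (x_world - t), then (u, v, 1) = K @ (x_cam / z_cam)."""
    pts = np.asarray(points_world, dtype=float)
    cam = (R @ (pts - t).T).T          # world frame -> camera frame
    in_front = cam[:, 2] > 0           # keep only points in front of the camera
    uvw = (K @ (cam[in_front].T / cam[in_front, 2])).T
    return uvw[:, :2], in_front
```

Returning the `in_front` mask alongside the pixels lets the caller drop grid voxels that fall behind the camera before any space-carving step.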
