Open Live Stacker Manual

Artyom Beilis edited this page Apr 27, 2024 · 51 revisions

OpenLiveStacker is an application for Electronically Assisted Astronomy (EAA) that uses an external camera for imaging and performs live stacking. It runs on Android and in Linux environments (including the Windows Subsystem for Linux).

Starting

Installing OLS on Android

One option is to install it like any Android app from Google Play: simply search for "OpenLiveStacker" (without spaces) in the Play Store.

If you want the most up-to-date version, you can install the APK directly from the Releases section.

image

Starting OLS

Open the application; you'll be asked for geolocation access, which is useful for various astronomical computations.

Connect the camera to the device.

Note: Connecting the camera to your phone or tablet may require a USB to USB-C or USB to micro-USB adapter, depending on the device model. The Android device must support OTG, and OTG needs to be enabled under Settings.

Note: Some cameras may need external power. For example, SVBony's SV105 camera cable comes with an additional USB power input.

image

The following items are on the screen:

  • Start the application using one of the following drivers:
    • USB Video Class (UVC) driver - the standard protocol used by web cameras and cameras like the SVBony SV105
    • ASI ZWO camera driver
    • ToupTek camera driver
    • Gphoto2 camera driver (DSLR/Mirrorless)
    • Internal Android camera driver
    • Simulation driver - a video of M44 captured with a telescope of 400mm focal length and 3.75um pixel size, very useful for testing and demonstration purposes
  • Option: use internal phone storage or an externally mounted SD card
  • Option: use the UI in landscape mode regardless of the device orientation
  • Option: open the UI in an external browser rather than the internal webview
  • Information about the location of the saved data

Start the application with one of the camera drivers. You'll be asked for permission to access the camera (depending on your device) and then the main UI will open.

All the data is saved under Android/media/org.openlivestacker/files/OpenLiveStacker

Note: OpenLiveStacker uses a web interface for all its controls. If you check "Use External Browser", a browser with the address http://127.0.0.1:8080 will be opened instead of the app. It is still the same OLS interface.

Here is a screenshot of a live stacking session:

image

To finish, press the Android back button and then press "Close and Exit".

You can always return to the main UI by pressing "Live View".

Camera Controls

Once the main UI is open you will be prompted to open the camera from the dropdown list. After you open the camera you'll have access to configuration and other settings.

image

Once the camera is open you can configure the camera:

image

The simplest option is to select a stream format and start a video stream by pressing "Stream".

Stream format

A stream format consists of a data format, a resolution, and optional binning.

First is the data format:

  • UVC cameras usually support two formats: mjpeg and yuv2. mjpeg is a JPEG-compressed format that gives a higher frame rate, while yuv2 is an uncompressed format that better preserves original image quality; yuv2 is recommended for stacking.
  • ASI cameras support the following formats:
    • raw16 - the full depth 16 bit raw bayer pattern
    • mono16 - the full depth 16 bit raw mono image.
    • rgb24 - 8bit format that is converted to RGB at camera level
    • raw8 - 8 bit bayer pattern
    • mono8 - 8 bit mono image
  • ToupTek cameras support raw16, mono16 and rgb24

It is recommended to use the raw16 or mono16 format (according to camera type) to achieve the highest dynamic range.

Next is resolution. For example 1920x1080.

Note: for UVC cameras a smaller resolution means downscaling; for ASI and ToupTek cameras it means a smaller ROI, and the fraction of the full frame in use is indicated (e.g. roi=1/2).

The final option is binning (e.g. bin2) for cameras that support it. For example:

  • mjpeg:1920x1080 - full frame for SVBony sv105, mjpeg
  • yuv2:640x480 - downscale frame for SVBony sv105, non-compressed data
  • raw16:1304x976 - full frame for ASI224MC
  • raw16:652x488:roi=1/2 - small ROI for ASI224MC
  • raw16:652x488:bin2 - full frame for ASI224MC with x2 binning
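The examples above follow a simple pattern: format, resolution, and an optional modifier, separated by colons. The following sketch parses such strings for illustration only; the function name and returned structure are hypothetical and not part of OLS's API.

```python
# Sketch: parse the stream-format notation shown above (format:WxH[:modifier]).
# Purely illustrative; OLS parses these internally and exposes no such API.
def parse_stream_format(spec: str):
    parts = spec.split(":")
    fmt = parts[0]                                    # e.g. "raw16", "mjpeg", "yuv2"
    width, height = map(int, parts[1].split("x"))     # e.g. "652x488"
    modifier = parts[2] if len(parts) > 2 else None   # e.g. "bin2" or "roi=1/2"
    return {"format": fmt, "width": width, "height": height, "modifier": modifier}
```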

Camera Settings

There are two common configurations for all cameras:

  1. FPS limit - the maximal frame rate for the application. Note: some cameras can reach 100 or 200 frames per second. Such loads aren't something a handheld device can deal with, so the FPS limit is there to prevent excessive processing load on the device.
  2. "Auto Str." - auto stretch control that does automatic stretching of live images. It is applied as post processing for live stream and does not affect stacking.

There is a further list of controls which are specific to different cameras.

image

If a parameter has an automatic setting, such as Auto Exposure or Auto White Balance, you cannot set the value manually until you uncheck the checkbox widget. Once the checkbox is unchecked, you can change numerical parameters by updating the value and pressing "Enter" or "set".

Note: Some parameters are informational only. You cannot change the temperature of the camera by clicking the value on OLS ;)

Camera Profiles

Once you have configured your camera and your dark frames as desired, you can save the current configuration as a profile by opening Settings -> Profiles -> New.

image

You can quickly switch between various camera/darks configurations by selecting one of the saved profiles.

image

This allows you to both restore camera configurations to normal after experimenting with a new setting and to switch between profiles for different object types or scenarios.

Stacking

Once video is streaming the following controls are available:

image

  • Camera and application settings
  • Stacking controls and settings
  • Plate solving

Stacking Configuration

Once you open the stacking menu you'll see three methods: Deep Space Object, Planetary, and Calibration.

image

There are two common options for different methods:

  • "Save All Frames" - save all input images so you can do processing with offline tools
  • "Delay" - set a delay before starting the stacking operation, in case your phone is mounted on the telescope itself and pressing the screen would make the telescope shake for a few seconds.

Once you have configured your stacking settings - press the "Stack" button to go to Live Stacking Mode

Calibration Frames Stacking

When you prepare your calibration frames you need to provide a name so you know which frames you are stacking. Tip: If you include the general parameters of the frame in the name, for example darks_2s_gain200, it will be easier to know when and how to use it.

image

OLS supports Dark, Flat, and Dark-Flat types of frames.

  • Darks - correct the sensor's background noise for a given configuration (hot pixels, pattern noise, amp glow, etc). Capture them with the same exposure, gain, and other parameters you are going to capture images with, with the lid on the objective or just with the camera's dust cap closed. The correction is additive and temperature sensitive: for cooled cameras make sure you use the target temperature; for non-cooled cameras you may need to re-calibrate (retake) when the environment changes.
  • Flats - correct for optical path imperfections like vignetting or dust shadows. This is a multiplicative correction which depends on the optical configuration. Collect them with a short exposure: cover the objective with some diffusing material or put a bright screen over it. Make sure you obtain a non-saturated, somewhat dark-gray image. Remember that flats depend on your entire optical configuration, so even rotating the camera may require re-calibration.
  • Dark-Flats - darks which correct the flats themselves. These are taken with the same exposure/gain conditions as the flats but with the camera lid on.

How the correction works:

corrected_flats = flats - dark_flats
correcting_factor = max(corrected_flats) / corrected_flats
corrected_image = max(image - darks, 0) * correcting_factor
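The correction formulas above can be sketched in numpy. This is a minimal illustration of the arithmetic, not OLS's actual code; the function name and signature are hypothetical.

```python
import numpy as np

def calibrate(image, darks, flats=None, dark_flats=None):
    """Apply the dark/flat correction formulas above to a single frame (sketch)."""
    corrected = np.maximum(image - darks, 0.0)        # additive dark correction
    if flats is not None:
        # subtract dark-flats from flats, then normalize to the flat maximum
        corrected_flats = flats - (dark_flats if dark_flats is not None else 0.0)
        correcting_factor = corrected_flats.max() / corrected_flats
        corrected = corrected * correcting_factor     # multiplicative flat correction
    return corrected
```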

Flats calibration using the screen of your tablet/smartphone

To calibrate flats you need a uniform light source, and a tablet's screen can be one. So you can use the screen of the tablet or phone you are running OLS on as the flat light source.

  • Set Delay to a few seconds, so that you have time to put the screen on top of the telescope objective
  • Set the white-screen duration to the number of seconds you want to collect flats for.

When you press "Start" the screen turns white and you can quickly put it in front of the scope. After "Delay" seconds the collection starts, and it pauses automatically after the white-screen duration; the screen then stays white for another "Delay" seconds before returning to normal. Stacking is then paused, and the flat image can be reviewed and recaptured if needed.

So, for example, if you set Delay to 5 seconds and the white screen to 3 seconds, then overall you have 5+3+5 seconds of white screen, of which 3 seconds of data will be collected. Press Save after completion.

Deep Space Objects Stacking

image

Let's go over the parameters:

General:

Object Id - the code of the object, like M31 or NGC457. It also sets the RA/DE coordinates of the object. It is optional, but very important for Alt-Az stacking.

Extra Name - a name prefix given to the object you are stacking. It is not mandatory but helps identify the object. Note: if you specify an Object Id you can skip it, since the object id is added to the name as well.

Field Rotation Compensation Settings.

If you are using an alt-az mount you'll need to handle field rotation. OLS uses very fast and efficient Fast Fourier Transform registration algorithms; however, registration provides only shift information and does not measure rotation. Fortunately, field rotation is something that can be calculated easily and accurately, and as a good rule, if you know something in advance - use it.

So, to enable rotation compensation on Alt-Az mounts:

  • Select the "Derotate" checkbox
  • If you are using a Newtonian scope or a mirror diagonal that mirrors the image, also select "Derotate Mirror", since such an optical setup changes the direction of the rotation
  • An Object Id or known RA/DE coordinates must be provided
  • Geolocation Lat/Lon should be provided. If not configured automatically, you can always enter it manually under Settings -> General.
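Why is the object position and geolocation enough? Field rotation on an alt-az mount follows a standard closed-form rate; the sketch below shows that formula to illustrate why the rotation is predictable. This is the textbook formula, not necessarily OLS's internal implementation, and the function name is hypothetical.

```python
import math

def field_rotation_rate(lat_deg, az_deg, alt_deg):
    """Standard alt-az field rotation rate in degrees per hour.
    lat: site latitude, az: target azimuth, alt: target altitude.
    Illustrative only; OLS derives derotation from RA/DE and lat/lon."""
    k = 15.04  # sidereal rate, degrees per hour
    return (k * math.cos(math.radians(lat_deg)) * math.cos(math.radians(az_deg))
            / math.cos(math.radians(alt_deg)))
```

Note how the rate grows as the target climbs (the cos(alt) denominator shrinks), which is why derotation matters most near the zenith.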

Calibration Frames:

Darks, Flats and Dark-Flats - select the ones you prepared before from the drop-down list. Note: to use dark-flats you need to specify flats, for obvious reasons.

Misc parameters.

  • Exposure Mpl - a synthetic exposure multiplier: several adjacent images are added as if they were a single exposure, without registration. It is very useful when the maximal camera exposure is too limited; it improves the signal for stacking and reduces the overall load on the system.
  • Remove Gradient - apply automatic estimation and removal of a linear gradient in the source images. It helps when light pollution or nearby light sources create noticeable gradients in the FOV.
  • Remove Satellites - remove satellite trails. Basically, the brightest sample is removed from the stack at each pixel: if, for example, you collected 10 frames, only 9 out of 10 data points per pixel are used for the average and the maximal one is thrown away.
  • Save All Frames - save all input light frames under the debug directory so they can be used either for debugging or offline post-processing.
  • Delay (s) - set a delay of N seconds. It is useful when the smartphone or tablet is attached directly to the scope or mount: stacking starts N seconds after the button press, and additionally, when pause is pressed, the last frame is discarded to prevent motion blur entering the stack.
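The "Remove Satellites" averaging described above (drop each pixel's brightest sample, then average the rest) can be sketched in numpy. This is an illustration of the idea, not OLS's actual code.

```python
import numpy as np

def stack_drop_max(frames):
    """Average a stack per pixel, excluding each pixel's brightest sample.
    With n frames, n-1 samples per pixel contribute to the average."""
    stack = np.stack(frames).astype(float)          # shape: (n, H, W)
    total = stack.sum(axis=0) - stack.max(axis=0)   # drop the brightest sample
    return total / (stack.shape[0] - 1)
```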

Planetary stacking

In general, it consists of very basic pre-processing, lucky imaging using filters, and post-processing that improves details using deconvolution and sharpening.

Filters

Version 29 introduced support for image quality filters. There are 3 filter types plus two statistics settings:

image

  • Sharpness Percentile - keep a specific % of the sharpest frames. Useful for general removal of smeared images and for lucky imaging
  • Reg. Quality - registration quality score: keep the % of frames with the best registration score. Useful for solar/lunar imaging when significant wind leads to rolling-shutter-related deformations
  • Brightness Std - track average image brightness; if the current image brightness isn't within the specified N standard deviations, remove the frame. Useful to handle accidental clouds in the frame
  • Min Stat Size - collect statistics over the first N images before filters are applied
  • Drop First Frames - use the first "Min Stat Size" frames only to collect statistics. Only available for planetary stacking
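A percentile-based quality filter like the ones above can be sketched as follows. The sharpness metric here (variance of the gradient magnitude) is an assumption for illustration; the metric OLS actually uses may differ, and both function names are hypothetical.

```python
import numpy as np

def sharpness_score(frame):
    """A simple sharpness metric: variance of the image gradient magnitude.
    Illustrative; the actual metric used by OLS may differ."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.var(np.hypot(gx, gy)))

def sharpness_percentile_filter(frames, keep_percent=50.0):
    """Keep only the sharpest keep_percent% of frames (lucky imaging)."""
    scores = [sharpness_score(f) for f in frames]
    cutoff = np.percentile(scores, 100.0 - keep_percent)
    return [f for f, s in zip(frames, scores) if s >= cutoff]
```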

Live Stacking Controls

General Controls

There are several live stacking controls:

Top Left corner:

image

  • Settings - general settings

  • Plate Solving - run plate solving on stacked image

  • Pause/Resume - pause and resume stacking. Useful for a non-tracking mount, where you correct the position and restart stacking, or when you want to pause and review the stack without adding more images

  • Stop stacking and return to live view. Note: if there is more than 60s of stacked data that hasn't been saved, you'll be prompted to confirm stopping.

  • Save result - each time you press Save a new version is saved

    Note: two images are saved:

    • stacked/stacking_id_stacked_vN.jpeg - the image as you see it. N is the version
    • stacked/stacking_id_stacked_vN.tiff - the stacked 16 bit tiff, without white balance and stretch, for offline post-processing.
  • Stretch configuration that handles manual/auto stretch

Top right corner:

image

This is the live video preview. It opens a 4-times-downscaled live image in the top right corner. It is useful with manual tracking to see where the camera is actually pointing, or to spot any anomalies during the stacking session.

Bottom right

image

Statistics are shown as: Stacked Images / Failed due to registration / Skipped due to overload.

  • Failure due to registration may occur if the frame drifted too far or too fast in comparison to previous frames
  • If the framerate is high, frames may be skipped due to overload: the app can't handle the stacking fast enough, so frames are dropped

Stretch Controls

By default OLS does auto-stretching; however, sometimes you can get better results with manual controls. Unchecking "Auto" opens the manual stretch controls.

There are 3 stretch controls overlaid on a histogram:

image

  • Left 0 - the line that represents black on the histogram
  • Right 1 - the line that represents white on the histogram
  • Middle 1/2 - the line that represents the middle tones. It allows non-linear stretching of the data (gamma-like correction). Internally OpenLiveStacker uses the asinh curve asinh(ax)/asinh(a)

By dragging these lines you modify the stretch: tap/click and move a line to select a new position; on release the stretch is updated.
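The stretch described above, black/white points plus the asinh(ax)/asinh(a) curve, can be sketched in numpy. How the "middle" slider maps to the curve parameter a is an assumption; the rest follows the formula the manual states.

```python
import numpy as np

def stretch(image, black=0.0, white=1.0, a=10.0):
    """Clip to the black/white points, rescale to [0,1], then apply the
    asinh(a*x)/asinh(a) curve. The slider-to-'a' mapping is an assumption."""
    x = np.clip((image - black) / (white - black), 0.0, 1.0)  # linear rescale
    return np.arcsinh(a * x) / np.arcsinh(a)                  # non-linear mid-tone lift
```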

There are 5 types of zoom on the histogram to allow precise control (5 buttons on the bottom):

  • vis - visible (default): show the black and white lines at the left and right; the histogram is moved upon stretch change
  • all - show the whole histogram (full dynamic range); the lines are moved upon update
  • 0, 1/2, 1 - zoom around the black, mid and white range for precise adjustment

When "Auto" is selected these values are set automatically; when switching to manual mode, they start from the values that auto-stretching calculated before.

Planetary Stacking Controls

Planetary controls are opened by pressing the "Saturn" icon.

There are controls for Richardson-Lucy deconvolution and unsharp mask filters:

For deconvolution, Sigma is the point spread function size in pixels: basically, how "blurred" the stacked image is. The more blurred the image, the bigger Sigma should be; setting it to 0 disables the process. The number of iterations defines how hard the algorithm tries to improve the image. Note: higher values mean longer processing times.
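The Richardson-Lucy iteration with a Gaussian PSF of the given Sigma can be sketched as below. This is a minimal textbook version of the algorithm named above, not OLS's implementation; it assumes scipy is available for the Gaussian blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(image, sigma=2.0, iterations=10):
    """Richardson-Lucy deconvolution with a Gaussian PSF (sketch).
    sigma=0 would mean no blur to undo; more iterations try harder."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(iterations):
        blurred = gaussian_filter(estimate, sigma)
        ratio = image / np.maximum(blurred, 1e-7)  # avoid division by zero
        # A Gaussian PSF is symmetric, so the mirrored PSF equals the PSF
        estimate *= gaussian_filter(ratio, sigma)
    return estimate
```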

The unsharp mask filter has two basic parameters, Sigma and Strength, which control how strongly edges are sharpened.
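An unsharp mask with these two parameters can be sketched as follows. The parameter names mirror the UI controls; the exact formula OLS uses is an assumption, and scipy is assumed for the blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=1.5, strength=1.0):
    """Sharpen by adding back the difference from a Gaussian-blurred copy.
    Larger sigma targets coarser detail; larger strength sharpens harder."""
    blurred = gaussian_filter(image.astype(float), sigma)
    return image + strength * (image - blurred)
```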

image

Plate Solving

OpenLiveStacker provides integration with the ASTAP plate solver, which helps with navigation when a goto or manual mount isn't accurate enough to bring the expected target directly into the camera's field of view.

Setting up the plate solver on Android

While the OpenLiveStacker APK contains astap_cli, it does not contain the star database itself. You need to download a plate solving database to your device to make it useful.

Go to the configuration menu and press the ASTAP button. Select a suitable database and click download. It can take some time depending on your connection speed; consider using WiFi for large downloads.

Using plate solver

To use the plate solver you need to point the telescope close to the target you are looking for. Since most pointing methods aren't perfect, there is a high chance you'll be a little bit off. It may also be that you don't even know whether you are pointing at the correct location, since the object is very dim and invisible in non-stacked frames.

Once the telescope is pointing to a location in close proximity to the target open Plate Solving menu and configure it.

  • Vertical field of view in degrees. It can be entered manually or computed automatically from pixel size and focal length (updating the latter modifies the FOV value)
  • RA and DEC coordinates of the object. You can also enter an object identification, and if it is present in the database, RA and DE will be filled automatically.
  • Search radius in degrees around the target
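The automatic FOV computation mentioned above boils down to simple geometry. The sketch below shows the standard formula; exact rounding in OLS may differ, and the function name is hypothetical.

```python
import math

def vertical_fov_deg(pixel_size_um, n_pixels_vertical, focal_length_mm):
    """Vertical field of view in degrees from pixel size, sensor height
    in pixels, and focal length (standard formula, illustrative only)."""
    sensor_height_mm = pixel_size_um * 1e-3 * n_pixels_vertical
    return math.degrees(2 * math.atan(sensor_height_mm / (2 * focal_length_mm)))
```

For example, the 3.75um pixels and 400mm focal length of the simulation setup, with the 976-pixel vertical frame of the ASI224MC mentioned earlier, give roughly half a degree of vertical FOV.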

image

Once you press the Solve button, astap starts. If it succeeds, an image is shown with an arrow pointing to the target (possibly outside the FOV), the distance to the target in degrees, and the correction that needs to be made, in either Alt/Az or polar RA/DE terms.

image

If solving fails, an error message is shown. You can press the "Restart" button and try solving again after adjusting.

Camera Specific Notes:

DSLR/GPhoto2

To use your DSLR with OpenLiveStacker, it needs to be supported by libgphoto2 and to support trigger capture.

How to use:

  • Put the camera in manual mode, define the saving format - you can save in RAW or JPEG (but not both) - and select the stream you want to see.
  • Once you start the stream, OLS triggers an image capture, downloads the image, deletes it from the camera, and continues.
  • If your camera supports a viewfinder option, you can select it to prevent the mirror from moving.
  • If you want to preserve images on the SD card, select the appropriate target (if supported) and disable removing files.
  • Note: most controls are done on the camera itself, so if you change a major configuration on the camera that affects, for example, the output format, you need to restart the application.

Warnings:

  • DSLR cameras tend to have very large raw images; 24MP is typical. These can pose a significant challenge for Android tablets that don't have a large amount of memory. It is recommended to use binning or a small image size for live stacking to prevent memory overload.
  • Android may automatically suggest opening the camera as an external storage device - don't allow it. That would prevent OLS from accessing the camera, since it would be in use by another application, and you'd get failure messages from OLS.

Android Internal camera

  • Android cameras support JPEG and YUV formats, so in OpenLiveStacker you'll see jpeg and mono formats; mono is the "Y" component of YUV
  • Some cameras support the RAW16 format - it is the preferred format
  • There is a simulated mono16 that is achieved by downscaling the mono8 full frame without losing accuracy
  • The zoom control is effective for jpeg and mono formats. To change zoom for a raw format, select an appropriate ROI, for example roi=2/3 or roi=1/3
  • Make sure you use zoom/ROI to keep the eyepiece FOV circle out of frame, since registration may fail by attempting to register on the FOV circle rather than on the actual object
  • Since many Android cameras have a very limited maximal exposure, the signal may be too weak and registration may fail. Use the "Exposure Mpl" option in the stacking menu: a synthetic exposure multiplier where several adjacent images are added as if they were a single exposure

Warning: it is better not to use the full camera resolution. Use a smaller image size like 1920x1080 or 1280x720 for jpeg/mono, and use binning and ROI for raw images. These cameras usually don't support long exposures, and high resolutions can easily overload the computation pipeline of a handheld device.

Saved data

The data is saved into the data directory; on Android, to Android/media/org.openlivestacker/files/OpenLiveStacker. It contains the following:

  • stacked - stacked results: a jpeg after stretching and a 16 bit tiff before stretching and white balance.

    For example:

      data/stacked/Orion_Nebula_m42_20230313_140554_stacked.jpeg
      data/stacked/Orion_Nebula_m42_20230313_140554_stacked.tiff
    

    Where the first part is the object name (if given), then the catalogue id (if provided), and finally the collection date and time.

  • calibration - calibration frame data: floating point tiff files and their index for the application

  • debug - raw frames collected during stacking, either jpeg or tiffs

  • db - ASTAP database

If "Use External SD Card" is checked, the SD card will be used to store all the data.

Images

Developer's rig: AZ GTi, 60mm achromat, ASI ZWO 224MC and an Android tablet

rig