drop figures module.
drop aplpy, matplotlib, numpy dependencies.
add tutorial ztf-figures.
update changelog.
troyraen committed Feb 11, 2024
1 parent 9594ca4 commit 6e65be0
Showing 10 changed files with 178 additions and 1,184 deletions.
10 changes: 9 additions & 1 deletion CHANGELOG.md
@@ -10,11 +10,20 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)

## \[Unreleased\]

### Added

- ZTF Figures Tutorial

### Changed

- update README.md to point to the new docs
- remove setup and requirements files that are no longer needed after switching away from Read The Docs

### Removed

- `figures` module (content moved to tutorial). This allowed the removal of the
  following explicit dependencies: `aplpy`, `matplotlib`, `numpy`.

## \[0.2.0\] - 2023-07-02

### Added
@@ -33,4 +42,3 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
### Fixed

- cleanup some issues flagged by Codacy

5 changes: 0 additions & 5 deletions docs/source/api/figures.rst

This file was deleted.

2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -21,6 +21,7 @@

tutorials/bigquery
tutorials/cloud-storage
tutorials/ztf-figures

.. toctree::
:caption: API Reference
@@ -29,7 +30,6 @@

api/auth
api/bigquery
api/figures
api/pubsub
api/utils

31 changes: 16 additions & 15 deletions docs/source/tutorials/bigquery.rst
@@ -1,3 +1,5 @@
.. _bigquery:

BigQuery Tutorial
==================

@@ -77,13 +79,13 @@ Its options are demonstrated below.
# Option 1: Get a single DataFrame of all results
lcs_df = pittgoogle.bigquery.query_objects(columns, objectIds=objectIds)
lightcurves_df = pittgoogle.bigquery.query_objects(columns, objectIds=objectIds)
# This will execute a dry run and tell you how much data will be processed.
# You will be asked to confirm before proceeding.
# In the future we'll skip this using
dry_run = False
lcs_df.sample(10)
lightcurves_df.sample(10)
# cleaned of duplicates
Congratulations! You've now retrieved your first data from the transient
@@ -101,8 +103,8 @@ common name in the table schema we looked at earlier, or you can use
fid_names = pittgoogle.utils.ztf_fid_names() # dict
print(fid_names)
lcs_df['filter'] = lcs_df['fid'].map(fid_names)
lcs_df.head()
lightcurves_df['filter'] = lightcurves_df['fid'].map(fid_names)
lightcurves_df.head()
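As a self-contained sketch of the mapping above (the ``fid`` values and filter names below are assumptions for illustration only; the real mapping comes from ``pittgoogle.utils.ztf_fid_names()``):

```python
import pandas as pd

# Hypothetical fid -> filter-name mapping, for illustration only;
# in practice, use pittgoogle.utils.ztf_fid_names().
fid_names = {1: "g", 2: "r", 3: "i"}

# Toy light-curve data standing in for a query_objects() result.
lightcurves_df = pd.DataFrame({"fid": [1, 2, 2, 3], "magpsf": [18.2, 18.0, 17.9, 18.5]})
lightcurves_df["filter"] = lightcurves_df["fid"].map(fid_names)
print(lightcurves_df["filter"].tolist())  # ['g', 'r', 'r', 'i']
```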
Queries can return large datasets. You may want to use a generator to
step through objects individually, and avoid loading the entire dataset
@@ -118,9 +120,9 @@ into memory at once. ``query_objects()`` can return one for you:
)
# cleaned of duplicates
for lc_df in objects:
print(f'\nobjectId: {lc_df.objectId}') # objectId in metadata
print(lc_df.sample(5))
for lightcurve_df in objects:
print(f'\nobjectId: {lightcurve_df.objectId}') # objectId in metadata
print(lightcurve_df.sample(5))
Each DataFrame contains data on a single object, and is indexed by
``candid``. The ``objectId`` is in the metadata.
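A minimal sketch of that shape (not pittgoogle's implementation; the candids, columns, and objectId below are invented for illustration):

```python
import pandas as pd

# Rows indexed by candid; the objectId rides along as a plain Python
# attribute, which is how the loop above reads it (lightcurve_df.objectId).
lightcurve_df = pd.DataFrame(
    {"jd": [2459000.5, 2459001.5], "magpsf": [18.2, 18.0]},
    index=pd.Index([111111, 111112], name="candid"),
)
lightcurve_df.objectId = "ZTF20acqgklx"  # metadata attribute, not a column

print(lightcurve_df.index.name, lightcurve_df.objectId)
```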
@@ -156,7 +158,7 @@ results:
for lcjson in jobj:
print(lcjson)
# lc_df = pd.read_json(lcjson) # read back to a df
# lightcurve_df = pd.read_json(lcjson) # read back to a df
Finally, ``query_objects()`` can return the raw query job object that it
gets from its API call using ``google.cloud.bigquery``'s ``query()``
@@ -184,9 +186,9 @@ method.
# pgb can cast to a DataFrame or json string
# this option also cleans the duplicates
lc_df = pittgoogle.bigquery.format_history_query_results(row=row)
print(f'\nobjectId: {lc_df.objectId}') # objectId in metadata
print(lc_df.head(1))
lightcurve_df = pittgoogle.bigquery.format_history_query_results(row=row)
print(f'\nobjectId: {lightcurve_df.objectId}') # objectId in metadata
print(lightcurve_df.head(1))
lcjson = pittgoogle.bigquery.format_history_query_results(row=row, format='json')
print('\n', lcjson)
@@ -195,15 +197,14 @@
Plot a lightcurve
^^^^^^^^^^^^^^^^^

The following DataFrame can be used with the code in :ref:`ztf figures` to plot the object's light curves.

.. code:: python
# Get an object's lightcurve DataFrame with the minimum required columns
columns = ['jd','fid','magpsf','sigmapsf','diffmaglim']
objectId = 'ZTF20acqgklx'
lc_df = pittgoogle.bigquery.query_objects(columns, objectIds=[objectId], dry_run=False)
# make the plot
pittgoogle.figures.plot_lightcurve(lc_df, objectId=objectId)
lightcurve_df = pittgoogle.bigquery.query_objects(columns, objectIds=[objectId], dry_run=False)
Cone search
~~~~~~~~~~~
29 changes: 9 additions & 20 deletions docs/source/tutorials/cloud-storage.rst
@@ -1,3 +1,5 @@
.. _cloud storage:

Cloud Storage Tutorial
==============================

@@ -65,13 +67,10 @@ Download alerts for a given ``objectId``
blob.download_to_filename(local_path)
print(f'Downloaded {local_path}')
Plot cutouts and lightcurves
Open a file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The functions in this section were adapted from
https://github.com/ZwickyTransientFacility/ztf-avro-alert/blob/master/notebooks/Filtering\_alerts.ipynb.

Open a file (see the previous section to download files)
Load to a dict:

.. code:: python
@@ -84,27 +83,17 @@
print(alert_dict.keys())
Plot cutouts
Load to a pandas DataFrame:

.. code:: python
pittgoogle.figures.plot_cutouts(alert_dict)
plt.show(block=False)
lightcurve_df = pittgoogle.utils.Cast.alert_dict_to_dataframe(alert_dict)
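A rough sketch of what such a cast does (alert structure trimmed to a few fields, with invented values; not pittgoogle's implementation):

```python
import pandas as pd

# A ZTF alert packages the triggering candidate plus its detection history
# (prv_candidates); casting to a DataFrame flattens them into rows.
alert_dict = {
    "objectId": "ZTF20acqgklx",  # example value, invented here
    "candidate": {"candid": 3, "jd": 2459002.5, "fid": 1, "magpsf": 17.9},
    "prv_candidates": [
        {"candid": 1, "jd": 2459000.5, "fid": 1, "magpsf": 18.2},
        {"candid": 2, "jd": 2459001.5, "fid": 2, "magpsf": 18.0},
    ],
}
rows = [alert_dict["candidate"]] + list(alert_dict["prv_candidates"])
lightcurve_df = pd.DataFrame(rows).set_index("candid").sort_values("jd")
print(lightcurve_df)
```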
Cast to a dataframe and plot lightcurves

.. code:: python
lc_df = pittgoogle.utils.Cast.alert_dict_to_dataframe(alert_dict)
pittgoogle.figures.plot_lightcurve(lc_df)
plt.show(block=False)
Plot everything together

.. code:: python
Plot light curves and cutouts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

pittgoogle.figures.plot_lightcurve_cutouts(alert_dict)
plt.show(block=False)
See :ref:`ztf figures`

Command line
------------
137 changes: 137 additions & 0 deletions docs/source/tutorials/ztf-figures.rst
@@ -0,0 +1,137 @@
.. _ztf figures:

ZTF Figures Tutorial
==============================

.. contents:: Table of Contents
:depth: 1
:local:

This tutorial demonstrates plotting ZTF cutouts and light curves.
It is based heavily on https://github.com/ZwickyTransientFacility/ztf-avro-alert/blob/master/notebooks/Filtering_alerts.ipynb.

Prerequisites
-------------

1. Load a ZTF alert to a dict or a pandas DataFrame. For examples, see:

- :ref:`cloud storage`
- :ref:`bigquery`

Imports
---------

.. code:: python
import gzip
import io
from typing import Optional
import aplpy
import matplotlib as mpl
import numpy as np
import pandas as pd
from astropy.io import fits
from astropy.time import Time
from matplotlib import pyplot as plt
import pittgoogle
Plot a Light Curve
------------------

.. code:: python
def plot_lightcurve(lightcurve_df: pd.DataFrame, days_ago: bool = True):
"""Plot the per-band light curve of a single ZTF object.
Adapted from:
https://github.com/ZwickyTransientFacility/ztf-avro-alert/blob/master/notebooks/Filtering_alerts.ipynb
Parameters
----------
lightcurve_df
Lightcurve history of a ZTF object. Must contain columns
['jd','fid','magpsf','sigmapsf','diffmaglim']
days_ago
If True, x-axis will be number of days in the past.
Else x-axis will be Julian date.
"""
filter_code = pittgoogle.utils.ztf_fid_names() # dict
filter_color = {1: "green", 2: "red", 3: "pink"}
# set the x-axis (time) details
if days_ago:
now = Time.now().jd
t = lightcurve_df.jd - now
xlabel = "Days Ago"
else:
t = lightcurve_df.jd
xlabel = "Time (JD)"
# plot lightcurves by band
for fid, color in filter_color.items():
# plot detections in this filter:
w = (lightcurve_df.fid == fid) & ~lightcurve_df.magpsf.isnull()
if np.sum(w):
label = f"{fid}: {filter_code[fid]}"
kwargs = {"fmt": ".", "color": color, "label": label}
plt.errorbar(t[w], lightcurve_df.loc[w, "magpsf"], lightcurve_df.loc[w, "sigmapsf"], **kwargs)
# plot nondetections in this filter
wnodet = (lightcurve_df.fid == fid) & lightcurve_df.magpsf.isnull()
if np.sum(wnodet):
plt.scatter(
t[wnodet],
lightcurve_df.loc[wnodet, "diffmaglim"],
marker="v",
color=color,
alpha=0.25,
)
plt.gca().invert_yaxis()
plt.xlabel(xlabel)
plt.ylabel("Magnitude")
plt.legend()
.. code:: python
plot_lightcurve(lightcurve_df)
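If you want to exercise ``plot_lightcurve`` without querying the cloud first, a synthetic DataFrame with the required columns will do (all values below are fabricated for the demo):

```python
import numpy as np
import pandas as pd

# Fabricated light curve with the columns plot_lightcurve requires:
# ['jd', 'fid', 'magpsf', 'sigmapsf', 'diffmaglim']
rng = np.random.default_rng(42)
n = 20
jd_now = 2460352.5  # an arbitrary "current" Julian date for the demo
lightcurve_df = pd.DataFrame(
    {
        "jd": jd_now - rng.uniform(0, 30, n),
        "fid": rng.choice([1, 2], n),
        "magpsf": rng.normal(18.0, 0.3, n),
        "sigmapsf": rng.uniform(0.05, 0.2, n),
        "diffmaglim": rng.uniform(19.5, 20.5, n),
    }
)
# mark a few epochs as nondetections (null magpsf, limiting magnitude only)
lightcurve_df.loc[:2, "magpsf"] = np.nan

# plot_lightcurve(lightcurve_df)  # defined in the previous section
```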
Plot Cutouts
------------

.. code:: python
def plot_stamp(stamp, fig=None, subplot=None, **kwargs):
"""Adapted from:
https://github.com/ZwickyTransientFacility/ztf-avro-alert/blob/master/notebooks/Filtering_alerts.ipynb
"""
with gzip.open(io.BytesIO(stamp), "rb") as f:
with fits.open(io.BytesIO(f.read())) as hdul:
if fig is None:
fig = plt.figure(figsize=(4, 4))
if subplot is None:
subplot = (1, 1, 1)
ffig = aplpy.FITSFigure(hdul[0], figure=fig, subplot=subplot, **kwargs)
ffig.show_grayscale(stretch="arcsinh")
return ffig
def plot_cutouts(alert_dict):
"""Adapted from:
https://github.com/ZwickyTransientFacility/ztf-avro-alert/blob/master/notebooks/Filtering_alerts.ipynb
"""
# fig, axes = plt.subplots(1,3, figsize=(12,4))
fig = plt.figure(figsize=(12, 4))
for i, cutout in enumerate(["Science", "Template", "Difference"]):
stamp = alert_dict["cutout{}".format(cutout)]["stampData"]
ffig = plot_stamp(stamp, fig=fig, subplot=(1, 3, i + 1))
ffig.set_title(cutout)
.. code:: python
plot_cutouts(alert_dict)
plt.show(block=False)
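To try ``plot_stamp`` without a real alert in hand, you can fabricate a stamp of the same packaging, a gzipped FITS image, as found in ``alert_dict["cutoutScience"]["stampData"]`` (the pixels and the 63x63 size below are made up for illustration):

```python
import gzip
import io

import numpy as np
from astropy.io import fits

# Synthetic stamp: random-noise FITS image, then gzip, mimicking the
# packaging of the stampData fields in a ZTF alert.
image = np.random.default_rng(0).normal(size=(63, 63)).astype("float32")
buf = io.BytesIO()
fits.PrimaryHDU(image).writeto(buf)
stamp = gzip.compress(buf.getvalue())

# plot_stamp(stamp)  # defined in the previous section
```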
2 changes: 1 addition & 1 deletion pittgoogle/__init__.py
Original file line number Diff line number Diff line change
@@ -9,7 +9,7 @@
except ImportError: # for Python<3.8
import importlib_metadata as metadata

from . import auth, bigquery, exceptions, figures, pubsub, utils
from . import auth, bigquery, exceptions, pubsub, utils

__version__ = metadata.version("pittgoogle-client")

