
GSoC 2014 Student Applications Rohit Patil: Work On Photometry Package


Student Information

Name: Rohit Patil

Email: rohit4change@yahoo.in

Telephone: +91 7719079147

Time Zone: India Time Zone (UTC+5:30)

GitHub Username: QuanTakeuchi

Skype: rohit.patil.92

Blog: http://quantakeuchi.wordpress.com/

University Information

University: Indian Institute of Space Science and Technology, Kerala, India

Major: Physical Sciences

Current Year and Expected Graduation date: Fourth year, Graduating by July 2014

Degree: B.Tech

Programming Experience

Programming Languages known: Python, C, and a bit of C++

Platform used: Linux

Editor: I prefer Emacs. I find it well suited to anything that involves typing: apart from programming, the AUCTeX package has made writing LaTeX documents much more convenient for me, so its rich editing features can be used for almost everything. The feature I like best about Emacs is its Dired mode.

I have mainly used programming for academic purposes. Some of the work I have done in C:

  • Making numerical models in atmospheric sciences
  • Solving various mathematical and physics problems (differential equations, particle in a box, etc.)
  • Basic image manipulation (e.g. image enhancement, rotation) on raw satellite images
  • Projecting satellite paths

Work done using Python:

  • Created PyRAF scripts to perform some tedious tasks in IRAF (template subtraction on FITS images)

  • Made scripts to calibrate a source with the help of Landolt field stars

I started using Python fairly recently (about 6 months ago), but for the past 3 months I have been using it extensively in my project work. Apart from the scripts mentioned above I haven't done many substantial projects with it; I have mostly been using it to solve programming puzzles on sites like CodeChef and Project Euler. I do, however, have a good working knowledge of the most common packages. What I like most about Python is its conciseness: list comprehensions, returning more than one value from a function, anonymous functions, and built-ins like map() make programming a lot easier.

I had no prior experience with Git or Cython. I made an account on GitHub only recently, but I have already learnt to do a few things, like cloning a repository and making pull requests. I will learn Cython as well before the project starts.

Project Proposal Information

**Title: Work on improving the photometry package**

Abstract

Photometry is a technique in astronomy concerned with measuring the brightness of an astronomical object's electromagnetic radiation. Whatever your area of research in astronomy, you will need to find the brightness of an object from time to time. There are already many tools available to perform photometry, the most popular being Stetson's DAOPHOT package, and these packages have plenty of features. So a question may arise: why create a new package to do the same thing when well-established packages are already in circulation? There are many reasons, but the most important one to me is that most of these packages perform only a limited number of operations, and once we obtain the photometry results we have to switch to a programming language anyway for further analysis (e.g. using the magnitudes to plot a light curve and find its slope, which is not possible in the conventional photometry tools available). A Python photometry package would be far more convenient: with Python's vast set of libraries and powerful scripting, we could work with the photometry results however we want. This is a strong motivation behind my choosing this project as well.

The work of creating such a package has already started in Astropy's photutils. The aim of my project would be to improve its functionality, add more features, and take this package one step closer to becoming a full-fledged photometry tool.

Detailed Description

The current development of photutils has reached the stage where the user can do aperture photometry. PSF photometry has also been implemented, but is currently under review. I would begin my project by improving the documentation of the aperture photometry. The examples currently given in the documentation are very simple: they demonstrate photometry on a 2D array with '1' stored in each cell. This does give the user a basic idea of how to proceed, but photometry is usually done on files in a specific format, e.g. FITS. In the end you do get an array from the FITS image, with the counts in each pixel stored in the corresponding cell of the array, but examples that show exactly how this is done using other Astropy packages like io.fits would be more helpful. So I would work on improving the documentation, along with including real-life examples.
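A minimal sketch of the kind of real-life example I have in mind is below: load the pixel data from a FITS file with astropy.io.fits and pass it to the aperture photometry routine. The file name 'field.fits' and the source positions are placeholders, and the photutils calls assume its current aperture API, whose names may differ between versions.

```python
import numpy as np
from astropy.io import fits
from photutils.aperture import CircularAperture, aperture_photometry

# Load the pixel data from a FITS image instead of a toy array of ones.
# 'field.fits' is a placeholder file name for the example.
data = fits.getdata('field.fits')

# Approximate (x, y) pixel positions of two sources; also placeholders.
positions = [(150.3, 200.7), (320.1, 95.4)]
apertures = CircularAperture(positions, r=5.0)

# Sum the counts inside each circular aperture and print the result table.
table = aperture_photometry(data, apertures)
print(table)
```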

Next I would work on including certain crucial features. Some of them are as follows:

  • Centering coordinates: Currently the aperture photometry task takes coordinates from the user, places apertures around them, and sums the flux. But the user may not be able to determine the exact centre of the source, so we can implement a centering algorithm that finds the centre of the source before the photometry is performed. Alternatively, PSF photometry can also help us determine the centre, so an option could be added to the PSF photometry function to do this.
  • Integration with NDData: NDData objects are like NumPy arrays, but they can store additional metadata such as uncertainties, masks, units, and a coordinate system. It would therefore be good if we could pass NDData objects to the photometry routines. There should also be an option to do the photometry without them, so the user can specify the uncertainties, WCS, etc. in the photometry function itself.
  • Support for non-pixel coordinates: If the above is implemented, a problem can arise. If the user gives world coordinates as input, the pixel locations corresponding to these points may lie off the image. Our program should take this into account and act appropriately, e.g. by returning NaN for those positions or masking them.
  • Compute statistics for aperture photometry: There should be an option to compute the sum, the mean, or the median of the fluxes inside the aperture.
  • Generating a curve of growth: If the user has done photometry with multiple apertures, it would be a good idea to plot flux vs. aperture radius. This shows how far the source extends, where the sky begins, and what final flux value should be associated with the source being analyzed (a rough sketch of this idea follows this list). We can also implement a function to convert the flux to magnitudes.
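The following is a rough sketch of the curve-of-growth idea, using a simulated Gaussian source on a flat sky; the source parameters and aperture radii are made up, and the photutils calls again assume its current aperture API.

```python
import numpy as np
import matplotlib.pyplot as plt
from photutils.aperture import CircularAperture, aperture_photometry

# Simulated image: a single Gaussian source on a flat sky background.
y, x = np.mgrid[0:101, 0:101]
data = 50.0 + 1000.0 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / (2 * 3.0 ** 2))

# Measure the enclosed flux for a series of aperture radii at the source centre.
radii = np.arange(1, 21)
fluxes = []
for r in radii:
    aperture = CircularAperture((50.0, 50.0), r=float(r))
    fluxes.append(aperture_photometry(data, aperture)['aperture_sum'][0])

# Flux vs. aperture radius: beyond a few source widths, the extra flux per
# radius step comes only from the flat sky level.
plt.plot(radii, fluxes, marker='o')
plt.xlabel('Aperture radius (pixels)')
plt.ylabel('Enclosed flux (counts)')
plt.show()
```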

With time and discussions with the mentors, a few more things could be added. Once this is done, we can start checking the accuracy of our photometry results against those obtained with other tools like DAOPHOT, SExtractor, Aperture Photometry Tool, etc. This would be done on real data. The results would be checked thoroughly, and if there are large deviations, the possible reasons for the errors would be investigated and fixed.

The basic work on PSF photometry is also done, but the documentation for it is not available yet. It is crucial to have the documentation ready before this can be released, so I would next work on understanding how the PSF photometry works and simultaneously write its documentation. I will try to include real-life examples from the start, so that the documentation will not need to be rewritten later.

After this is done, I will spend some time reviewing the work done up to that point. If the mentors are satisfied with the progress made, I would then start working on another feature: source detection. Several secondary pieces would be required for this: calculating the full width at half maximum (FWHM), estimating the sky level and its variance, ways to measure the sharpness and roundness of a source, etc. The allotted time period will most probably have finished by then, but I would continue to work on it even after the submissions and see to it that these things get finished.
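As a small illustration of one of these building blocks, here is a sketch of estimating the sky level and its scatter with sigma clipping using astropy.stats and turning that into a simple detection threshold; the simulated frame and the 5-sigma factor are just assumptions for the example.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats

# Simulated frame: flat sky with Gaussian noise plus one bright source.
np.random.seed(0)
data = np.random.normal(loc=100.0, scale=5.0, size=(101, 101))
y, x = np.mgrid[0:101, 0:101]
data += 500.0 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / (2 * 2.0 ** 2))

# Sigma clipping rejects the source pixels and estimates the sky level and scatter.
mean, median, std = sigma_clipped_stats(data, sigma=3.0)
threshold = median + 5.0 * std  # pixels above this become detection candidates
print("sky ~ %.1f, sigma ~ %.1f, threshold = %.1f" % (median, std, threshold))
```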

Milestones:

May 8 -- May 18 (1.5 weeks)

Familiarize myself with the Astropy team and with technical details like coding conventions, GitHub, etc. Read the documentation, especially the aperture photometry part. Research the improvements that can be made.

May 19 -- May 28 (1.5 weeks)

Work on improving the documentation of aperture photometry. Read the documentation of other photometry tools and identify what the current documentation lacks. Include real-life examples for each task.

May 29 -- June 11 (2 weeks)

Start the actual work by implementing simple features in the API, for example making plots of flux vs. error, writing photometry results to different file formats, centering the coordinates, and other similar tasks, to get the hang of the work.

June 12 -- June 25 (2 weeks)

Work on the PSF-related tasks. Add an optional argument to PSF photometry to perform coordinate tuning; add functions to add and subtract PSFs from an image so that PSF photometry can be performed iteratively; add an option to return the residual image after PSF photometry.
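To illustrate the "subtract a PSF and return the residual image" step, here is a hedged sketch that uses a 2D Gaussian from astropy.modeling as a stand-in PSF; the model parameters are made up, and the real photutils PSF machinery may look quite different.

```python
import numpy as np
from astropy.modeling.models import Gaussian2D

y, x = np.mgrid[0:101, 0:101]

# Pretend this is the observed image of one star on zero background.
true_star = Gaussian2D(amplitude=1000.0, x_mean=50.2, y_mean=49.8,
                       x_stddev=2.5, y_stddev=2.5)
data = true_star(x, y)

# A "fitted" PSF model (slightly off on purpose) evaluated on the same grid,
# then subtracted to form the residual image.
psf_model = Gaussian2D(amplitude=990.0, x_mean=50.0, y_mean=50.0,
                       x_stddev=2.5, y_stddev=2.5)
residual = data - psf_model(x, y)

# A clean subtraction leaves residuals small compared with the peak flux.
print("peak:", data.max(), "max |residual|:", np.abs(residual).max())
```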

June 23 -- June 27 (1 week)

Mid-term Evaluation

June 28 -- July 7 (1.5 weeks)

Start work on using NDData as input to the photometry routines. Also implement a way to input the uncertainties, WCS, masks, etc. without using NDData. Implement the curve-of-growth feature in aperture photometry as well.
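A minimal sketch of what the NDData route might look like is below; only the NDData construction uses the real astropy.nddata API, while the photometry call signatures in the comments are hypothetical and depend on how the integration is designed.

```python
import numpy as np
from astropy.nddata import NDData, StdDevUncertainty

data = np.ones((100, 100))
uncertainty = StdDevUncertainty(0.1 * np.ones((100, 100)))
mask = np.zeros((100, 100), dtype=bool)

# NDData bundles the pixel values with their uncertainty and mask.
ndd = NDData(data, uncertainty=uncertainty, mask=mask)

# Hypothetical usage once the integration exists:
#   aperture_photometry(ndd, apertures)
# and the plain-array alternative, passing the pieces separately:
#   aperture_photometry(data, apertures, error=0.1 * np.ones((100, 100)), mask=mask)
```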

July 8 -- July 25 (2.5 weeks)

By now I will have a pretty good idea of the PSF functionality as well, so I will be ready to start the documentation work for it. Complete that documentation. Also add the option to compute different statistics in the aperture photometry task.

July 26 -- August 1 (1 week)

Begin testing the accuracy of the current implementation against that of other tools. If the results are inaccurate, search for the possible reasons for the errors.

August 2 -- August 11 (1.5 weeks)

Buffer period. If everything has gone well so far, start working on the source detection feature.

August 12 -- August 17

Make final tests. Check for bugs and issues. Update the documentation for changes made.

Other Schedule Information

Until May 7th I will be busy with the project presentation and viva-voce of my final semester. After that I am completely free, with no other commitments. There is just a small possibility that we will have our job counselling on the 24th-26th, but that is for three days at most.
