The Code (API)

pacman

s00_table

class pacman.s00_table.MetaClass[source]

Bases: object

A class that contains all the metadata of the analysis.

pacman.s00_table.run00(eventlabel: str, pcf_path: Optional[Path] = PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/pacmandocs/checkouts/stable/docs/source'))[source]

This function does the initial setup of the analysis, including creating a table with information on the observations. This table will be saved into ‘filelist.txt’.

Steps:

    1. Creates a MetaData object

    2. Creates a new run directory with the following form, e.g.: ./run/run_2021-01-01_12-34-56_eventname/

    3. Copies the control file (obs_par.pcf) and the fit parameters file (fit_par.txt) into the new directory

    4. Reads in all fits files and creates a table which will be saved in filelist.txt

    5. Saves the metadata into a file called something like ./run/run_2021-01-01_12-34-56_eventname/WFC3_eventname_Meta_Save.dat

The information listed in filelist.txt is:

  • filenames: The name of the observational file (the file will end with .ima)

  • instr: The specific filter or grism used in this observation (taken from the header)

  • ivisit: The visit number when the observation was taken (will be calculated in s00)

  • iorbit: The orbit number when the observation was taken (will be calculated in s00)

  • t_mjd: Mid exposure time (exposure start and end is taken from the header)

  • t_visit: Time elapsed since the first exposure in the visit

  • t_orbit: Time elapsed since the first exposure in the orbit

  • scan: Scan direction (0: forward - lower flux, 1: reverse - higher flux, -1: postarg2=0)

  • exp: Exposure time

Note

We use the following approach to determine the visit and orbit numbers (see the sketch after this list):

  • if two exposures are not in the same orbit and are more than an orbital period apart -> not subsequent orbits but a new visit

  • if two exposures are more than 10 min apart but less than an orbital period apart -> subsequent orbits in the same visit

  • else: two exposures are less than 10 min apart -> same orbit and same visit
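
A minimal sketch of that bookkeeping (the function name, the 96-minute orbital period, and the reset of the orbit counter at each new visit are assumptions, not the pipeline's exact implementation):

import numpy as np

def assign_visits_orbits(t_mjd, orbit_period_d=96.0 / 60.0 / 24.0, gap_d=10.0 / 60.0 / 24.0):
    """Assign visit and orbit indices from time-sorted exposure start times (MJD)."""
    ivisit = np.zeros(len(t_mjd), dtype=int)
    iorbit = np.zeros(len(t_mjd), dtype=int)
    for i in range(1, len(t_mjd)):
        dt = t_mjd[i] - t_mjd[i - 1]
        if dt > orbit_period_d:           # more than an orbital period apart -> new visit
            ivisit[i] = ivisit[i - 1] + 1
            iorbit[i] = 0                 # assumption: orbit counter restarts per visit
        elif dt > gap_d:                  # 10 min < dt < orbital period -> next orbit, same visit
            ivisit[i] = ivisit[i - 1]
            iorbit[i] = iorbit[i - 1] + 1
        else:                             # less than 10 min apart -> same orbit and visit
            ivisit[i] = ivisit[i - 1]
            iorbit[i] = iorbit[i - 1]
    return ivisit, iorbit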

Note

We use the following approach to determine the scan direction (see the sketch after this list):

  • if postarg2 < 0 -> scans[i] = 1 -> reverse scan

  • if postarg2 == 0 -> scans[i] = -1 -> no scan direction given

  • else -> scans[i] = 0 -> forward scan
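
The same rule written as a tiny sketch (the helper name is hypothetical; postarg2 is the POSTARG2 value from the header):

def scan_direction(postarg2):
    """Map the POSTARG2 header value to the scan flag stored in filelist.txt."""
    if postarg2 < 0:
        return 1    # reverse scan (higher flux)
    elif postarg2 == 0:
        return -1   # no scan direction given
    else:
        return 0    # forward scan (lower flux)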

Parameters:
eventlabel: str

The label given to the event in the run script. Will determine the name of the run directory

Returns:
meta

Meta object with all the meta data stored in s00

s01_horizons

pacman.s01_horizons.run01(eventlabel, workdir: Path, meta=None)[source]

This function downloads the location of HST during the observations.

Warning

This step needs an internet connection!

Parameters:
eventlabel: str

the label given to the event in the run script. Will determine the name of the run directory

workdir: str

the name of the work directory.

meta

the name of the metadata file

Returns:
meta

meta object with all the meta data stored in s00

s02_barycorr

pacman.s02_barycorr.run02(eventlabel: str, workdir: Path, meta=None)[source]

Performs the barycentric correction of the observation times

  • performs the barycentric correction based on the t_mjd in filelist.txt.

  • Adds another column to filelist.txt called t_bjd

  • Plots will be saved in ./run/run_2021-01-01_12-34-56_eventname/ancil/horizons

Parameters:
eventlabel: str

the label given to the event in the run script. Will determine the name of the run directory

workdir: str

the name of the work directory.

meta

the name of the metadata file

Returns:
meta

meta object with all the meta data stored in s01

Notes

History:

Written by Sebastian Zieba December 2021

s03_refspectra

pacman.s03_refspectra.run03(eventlabel: str, workdir: Path, meta=None)[source]

Retrieves the bandpass (G102 or G141) and the stellar spectrum and takes the product to create a reference spectrum.

Options for the stellar model:

  • Blackbody

  • k93models

  • ck04models

  • phoenix

The last three stellar models are retrieved from https://archive.stsci.edu/hlsps/reference-atlases/cdbs/grid/
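
A minimal sketch of that product, assuming both curves are simply interpolated onto the bandpass wavelength grid (the function name is hypothetical; the real step also smooths the stellar model and writes the result into the run directory):

import numpy as np

def make_refspec(bp_wvl, bp_val, star_wvl, star_flux):
    """Multiply the grism bandpass by the stellar model on a common wavelength grid."""
    # interpolate the stellar spectrum onto the (sorted) bandpass wavelength grid
    star_on_bp = np.interp(bp_wvl, star_wvl, star_flux)
    ref_flux = bp_val * star_on_bp
    return bp_wvl, ref_flux / ref_flux.max()   # normalized reference spectrum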

Parameters:
eventlabel: str

The label given to the event in the run script. Will determine the name of the run directory

workdir: str

The name of the work directory.

meta

The name of the metadata file.

Returns:
meta

Meta object with all the meta data stored in s02.

Notes

History:

Written by Sebastian Zieba December 2021

s10_direct_images

This code computes the mean position of the direct image for each visit

pacman.s10_direct_images.run10(eventlabel, workdir: Path, meta=None)[source]

Opens the direct images to determine the position of the star on the detector. The positions are then saved in x and y physical pixel coordinates into a new txt file called xrefyref.txt.

s20_extract

pacman.s20_extract.run20(eventlabel, workdir: Path, meta=None)[source]

This function extracts the spectrum and saves the total flux and the flux as a function of wavelength into files.

s21_bin_spectroscopic_lc

This code reads in the optimally extracted light curve and bins it into channels 5 pixels wide, following Berta ‘12

pacman.s21_bin_spectroscopic_lc.run21(eventlabel, workdir: Path, meta=None)[source]

This function reads in the lc_spec.txt file with the flux as a function of wavelength and bins it into light curves.
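
A rough sketch of what binning into light curves means here (the helper name, array shapes, and the plain sum over the columns falling in each wavelength bin are assumptions):

import numpy as np

def bin_light_curves(wvl, flux, wvl_edges):
    """Sum flux[exposure, column] into one light curve per wavelength bin.

    wvl       : (n_wvl,) wavelength of each extracted column
    flux      : (n_exp, n_wvl) extracted flux per exposure and column
    wvl_edges : (n_bins + 1,) wavelength bin edges
    """
    lcs = []
    for lo, hi in zip(wvl_edges[:-1], wvl_edges[1:]):
        mask = (wvl >= lo) & (wvl < hi)
        lcs.append(flux[:, mask].sum(axis=1))
    return np.array(lcs)   # shape (n_bins, n_exp)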

s30_run

pacman.s30_run.run30(eventlabel: str, workdir: Path, meta=None)[source]

This function reads in the spectroscopic or white light curve(s) and fits a model to them.

pacman.s30_run.update_clips(clips_array)[source]

lib

lib.util

pacman.lib.util.ancil(meta, s10: Optional[bool] = False, s20: Optional[bool] = False)[source]

This function loads a number of useful arrays and values into meta.

The following additional information is loaded into meta:

  • norbit: number of orbits

  • nvisit: number of visits

  • files_sp: all spectra files

  • files_di: all direct image files

  • ra: RA of the target in radians (from the header) (Note: this data is taken from the first spectrum file)

  • dec: DEC of the target in radians (from the header) (Note: this data is taken from the first spectrum file)

  • coordtable: a list of files containing the vector information of HST downloaded in s01

Parameters:
meta

metadata object

s10: bool

Is set to True when s10 is being performed

s20: bool

Is set to True when s20 is being performed

Returns:
meta

metadata object

Notes

History:

Written by Sebastian Zieba December 2021

pacman.lib.util.append_fit_output(fit, meta, fitter=None, medians=None)[source]

Appends fit statistics like rms or chi2 into meta lists.

pacman.lib.util.computeRMS(data, maxnbins=None, binstep=1, isrmserr=False)[source]

Compute root-mean-square and standard error of the data for various bin sizes. Taken from POET: https://github.com/kevin218/POET/blob/master/code/lib/correlated_noise.py
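
A compact sketch of the underlying idea, not the POET routine itself (the white-noise expectation is simplified to rms[0]/sqrt(N) and the standard error is omitted):

import numpy as np

def rms_vs_binsize(residuals, maxnbins=None, binstep=1):
    """RMS of binned residuals for a range of bin sizes (time-correlated noise check)."""
    if maxnbins is None:
        maxnbins = len(residuals) // 10
    binsz = np.arange(1, maxnbins + 1, binstep)
    rms = np.empty(len(binsz))
    for i, n in enumerate(binsz):
        nbins = len(residuals) // n
        binned = residuals[:nbins * n].reshape(nbins, n).mean(axis=1)
        rms[i] = np.sqrt(np.mean(binned ** 2))
    expected = rms[0] / np.sqrt(binsz)   # simplified white-noise expectation
    return binsz, rms, expected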

pacman.lib.util.correct_wave_shift_fct_0(meta, orbnum, cmin, cmax, spec_opt, i)[source]

Use the reference spectrum for the wave cal.

pacman.lib.util.correct_wave_shift_fct_00(meta, orbnum, cmin, cmax, spec_opt, i)[source]

Use the first exposure in the visit for wave cal.

pacman.lib.util.correct_wave_shift_fct_0_lin(meta, orbnum, cmin, cmax, spec_opt, i)[source]

Use the reference spectrum for the wave cal.

pacman.lib.util.correct_wave_shift_fct_1(meta, orbnum, cmin, cmax, spec_opt, x_data_firstexpvisit, y_data_firstexpvisit, i)[source]
pacman.lib.util.correct_wave_shift_fct_1_lin(meta, orbnum, cmin, cmax, spec_opt, x_data_firstexpvisit, y_data_firstexpvisit, i)[source]
pacman.lib.util.create_res_dir(meta)[source]

Creates the result directory depending on which fitters were used.

pacman.lib.util.di_reformat(meta)[source]

This function was introduced because some observations have several DIs per orbit. The user can set in the pcf how they want to determine the DI target position in this case.

pacman.lib.util.format_params_for_Model(theta, params, nvisit, fixed_array, tied_array, free_array)[source]
pacman.lib.util.format_params_for_sampling(params, meta, fit_par)[source]
pacman.lib.util.gaussian_kernel(meta, x, y)[source]

Applies a gaussian kernel filter to an array. Used for smoothing of the stellar spectrum. Taken from: https://stackoverflow.com/questions/24143320/gaussian-sum-filter-for-irregular-spaced-points.
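
The idea from that Stack Overflow answer, as a short sketch (the function name, evaluation grid, and smoothing width are assumptions):

import numpy as np

def gaussian_sum_smooth(x, y, x_eval, sigma):
    """Gaussian-weighted sum filter for irregularly spaced points (x, y)."""
    smoothed = np.empty(len(x_eval), dtype=float)
    for i, x0 in enumerate(x_eval):
        weights = np.exp(-0.5 * ((x - x0) / sigma) ** 2)
        smoothed[i] = np.sum(weights * y) / np.sum(weights)
    return smoothed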

pacman.lib.util.get_flatfield(meta)[source]

Opens the flat file and uses it for bad pixel masking.

pacman.lib.util.get_wave_grid(meta)[source]

Gets grid of wavelength solutions for each orbit and row.

pacman.lib.util.log_run_setup(meta)[source]

Prepares lists in meta where fit statistics will be saved into.

pacman.lib.util.make_lsq_rprs_txt(vals, errs, idxs, meta)[source]

Saves the rprs vs wvl as a txt file as resulting from the lsq.

pacman.lib.util.make_rprs_txt(vals, errs_lower, errs_upper, meta, fitter=None)[source]

Saves the rprs vs wvl as a txt file as resulting from the sampler.

pacman.lib.util.median_abs_dev(vec)[source]

Used to determine the variance for the background count estimate.

pacman.lib.util.peak_finder(array, i, ii, orbnum, meta)[source]

Finding peaks in an array.

pacman.lib.util.quantile(x, q)[source]
pacman.lib.util.read_fitfiles(meta)[source]

Read in the files (white or spectroscopic) which will be fitted.

pacman.lib.util.read_refspec(meta)[source]

Reads in the reference spectrum.

pacman.lib.util.readfiles(meta)[source]

Reads in the files saved in datadir and saves them into a list.

Parameters:
meta

Metadata object

Returns:
meta

Metadata object, with segment_list added containing the sorted data fits files.

Notes

History:

Written by Sebastian Zieba December 2021

pacman.lib.util.residuals2(params, x1, y1, x2, y2)[source]

Calculate residuals for leastsq.

pacman.lib.util.residuals2_lin(params, x1, y1, x2, y2)[source]

Calculate residuals for leastsq.

pacman.lib.util.return_free_array(nvisit, fixed_array, tied_array)[source]

Reads in the fit_par.txt and determines which parameters are free.

pacman.lib.util.save_allandata(binsz, rms, stderr, meta, fitter=None)[source]

Saves the data used to create the Allan deviation plot.

pacman.lib.util.save_fit_output(fit, data, meta)[source]

Saves all the fit statistics like rms or chi2 into an astropy table.

pacman.lib.util.weighted_mean(data, err)[source]

Calculates the weighted mean of the data points `data` with standard deviations `err`.
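
In formula form, the weights are w_i = 1/err_i^2 and the mean is sum(w_i * x_i) / sum(w_i); a sketch (returning the error on the mean as well is an assumption):

import numpy as np

def weighted_mean(data, err):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    weights = 1.0 / np.asarray(err) ** 2
    mean = np.sum(weights * data) / np.sum(weights)
    mean_err = np.sqrt(1.0 / np.sum(weights))
    return mean, mean_err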

pacman.lib.util.zero_pad_x(array)[source]
pacman.lib.util.zero_pad_y(array)[source]

lib.read_pcf

This class loads a PACMAN control file (pcf) and lets you query the parameters and values.

Constructor Parameters

file: pathlib.Path

A control file containing the parameters and values.

Notes

A parameter can have one or more values, and different parameters can have different numbers of values.

The function Param.get(index) automatically interprets the type of the values: if a value can be cast into a numeric type it returns a number, otherwise it returns a string.

Examples

# Load a pcf file
>>> import reader3 as rd
>>> reload(rd)
>>> pcf = rd.Pcffile('/home/patricio/ast/esp01/anal/wa011bs11/run/wa011bs11.pcf')

# Each parameter has the attribute value, which is a ndarray:
>>> pcf.planet.value
... array(['wa011b'], dtype='|S6')

# To get the n-th value of a parameter use pcffile.param.get(n):
# if it can't be converted to a number/bool/etc, it returns a string.
>>> pcf.planet.get(0)
... 'wa011b'

>>> pcf.photchan.get(0)
... 1
>>> pcf.fluxunits.get(0)
... True

# Use pcffile.param.value[n] to get the n-th value as string:
>>> pcf.aorname.get(0)
... 38807808

>>> pcf.aorname.value[0]
... '38807808'

# The function pcffile.param.getarr() returns the numeric/bool/etc
# values of a parameter as a nparray:
>>> pcf.sigma.value
... array(['4.0', '4.0'], dtype='|S5')

>>> pcf.sigma.getarr()
... array([4.0, 4.0], dtype=object)

Modification History

2009-01-02 chris Initial Version.

by Christopher Campo ccampo@gmail.com

2010-03-08 patricio Modified from ccampo version.

by Patricio Cubillos pcubillos@fulbrightmail.org

2010-10-27 patricio Docstring updated

2011-02-12 patricio Merged with ccampo’s tepclass.py

2021-12 Sebastian Zieba Updated for PACMAN usage

class pacman.lib.read_pcf.Param(vals: Any)[source]

Bases: object

Methods

get([index])

Return a numeric/boolean/None/etc.

getarr

get(index: int = 0) Any[source]

Return a numeric/boolean/None/etc. value if possible, else return a string.

getarr()[source]
class pacman.lib.read_pcf.Pcf(params: ndarray)[source]

Bases: object

Methods

make_file

make_file(name: Path) None[source]
pacman.lib.read_pcf.read_pcf(file: Path) None[source]

Function to read the file.

pacman.lib.read_pcf.store_pcf(meta, pcf: Pcf) None[source]

Store values from PACMAN control file as parameters in Meta object.

lib.manageevent

Name

Manage Event

File

manageevent.py

Description

Routines for handling events.

Package Contents

saveevent(event, filename, save=[‘event’], delete=[])

Saves an event in .dat (using cpickle) and .h5 (using h5py) files.

loadevent(filename, load):

Loads an event stored in .dat and .h5 files.

updateevent(event, filename, add):

Adds parameters given by add from filename to event.

Examples

>>> from manageevent import *

# Save hd209bs51_ini.dat and hd209bs51_ini.h5 files.
>>> saveevent(event, 'hd209bs51_ini',
...           save=['data', 'head', 'uncd', 'bdmskd'])

# Load the event and its data frames
>>> event = loadevent('hd209bs51_ini', ['data'])

# Load uncd and bdmsk into event:
>>> updateevent(event, 'hd209bs51_ini', ['uncd', 'bdmskd'])

Revisions

2010-07-10 patricio joined loadevent and pcubillos@fulbrightmail.org

saveevent into this package. updateevent added.

2010-11-12 patricio reimplemented using exec()

pacman.lib.manageevent.loadevent(filename: Path, load: Optional[List[str]] = [], loadfilename: Optional[bool] = None)[source]

Loads an event stored in .dat and .h5 files.

Parameters:
filename: pathlib.Path

Path to the event file.

load: list of str

The elements of this tuple contain the parameters to read. We usually use the values: ‘data’, ‘uncd’, ‘head’, ‘bdmskd’, ‘brmskd’ or ‘mask’.

Returns:
This function returns an Event instance.

Notes

The input filename should have neither the .dat nor the .h5 extension.

Examples

See package example.

pacman.lib.manageevent.saveevent(event, filename: Path, save: Optional[List[str]] = [], delete: Optional[List[str]] = [], protocol: Optional[int] = 3)[source]

Saves an event in .dat (using cpickle) and .h5 (using h5py) files.

Parameters:
event

An Event instance.

filename: pathlib.Path

Path to the event file.

save: list of str, optional

The elements of this tuple contain the parameters to save. We usually use the values: ‘data’, ‘uncd’, ‘head’, ‘bdmskd’, ‘brmksd’ or ‘mask’.

delete: list of str, optional

Parameters to be deleted.

Notes

The input filename should have neither the .dat nor the .h5 extension. Side effect: This routine deletes all parameters except ‘event’ after saving it.

Examples

See package example.

pacman.lib.manageevent.updateevent(event, filename: Path, add: List[str])[source]

Adds parameters given by add from filename to event.

Parameters:
event

An Event instance.

filename: pathlib.Path

Path to the event file.

add: list of str

The elements of this tuple contain the parameters to add. We usually use the values: ‘data’, ‘uncd’, ‘head’, ‘bdmskd’, ‘brmaskd’ or ‘mask’.

Returns:
This function returns an Event instance.

Notes

The input filename should have neither the .dat nor the .h5 extension.

Examples

See package example.

lib.update_meta

pacman.lib.update_meta.update_meta(eventlabel, workdir: Path) int[source]

This function reloads the MetaData file from a certain work directory. This is needed if a user changed the pcf file in the work directory.

Notes

History:

Written by Sebastian Zieba December 2021

lib.suntimecorr

Author: carthik
Revision: 267
Date: 2010-06-08 22:33:22 -0400 (Tue, 08 Jun 2010)
HeadURL: file:///home/esp01/svn/code/python/branches/patricio/photpipe/lib/suntimecorr.py
Id: suntimecorr.py 267 2010-06-09 02:33:22Z carthik

pacman.lib.suntimecorr.getcoords(file)[source]

Use regular expressions to extract X,Y,Z, and time values from the horizons file.

Parameters:
file: list of str

A list containing the lines of a horizons file.

Returns:
A four-element list containing the X, Y, Z, and time arrays of values from the file.

pacman.lib.suntimecorr.suntimecorr(meta, obst, coordtable: List[Path], verbose=False)[source]

This function calculates the light-travel time correction from observer to a standard location. It uses the 2D coordinates (RA and DEC) of the object being observed and the 3D position of the observer relative to the standard location. The latter (and the former, for solar-system objects) may be gotten from JPL’s Horizons system.

Parameters:
meta

includes ra, dec and other information

obst: float or numpy.ndarray

Time of observation in Julian Date (may be a vector)

coordtable: list of pathlib.Path

Filename of output table from JPL HORIZONS specifying the position of the observatory relative to the standard position.

verbose: bool

If True, print X,Y,Z coordinates.

If True, print X,Y,Z coordinates.

Returns:
This function returns the time correction in seconds to be ADDED
to the observation time to get the time when the observed photons
would have reached the plane perpendicular to their travel and
containing the reference position.

Notes

The position vectors from coordtable are given in the following coordinate system:

Reference epoch: J2000.0

xy-plane: plane of the Earth’s mean equator at the reference epoch

x-axis: out along the ascending node of the instantaneous plane of the Earth’s orbit and the Earth’s mean equator at the reference epoch

z-axis: along the Earth’s mean north pole at the reference epoch

Ephemerides are often calculated for BJD, barycentric Julian date. That is, they are correct for observations taken at the solar system barycenter’s distance from the target. The BJD of our observation is the time the photons we observe would have crossed the sphere centered on the object and containing the barycenter. We must thus add the light-travel time from our observatory to this sphere. For non-solar-system observations, we approximate the sphere as a plane, and calculate the dot product of the vector from the barycenter to the telescope and a unit vector from the barycenter to the target, and divide by the speed of light.

Properly, the coordinates should point from the standard location to the object. Practically, for objects outside the solar system, the adjustment from, e.g., geocentric (RA-DEC) coordinates to barycentric coordinates has a negligible effect on the trig functions used in the routine.

The horizons file in coordtable should be in the form of the following example, with a subject line of JOB:

!$$SOF
!
! Example e-mail command file. If mailed to "horizons@ssd.jpl.nasa.gov"
! with subject "JOB", results will be mailed back.
!
! This example demonstrates a subset of functions. See main doc for
! full explanation. Send blank e-mail with subject "BATCH-LONG" to
! horizons@ssd.jpl.nasa.gov for complete example.
!

EMAIL_ADDR = 'shl35@cornell.edu' ! Send output to this address
                                 ! (can be blank for auto-reply)

COMMAND    = '-79'               ! Target body, closest apparition

OBJ_DATA   = 'YES'               ! No summary of target body data
MAKE_EPHEM = 'YES'               ! Make an ephemeris

START_TIME = '2005-Aug-24 06:00' ! Start of table (UTC default)
STOP_TIME  = '2005-Aug-25 02:00' ! End of table
STEP_SIZE  = '1 hour'            ! Table step-size

TABLE_TYPE = 'VECTOR'            ! Specify VECTOR ephemeris table type
CENTER     = '@10'               ! Set observer (coordinate center)
REF_PLANE  = 'FRAME'             ! J2000 equatorial plane

VECT_TABLE = '3'                 ! Selects output type (3=all).

OUT_UNITS  = 'KM-S'              ! Vector units: KM-S, AU-D, KM-D
CSV_FORMAT = 'NO'                ! Comma-separated output (YES/NO)
VEC_LABELS = 'YES'               ! Label vectors in output (YES/NO)
VECT_CORR  = 'NONE'              ! Correct for light-time (LT),
                                 ! or lt + stellar aberration (LT+S),
                                 ! or (NONE) return geometric vectors only.

!$$EOF
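
A bare-bones sketch of the geometric correction described in these notes (the function name, sign convention, and unit handling are assumptions; the real routine also interpolates the Horizons vectors to the observation times before taking the dot product):

import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def light_travel_time(ra, dec, obs_xyz_km):
    """Seconds to ADD to the observation time, given RA/DEC (radians) of the
    target and the observer position relative to the barycenter in km."""
    # unit vector from the barycenter towards the target
    target_hat = np.array([np.cos(dec) * np.cos(ra),
                           np.cos(dec) * np.sin(ra),
                           np.sin(dec)])
    # projection of the observer position onto the target direction, over c
    return np.dot(obs_xyz_km, target_hat) / C_KM_S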

lib.plots

pacman.lib.plots.badmask_2d(array1, array2, array3, meta, i)[source]

Plots the badmask arrays which are used by the optimal extraction routine.

pacman.lib.plots.barycorr(x, y, z, time, obsx, obsy, obsz, coordtable: List[Path], meta)[source]

This function plots the vectorfile positions of HST and where the observations were taken.

Parameters:
x: array

X position from vectorfile.

y: array

Y position from vectorfile.

z: array

Z position from vectorfile.

time: array

times from the vectorfile.

obsx: array

X position of observations.

obsy: array

Y position of observations.

obsz: array

Z position of observations.

coordtable: list of pathlib.Path

a list of files containing the vector information of HST downloaded in s01.

meta

the name of the metadata file.

Returns:
Saves and/or Shows a plot.

Notes

History:

Written by Sebastian Zieba December 2021

pacman.lib.plots.bkg_evo(bkg_evo, meta)[source]

Plot of the background flux as a function of up-the-ramp sample.

pacman.lib.plots.bkg_hist(fullframe_diff, skymedian, meta, i, ii)[source]

Saves a histogram of the fluxes in the up-the-ramp sample, showing the user-defined background threshold and the median flux below the threshold.

pacman.lib.plots.drift(leastsq_res_all, meta)[source]
pacman.lib.plots.drift_lin(leastsq_res_all, meta)[source]
pacman.lib.plots.dyplot_cornerplot(results, meta)[source]
pacman.lib.plots.dyplot_runplot(results, meta)[source]

Plot a summary of the run.

pacman.lib.plots.dyplot_traceplot(results, meta)[source]

Plot traces and 1-D marginalized posteriors.

pacman.lib.plots.image(dat, ima, results, i, meta)[source]

This plots the full direct image with the guess of the target (defined using di_rmin, etc.) marked as a red box. It also plots a zoom into the guess position of the target with the gaussian fit solution marked with a cross.

pacman.lib.plots.image_quick(ima, i, meta)[source]

This plots the full direct image.

pacman.lib.plots.lsq_rprs(vals, errs, idxs, meta)[source]

Plots the spectrum (rprs vs wvl) as fitted by the least square routine.

pacman.lib.plots.mcmc_chains(ndim, sampler, nburn, labels, meta)[source]

Plots the temporal evolution of the MCMC chain.

pacman.lib.plots.mcmc_pairs(samples, params, meta, fit_par, data)[source]

Plots a pairs plot of the MCMC.

pacman.lib.plots.mcmc_rprs(vals_mcmc, errs_lower_mcmc, errs_upper_mcmc, meta)[source]

Plots the spectrum (rprs vs wvl) as resulting from the MCMC.

pacman.lib.plots.mjd_to_isot(time)[source]

Converts a list of MJDs to a list of dates in YYYY-MM-DD.

pacman.lib.plots.mjd_to_utc(time)[source]

Converts a list of MJDs to a list of dates in years.

pacman.lib.plots.nested_pairs(samples, params, meta, fit_par, data)[source]

Plots a pairs plot of the nested sampling.

pacman.lib.plots.nested_rprs(vals_nested, errs_lower_nested, errs_upper_nested, meta)[source]

Plots the spectrum (rprs vs wvl) as resulting from the nested sampling.

pacman.lib.plots.obs_times(meta, times, ivisits, iorbits, updated=False)[source]

Plot of the visit index as a function of observed time for the observations. Includes a table with the number of orbits in each visit and a zoom into each visit.

Parameters:
updated: bool

If the user decided not to use all visits but set “which_visits” in the pcf, this bool is needed so that one plot is saved for all files in the data directory and one for the files selected with “which_visits”. It prevents the previous plot from being overwritten when the function is called again.

pacman.lib.plots.params_vs_wvl(vals, errs, idxs, meta)[source]

Plots every fitted parameter as a function of bin. It is able to show how astrophysical & systematic parameters change over wavelength.

pacman.lib.plots.params_vs_wvl_mcmc(vals_mcmc, errs_lower_mcmc, errs_upper_mcmc, meta)[source]

Plots every fitted parameter as a function of bin. It is able to show how astrophysical & systematic parameters change over wavelength.

pacman.lib.plots.params_vs_wvl_nested(vals_nested, errs_lower_nested, errs_upper_nested, meta)[source]

Plots every fitted parameter as a function of bin. It is able to show how astrophysical & systematic parameters change over wavelength.

pacman.lib.plots.plot_fit_lc2(data, fit, meta, mcmc=False, nested=False)[source]

Plots phase folded fit.

pacman.lib.plots.plot_fit_lc3(data, fit, meta, mcmc=False)[source]

Plots light curve without systematics model.

pacman.lib.plots.plot_raw(data, meta)[source]

Saves a plot with the raw light curve (which includes the systematics).

pacman.lib.plots.plot_wvl_bins(w_hires, f_interp, wave_bins, wvl_bins, dirname)[source]

Plot of a 1D spectrum and the bins.

pacman.lib.plots.refspec(bp_wvl, bp_val, sm_wvl, sm_flux, ref_wvl, ref_flux, meta)[source]

Plots the bandpass, the stellar spectrum and the product of both.

pacman.lib.plots.refspec_fit(modelx, modely, p0, datax, datay, leastsq_res, meta, i)[source]
pacman.lib.plots.refspec_fit_lin(modelx, modely, p0, datax, datay, leastsq_res, meta, i)[source]
pacman.lib.plots.rmsplot(model, data, meta, fitter=None)[source]

Plot RMS vs. bin size looking for time-correlated noise Taken from POET: https://github.com/kevin218/POET/blob/master/code/lib/plots.py.

pacman.lib.plots.save_astrolc_data(data, fit, meta)[source]

Saves the data used to plot the astrophysical model (without the systematics) and the data (without the systematics) not phase folded.

pacman.lib.plots.save_plot_raw_data(data, meta)[source]

Saves the data used for the raw light curve plot.

pacman.lib.plots.smooth(meta, x, y, x_smoothed, y_smoothed)[source]

Plots the raw stellar spectrum and the smoothed spectrum.

pacman.lib.plots.sp1d(template_waves, spec_box, meta, i, spec_opt=False)[source]

Plots the resulting spectrum. If the user did optimal extraction, a comparison between optextr and box sum will be shown.

pacman.lib.plots.sp1d_diff(sp1d_all_diff, meta, wvl_hires)[source]

Difference of 1D spectrum between two consecutive exposures.

pacman.lib.plots.sp2d(d, meta, i)[source]

Plot the spectrum with a low vmax to make the background better visible.

pacman.lib.plots.trace(d, meta, visnum, orbnum, i)[source]

Plots the spectrum together with the trace.

pacman.lib.plots.utc_to_mjd(time)[source]

Converts a list of dates in years to a list of MJDs.

pacman.lib.plots.utr(diff, meta, i, ii, orbnum, rowmedian, rowmedian_absder, peaks)[source]

Saves a plot of the up-the-ramp sample, the row-by-row sum, and the derivative of the latter. It furthermore shows the aperture used for the analysis.

pacman.lib.plots.utr_aper_evo(peaks_all, meta)[source]

Plot of the evolution in aperture size.

lib.stellar_spectrum

pacman.lib.stellar_spectrum.downloader(url: str) None[source]

This function downloads a file from the given url using urllib.request.

pacman.lib.stellar_spectrum.find_nearest(array: ArrayLike, value: float)[source]

Finds nearest element to a value in an array.

Taken from https://stackoverflow.com/questions/2566412/find-nearest-value-in-numpy-array

pacman.lib.stellar_spectrum.get_bb(user_teff)[source]

Creates a blackbody spectrum for a given stellar effective temperature, Teff.

Parameters:
user_teff: float

stellar effective temperature

Returns:
wvl: numpy array

wavelength np.linspace(0.1, 6, 1000) / 1e6

flux: numpy array

stellar flux in units of W/sr/m^3
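
A minimal Planck-function sketch matching the wavelength grid and units quoted above (standard SI constants; the function name is hypothetical):

import numpy as np

def get_bb_sketch(teff):
    """Blackbody spectral radiance B_lambda(T) in W / sr / m^3."""
    h = 6.62607015e-34   # Planck constant [J s]
    c = 2.99792458e8     # speed of light [m/s]
    kb = 1.380649e-23    # Boltzmann constant [J/K]
    wvl = np.linspace(0.1, 6, 1000) / 1e6   # wavelength in meters
    flux = (2 * h * c**2 / wvl**5) / (np.exp(h * c / (wvl * kb * teff)) - 1)
    return wvl, flux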

pacman.lib.stellar_spectrum.get_sm(meta, user_met, user_logg: float, user_teff: float)[source]

Creates a Kurucz 1994, Castelli and Kurucz 2004 or Phoenix stellar spectrum for a given stellar effective temperature, metallicity and log g.

Parameters:
meta

a metadata instance.

user_met: float

stellar metallicity.

user_logg: float

stellar logg.

user_teff: float

stellar effective temperature.

Returns:
wvl: numpy array

wavelength np.linspace(0.1, 6, 1000) / 1e6.

flux: numpy array

stellar flux in units of W/sr/m^3.

lib.gaussfitter

Latest version available at <http://code.google.com/p/agpy/source/browse/trunk/agpy/gaussfitter.py>

Note about mpfit/leastsq: I switched everything over to the Markwardt mpfit routine for a few reasons, but foremost being the ability to set limits on parameters, not just force them to be fixed. As far as I can tell, leastsq does not have that capability.

The version of mpfit I use can be found here:

http://code.google.com/p/agpy/source/browse/trunk/mpfit

pacman.lib.gaussfitter.collapse_gaussfit(cube, xax=None, axis=2, negamp=False, usemoments=True, nsigcut=1.0, mppsigcut=1.0, return_errors=False, **kwargs)[source]
pacman.lib.gaussfitter.gaussfit(data, err=None, params=(), autoderiv=True, return_all=False, circle=False, fixed=array([False, False, False, False, False, False, False]), limitedmin=[False, False, False, False, True, True, True], limitedmax=[False, False, False, False, False, False, True], usemoment=array([], dtype=bool), minpars=array([0, 0, 0, 0, 0, 0, 0]), maxpars=[0, 0, 0, 0, 0, 0, 360], rotate=1, vheight=1, quiet=True, returnmp=False, returnfitimage=False, **kwargs)[source]

Gaussian fitter with the ability to fit a variety of different forms of 2-dimensional gaussian.

Input Parameters:

data - 2-dimensional data array

err=None - error array with same size as data array

params=[] - initial input parameters for Gaussian function:
(height, amplitude, x, y, width_x, width_y, rota).
If not input, these will be determined from the moments of the system, assuming no rotation.

autoderiv=1 - use the autoderiv provided in the lmder.f function (the alternative is to use an analytic derivative with lmdif.f: this method is less robust)

return_all=0 - Default is to return only the Gaussian parameters.
1 - fit params, fit error

returnfitimage - returns (best fit params, best fit image)

returnmp - returns the full mpfit struct

circle=0 - default is an elliptical gaussian (different x, y widths), but can reduce the input by one parameter if it’s a circular gaussian

rotate=1 - default allows rotation of the gaussian ellipse. Can remove last parameter by setting rotate=0. Expects angle in DEGREES.

vheight=1 - default allows a variable height-above-zero, i.e. an additive constant for the Gaussian function. Can remove first parameter by setting this to 0.

usemoment - can choose which parameters to use a moment estimation for. Other parameters will be taken from params. Needs to be a boolean array.

Output:

Default output is a set of Gaussian parameters with the same shape as the input parameters.

Can also output the covariance matrix, ‘infodict’ that contains a lot more detail about the fit (see scipy.optimize.leastsq), and a message from leastsq telling what the exit status of the fitting routine was.

Warning: Does NOT necessarily output a rotation angle between 0 and 360 degrees.

pacman.lib.gaussfitter.moments(data, circle, rotate, vheight, estimator=<function median>, **kwargs)[source]

Returns (height, amplitude, x, y, width_x, width_y, rotation angle) the gaussian parameters of a 2D distribution by calculating its moments. Depending on the input parameters, will only output a subset of the above.

If using masked arrays, pass estimator=numpy.ma.median.

pacman.lib.gaussfitter.multigaussfit(xax, data, ngauss=1, err=None, params=[1, 0, 1], fixed=[False, False, False], limitedmin=[False, False, True], limitedmax=[False, False, False], minpars=[0, 0, 0], maxpars=[0, 0, 0], quiet=True, shh=True, veryverbose=False)[source]

An improvement on onedgaussfit. Lets you fit multiple gaussians.

Inputs:

xax - x axis
data - y axis
ngauss - How many gaussians to fit? Default 1 (this could supersede onedgaussfit)
err - error corresponding to data

These parameters need to have length = 3*ngauss. If ngauss > 1 and length = 3, they will be replicated ngauss times, otherwise they will be reset to defaults:

params - Fit parameters: [amplitude, offset, width] * ngauss
    If len(params) % 3 == 0, ngauss will be set to len(params) / 3
fixed - Is parameter fixed?
limitedmin/minpars - set lower limits on each parameter (default: width > 0)
limitedmax/maxpars - set upper limits on each parameter
quiet - should MPFIT output each iteration?
shh - output final parameters?

Returns:

Fit parameters, Model, Fit errors, chi2

pacman.lib.gaussfitter.n_gaussian(pars=None, a=None, dx=None, sigma=None)[source]

Returns a function that sums over N gaussians, where N is the length of a,dx,sigma OR N = len(pars) / 3

The background “height” is assumed to be zero (you must “baseline” your spectrum before fitting)

pars - a list with len(pars) = 3n, assuming a, dx, sigma repeated
dx - offset (velocity center) values
sigma - line widths
a - amplitudes

pacman.lib.gaussfitter.onedgaussfit(xax, data, err=None, params=[0, 1, 0, 1], fixed=[False, False, False, False], limitedmin=[False, False, False, True], limitedmax=[False, False, False, False], minpars=[0, 0, 0, 0], maxpars=[0, 0, 0, 0], quiet=True, shh=True, veryverbose=False, vheight=True, negamp=False, usemoments=False)[source]
Inputs:

xax - x axis
data - y axis
err - error corresponding to data
params - Fit parameters: Height of background, Amplitude, Shift, Width
fixed - Is parameter fixed?
limitedmin/minpars - set lower limits on each parameter (default: width > 0)
limitedmax/maxpars - set upper limits on each parameter
quiet - should MPFIT output each iteration?
shh - output final parameters?
usemoments - replace default parameters with moments

Returns:

Fit parameters, Model, Fit errors, chi2

pacman.lib.gaussfitter.onedgaussian(x, H, A, dx, w)[source]

Returns a 1-dimensional gaussian of form H+A*numpy.exp(-(x-dx)**2/(2*w**2))
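
Written out as code, that is simply (a restatement of the formula above, not necessarily the module's exact implementation):

import numpy as np

def onedgaussian(x, H, A, dx, w):
    """1D Gaussian: baseline H, amplitude A, center dx, width w."""
    return H + A * np.exp(-(x - dx) ** 2 / (2 * w ** 2))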

pacman.lib.gaussfitter.onedmoments(Xax, data, vheight=True, estimator=<function median>, negamp=None, veryverbose=False, **kwargs)[source]

Returns (height, amplitude, x, width_x) the gaussian parameters of a 1D distribution by calculating its moments. Depending on the input parameters, will only output a subset of the above.

If using masked arrays, pass estimator=numpy.ma.median. ‘estimator’ is used to measure the background level (height).

negamp can be used to force the peak negative (True), positive (False), or it will be “autodetected” (negamp=None)

pacman.lib.gaussfitter.twodgaussian(inpars, circle=False, rotate=True, vheight=True, shape=None)[source]

Returns a 2d gaussian function of the form:

x’ = numpy.cos(rota) * x - numpy.sin(rota) * y
y’ = numpy.sin(rota) * x + numpy.cos(rota) * y
(rota should be in degrees)
g = b + a * numpy.exp( - ( ((x-center_x)/width_x)**2 + ((y-center_y)/width_y)**2 ) / 2 )

inpars = [b, a, center_x, center_y, width_x, width_y, rota]

(b is background height, a is peak amplitude)

where x and y are the input parameters of the returned function, and all other parameters are specified by this function.

However, the above values are passed by list. The list should be:
inpars = (height, amplitude, center_x, center_y, width_x, width_y, rota)

You can choose to ignore / neglect some of the above input parameters using the following options:

circle=0 - default is an elliptical gaussian (different x, y widths), but can reduce the input by one parameter if it’s a circular gaussian

rotate=1 - default allows rotation of the gaussian ellipse. Can remove last parameter by setting rotate=0.

vheight=1 - default allows a variable height-above-zero, i.e. an additive constant for the Gaussian function. Can remove first parameter by setting this to 0.

shape=None - if shape is set (to a 2-parameter list) then returns an image with the gaussian defined by inpars

lib.mpfit

Perform Levenberg-Marquardt least-squares minimization, based on MINPACK-1.

AUTHORS

The original version of this software, called LMFIT, was written in FORTRAN as part of the MINPACK-1 package by XXX.

Craig Markwardt converted the FORTRAN code to IDL. The information for the IDL version is:

Craig B. Markwardt, NASA/GSFC Code 662, Greenbelt, MD 20770

craigm@lheamail.gsfc.nasa.gov UPDATED VERSIONs can be found on my WEB PAGE:

Mark Rivers created this Python version from Craig’s IDL version.

Mark Rivers, University of Chicago Building 434A, Argonne National Laboratory 9700 South Cass Avenue, Argonne, IL 60439 rivers@cars.uchicago.edu Updated versions can be found at http://cars.uchicago.edu/software

Sergey Koposov converted Mark’s Python version from Numeric to numpy

Sergey Koposov, University of Cambridge, Institute of Astronomy, Madingley road, CB3 0HA, Cambridge, UK koposov@ast.cam.ac.uk Updated versions can be found at http://code.google.com/p/astrolibpy/source/browse/trunk/

DESCRIPTION

MPFIT uses the Levenberg-Marquardt technique to solve the least-squares problem. In its typical use, MPFIT will be used to fit a user-supplied function (the “model”) to user-supplied data points (the “data”) by adjusting a set of parameters. MPFIT is based upon MINPACK-1 (LMDIF.F) by More’ and collaborators.

For example, a researcher may think that a set of observed data points is best modelled with a Gaussian curve. A Gaussian curve is parameterized by its mean, standard deviation and normalization. MPFIT will, within certain constraints, find the set of parameters which best fits the data. The fit is “best” in the least-squares sense; that is, the sum of the weighted squared differences between the model and data is minimized.

The Levenberg-Marquardt technique is a particular strategy for iteratively searching for the best fit. This particular implementation is drawn from MINPACK-1 (see NETLIB), and is much faster and more accurate than the version provided in the Scientific Python package in Scientific.Functions.LeastSquares. This version allows upper and lower bounding constraints to be placed on each parameter, or the parameter can be held fixed.

The user-supplied Python function should return an array of weighted deviations between model and data. In a typical scientific problem the residuals should be weighted so that each deviate has a gaussian sigma of 1.0. If X represents values of the independent variable, Y represents a measurement for each value of X, and ERR represents the error in the measurements, then the deviates could be calculated as follows:

DEVIATES = (Y - F(X)) / ERR

where F is the analytical function representing the model. You are recommended to use the convenience functions MPFITFUN and MPFITEXPR, which are driver functions that calculate the deviates for you. If ERR are the 1-sigma uncertainties in Y, then

TOTAL( DEVIATES^2 )

will be the total chi-squared value. MPFIT will minimize the chi-square value. The values of X, Y and ERR are passed through MPFIT to the user-supplied function via the FUNCTKW keyword.

Simple constraints can be placed on parameter values by using the PARINFO keyword to MPFIT. See below for a description of this keyword.

MPFIT does not perform more general optimization tasks. See TNMIN instead. MPFIT is customized, based on MINPACK-1, to the least-squares minimization problem.

USER FUNCTION

The user must define a function which returns the appropriate values as specified above. The function should return the weighted deviations between the model and the data. It should also return a status flag and an optional partial derivative array. For applications which use finite-difference derivatives – the default – the user function should be declared in the following way:

def myfunct(p, fjac=None, x=None, y=None, err=None):
    # Parameter values are passed in "p"
    # If fjac==None then partial derivatives should not be
    # computed. It will always be None if MPFIT is called with default flag.
    model = F(x, p)
    # Non-negative status value means MPFIT should continue, negative means
    # stop the calculation.
    status = 0
    return [status, (y - model) / err]

See below for applications with analytical derivatives.

The keyword parameters X, Y, and ERR in the example above are suggestive but not required. Any parameters can be passed to MYFUNCT by using the functkw keyword to MPFIT. Use MPFITFUN and MPFITEXPR if you need ideas on how to do that. The function must accept a parameter list, P.

In general there are no restrictions on the number of dimensions in X, Y or ERR. However the deviates must be returned in a one-dimensional Numeric array of type Float.

User functions may also indicate a fatal error condition using the status return described above. If status is set to a number between -15 and -1 then MPFIT will stop the calculation and return to the caller.

ANALYTIC DERIVATIVES

In the search for the best-fit solution, MPFIT by default calculates derivatives numerically via a finite difference approximation. The user-supplied function need not calculate the derivatives explicitly. However, if you desire to compute them analytically, then the AUTODERIVATIVE=0 keyword must be passed to MPFIT. As a practical matter, it is often sufficient and even faster to allow MPFIT to calculate the derivatives numerically, and so AUTODERIVATIVE=0 is not necessary.

If AUTODERIVATIVE=0 is used then the user function must check the parameter FJAC, and if FJAC!=None then return the partial derivative array in the return list:

def myfunct(p, fjac=None, x=None, y=None, err=None):
    # Parameter values are passed in "p"
    # If FJAC!=None then partial derivatives must be computed.
    # FJAC contains an array of len(p), where each entry
    # is 1 if that parameter is free and 0 if it is fixed.
    model = F(x, p)
    # Non-negative status value means MPFIT should continue, negative means
    # stop the calculation.
    status = 0
    if dojac:
        pderiv = zeros([len(x), len(p)], Float)
        for j in range(len(p)):
            pderiv[:, j] = FGRAD(x, p, j)
    else:
        pderiv = None
    return [status, (y - model) / err, pderiv]

where FGRAD(x, p, i) is a user function which must compute the derivative of the model with respect to parameter P[i] at X. When finite differencing is used for computing derivatives (i.e., when AUTODERIVATIVE=1), or when MPFIT needs only the errors but not the derivatives, the parameter FJAC=None.

Derivatives should be returned in the PDERIV array. PDERIV should be an m x n array, where m is the number of data points and n is the number of parameters. dp[i,j] is the derivative at the ith point with respect to the jth parameter.

The derivatives with respect to fixed parameters are ignored; zero is an appropriate value to insert for those derivatives. Upon input to the user function, FJAC is set to a vector with the same length as P, with a value of 1 for a parameter which is free, and a value of zero for a parameter which is fixed (and hence no derivative needs to be calculated).

If the data is higher than one dimensional, then the last dimension should be the parameter dimension. Example: fitting a 50x50 image, “dp” should be 50x50xNPAR.

CONSTRAINING PARAMETER VALUES WITH THE PARINFO KEYWORD

The behavior of MPFIT can be modified with respect to each parameter to be fitted. A parameter value can be fixed; simple boundary constraints can be imposed; limitations on the parameter changes can be imposed; properties of the automatic derivative can be modified; and parameters can be tied to one another.

These properties are governed by the PARINFO structure, which is passed as a keyword parameter to MPFIT.

PARINFO should be a list of dictionaries, one list entry for each parameter. Each parameter is associated with one element of the array, in numerical order. The dictionary can have the following keys (none are required, keys are case insensitive):

‘value’ - the starting parameter value (but see the START_PARAMS parameter for more information).

‘fixed’ - a boolean value, whether the parameter is to be held fixed or not. Fixed parameters are not varied by MPFIT, but are passed on to MYFUNCT for evaluation.

‘limited’ - a two-element boolean array. If the first/second element is set, then the parameter is bounded on the lower/upper side. A parameter can be bounded on both sides. Both LIMITED and LIMITS must be given together.

‘limits’ - a two-element float array. Gives the parameter limits on the lower and upper sides, respectively. Zero, one or two of these values can be set, depending on the values of LIMITED. Both LIMITED and LIMITS must be given together.

‘parname’ - a string, giving the name of the parameter. The fitting code of MPFIT does not use this tag in any way. However, the default iterfunct will print the parameter name if available.

‘step’ - the step size to be used in calculating the numerical derivatives. If set to zero, then the step size is computed automatically. Ignored when AUTODERIVATIVE=0.

‘mpside’ - the sidedness of the finite difference when computing numerical derivatives. This field can take four values:

0 - one-sided derivative computed automatically

1 - one-sided derivative (f(x+h) - f(x))/h

-1 - one-sided derivative (f(x) - f(x-h))/h

2 - two-sided derivative (f(x+h) - f(x-h))/(2*h)

Where H is the STEP parameter described above. The “automatic” one-sided derivative method will choose a direction for the finite difference which does not violate any constraints. The other methods do not perform this check. The two-sided method is in principle more precise, but requires twice as many function evaluations. Default: 0.

‘mpmaxstep’ - the maximum change to be made in the parameter value. During the fitting process, the parameter will never be changed by more than this value in one iteration. A value of 0 indicates no maximum. Default: 0.

‘tied’ - a string expression which “ties” the parameter to other free or fixed parameters. Any expression involving constants and the parameter array P are permitted. Example: if parameter 2 is always to be twice parameter 1 then use the following: parinfo(2).tied = ‘2 * p(1)’. Since they are totally constrained, tied parameters are considered to be fixed; no errors are computed for them. [ NOTE: the PARNAME can’t be used in expressions. ]

‘mpprint’ - if set to 1, then the default iterfunct will print the parameter value. If set to 0, the parameter value will not be printed. This tag can be used to selectively print only a few parameter values out of many. Default: 1 (all parameters printed)

Future modifications to the PARINFO structure, if any, will involve adding dictionary tags beginning with the two letters “MP”. Therefore programmers are urged to avoid using tags starting with the same letters; otherwise they are free to include their own fields within the PARINFO structure, and they will be ignored.

PARINFO Example:
parinfo = [{'value': 0., 'fixed': 0, 'limited': [0, 0], 'limits': [0., 0.]}
           for i in range(5)]
parinfo[0]['fixed'] = 1
parinfo[4]['limited'][0] = 1
parinfo[4]['limits'][0] = 50.
values = [5.7, 2.2, 500., 1.5, 2000.]
for i in range(5):
    parinfo[i]['value'] = values[i]

A total of 5 parameters, with starting values of 5.7, 2.2, 500, 1.5, and 2000 are given. The first parameter is fixed at a value of 5.7, and the last parameter is constrained to be above 50.

EXAMPLE

import mpfit
import numpy.oldnumeric as Numeric
x = arange(100, float)
p0 = [5.7, 2.2, 500., 1.5, 2000.]
y = (p[0] + p[1]*[x] + p[2]*[x**2] + p[3]*sqrt(x) + p[4]*log(x))
fa = {'x': x, 'y': y, 'err': err}
m = mpfit('myfunct', p0, functkw=fa)
print 'status = ', m.status
if (m.status <= 0):
    print 'error message = ', m.errmsg
print 'parameters = ', m.params

Minimizes sum of squares of MYFUNCT. MYFUNCT is called with the X, Y, and ERR keyword parameters that are given by FUNCTKW. The results can be obtained from the returned object m.
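
The example above uses Numeric-era, Python 2 syntax. A version of the same idea that runs under Python 3 with the numpy-based module, assuming pacman.lib.mpfit keeps the interface documented here (function object, starting values, functkw, and the params/status/errmsg attributes):

import numpy as np
from pacman.lib import mpfit

def myfunct(p, fjac=None, x=None, y=None, err=None):
    """Return [status, weighted deviates] as required by MPFIT."""
    model = p[0] + p[1] * x + p[2] * x**2
    status = 0
    return [status, (y - model) / err]

x = np.linspace(0.0, 10.0, 100)
err = 0.05 * np.ones_like(x)
rng = np.random.default_rng(0)
y = 1.0 - 2.0 * x + 0.3 * x**2 + rng.normal(0.0, 0.05, x.size)

fa = {'x': x, 'y': y, 'err': err}
m = mpfit.mpfit(myfunct, [0.0, 0.0, 0.0], functkw=fa, quiet=1)
if m.status <= 0:
    print('error message =', m.errmsg)
print('best-fit parameters =', m.params)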

THEORY OF OPERATION

There are many specific strategies for function minimization. One very popular technique is to use function gradient information to realize the local structure of the function. Near a local minimum the function value can be taylor expanded about x0 as follows:

f(x) = f(x0) + f’(x0) . (x-x0) + (1/2) (x-x0) . f’’(x0) . (x-x0)    (1)
       (0th order)  (1st order)           (2nd order)

Here f’(x) is the gradient vector of f at x, and f’’(x) is the Hessian matrix of second derivatives of f at x. The vector x is the set of function parameters, not the measured data vector. One can find the minimum of f, f(xm) using Newton’s method, and arrives at the following linear equation:

f’’(x0) . (xm-x0) = - f’(x0) (2)

If an inverse can be found for f’’(x0) then one can solve for (xm-x0), the step vector from the current position x0 to the new projected minimum. Here the problem has been linearized (i.e., the gradient information is known to first order). f’’(x0) is a symmetric n x n matrix, and should be positive definite.

The Levenberg - Marquardt technique is a variation on this theme. It adds an additional diagonal term to the equation which may aid the convergence properties:

(f’’(x0) + nu I) . (xm-x0) = -f’(x0) (2a)

where I is the identity matrix. When nu is large, the overall matrix is diagonally dominant, and the iterations follow steepest descent. When nu is small, the iterations are quadratically convergent.

In principle, if f’’(x0) and f’(x0) are known then xm-x0 can be determined. However the Hessian matrix is often difficult or impossible to compute. The gradient f’(x0) may be easier to compute, if even by finite difference techniques. So-called quasi-Newton techniques attempt to successively estimate f’’(x0) by building up gradient information as the iterations proceed.

In the least squares problem there are further simplifications which assist in solving eqn (2). The function to be minimized is a sum of squares:

f = Sum(hi^2) (3)

where hi is the ith residual out of m residuals as described above. This can be substituted back into eqn (2) after computing the derivatives:

f’ = 2 Sum(hi hi’) f’’ = 2 Sum(hi’ hj’) + 2 Sum(hi hi’’) (4)

If one assumes that the parameters are already close enough to a minimum, then one typically finds that the second term in f’’ is negligible [or, in any case, is too difficult to compute]. Thus, equation (2) can be solved, at least approximately, using only gradient information.

In matrix notation, the combination of eqns (2) and (4) becomes:

hT’ . h’ . dx = - hT’ . h (5)

Where h is the residual vector (length m), hT is its transpose, h’ is the Jacobian matrix (dimensions n x m), and dx is (xm-x0). The user function supplies the residual vector h, and in some cases h’ when it is not found by finite differences (see MPFIT_FDJAC2, which finds h and hT’). Even if dx is not the best absolute step to take, it does provide a good estimate of the best direction, so often a line minimization will occur along the dx vector direction.

The method of solution employed by MINPACK is to form the Q . R factorization of h’, where Q is an orthogonal matrix such that QT . Q = I, and R is upper right triangular. Using h’ = Q . R and the orthogonality of Q, eqn (5) becomes

(RT . QT) . (Q . R) . dx = - (RT . QT) . h
RT . R . dx = - RT . QT . h (6)

R . dx = - QT . h

where the last statement follows because R is upper triangular. Here, R, QT and h are known so this is a matter of solving for dx. The routine MPFIT_QRFAC provides the QR factorization of h, with pivoting, and MPFIT_QRSOLV provides the solution for dx.

REFERENCES

MINPACK-1, Jorge More’, available from netlib (www.netlib.org).

“Optimization Software Guide,” Jorge More’ and Stephen Wright, SIAM, Frontiers in Applied Mathematics, Number 14.

More’, Jorge J., “The Levenberg-Marquardt Algorithm: Implementation and Theory,” in Numerical Analysis, ed. Watson, G. A., Lecture Notes in Mathematics 630, Springer-Verlag, 1977.

MODIFICATION HISTORY

Translated from MINPACK-1 in FORTRAN, Apr-Jul 1998, CM

Copyright (C) 1997-2002, Craig Markwardt This software is provided as is without any warranty whatsoever. Permission to use, copy, modify, and distribute modified or unmodified copies is granted, provided this copyright and disclaimer are included unchanged.

Translated from MPFIT (Craig Markwardt’s IDL package) to Python, August, 2002. Mark Rivers Converted from Numeric to numpy (Sergey Koposov, July 2008)

class pacman.lib.mpfit.machar(double=1)[source]

Bases: object

class pacman.lib.mpfit.mpfit(fcn, xall=None, functkw={}, parinfo=None, ftol=1e-10, xtol=1e-10, gtol=1e-10, damp=0.0, maxiter=200, factor=100.0, nprint=1, iterfunct='default', iterkw={}, nocovar=0, rescale=0, autoderivative=1, quiet=0, diag=None, epsfcn=None, debug=0)[source]

Bases: object

Methods

blas_enorm32(x,[n,offx,incx])

blas_enorm64(x,[n,offx,incx])

call(fcn, x, functkw[, fjac])

defiter(fcn, x, iter[, fnorm, functkw, ...])

parinfo([parinfo, key, default, n])

tie(p[, ptied])

calc_covar

enorm

fdjac2

lmpar

qrfac

qrsolv

blas_enorm32(x[, n, offx, incx]) = <fortran dnrm2>
blas_enorm64(x[, n, offx, incx]) = <fortran dnrm2>
calc_covar(rr, ipvt=None, tol=1e-14)[source]
call(fcn, x, functkw, fjac=None)[source]
defiter(fcn, x, iter, fnorm=None, functkw=None, quiet=0, iterstop=None, parinfo=None, format=None, pformat='%.10g', dof=1)[source]
enorm(vec)[source]
fdjac2(fcn, x, fvec, step=None, ulimited=None, ulimit=None, dside=None, epsfcn=None, autoderivative=1, functkw=None, xall=None, ifree=None, dstep=None)[source]
lmpar(r, ipvt, diag, qtb, delta, x, sdiag, par=None)[source]
parinfo(parinfo=None, key='a', default=None, n=0)[source]
qrfac(a, pivot=0)[source]
qrsolv(r, ipvt, diag, qtb, sdiag)[source]
tie(p, ptied=None)[source]

lib.geometry102

pacman.lib.geometry102.dispersion(X_ref, Y_ref)[source]

Calculates coefficients for the dispersion solution. See also: https://ui.adsabs.harvard.edu/abs/2009wfc..rept...18K/abstract

Parameters:
X_ref, Y_ref

centroid position in physical pixels

pacman.lib.geometry102.trace(X_ref, Y_ref)[source]

Calculates the slope and intercept for the trace, given the position of the direct image in physical pixels. These coefficients are for the WFC3 G102 grism. See also: https://ui.adsabs.harvard.edu/abs/2009wfc..rept...18K/abstract

lib.geometry141

pacman.lib.geometry141.dispersion(X_ref, Y_ref)[source]

Calculates coefficients for the dispersion solution. See also: https://ui.adsabs.harvard.edu/abs/2009wfc..rept...17K/abstract

Parameters:
X_ref, Y_ref

centroid position in physical pixels

pacman.lib.geometry141.trace(X_ref, Y_ref)[source]

Calculates the slope and intercept for the trace, given the position of the direct image in physical pixels. These coefficients are for the WFC3 G141 grism. See also: https://ui.adsabs.harvard.edu/abs/2009wfc..rept...17K/abstract

lib.optextr

pacman.lib.optextr.diagnostics_plot(D, M, indmax, outlier_array, f_opt, profile, i, ii, meta)[source]
pacman.lib.optextr.optextr(D, err, f_std, var_std, M, nsmooth, sig_cut, save_optextr_plot, i_sp, ii_sp, meta)[source]

Function to optimally extract a spectrum.

Parameters:
D:

data array (already background subtracted)

err:

error array (in addition to photon noise; e.g. error due to background subtraction)

f_std:

box-extracted spectrum (from step 4 of Horne)

var_std:

variance of standard spectrum (also from step 4)

M:

array masking bad pixels; 0 is bad and 1 is good

nsmooth:

number of pixels to smooth over to estimate the spatial profile (7 works well)

sig_cut:

cutoff sigma for flagging outliers (10.0 works well)

diagnostics:

boolean flag specifying whether to make diagnostic plots

Returns:
f_opt, var_opt:

optimally extracted spectrum and its variance
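
For orientation, the core of Horne-style optimal extraction that this routine applies per wavelength column, as a sketch (profile estimation, smoothing over nsmooth pixels, and the sig_cut outlier rejection are omitted; the names follow the parameter list above):

import numpy as np

def optimal_extract_column(D, V, P, M):
    """Optimally weighted sum over the spatial direction for one wavelength column.

    D : background-subtracted data values along the column
    V : variance of each pixel (photon noise plus extra error terms)
    P : normalized spatial profile (sums to 1 along the column)
    M : bad-pixel mask, 1 = good, 0 = bad
    """
    denom = np.sum(M * P ** 2 / V)
    f_opt = np.sum(M * P * D / V) / denom
    var_opt = np.sum(M * P) / denom
    return f_opt, var_opt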

pacman.lib.optextr.smooth(x, nsmooth)[source]

Applies a boxcar smooth of length nsmooth to the vector x. Returns the smoothed vector.

lib.sort_nicely

pacman.lib.sort_nicely.alphanum_key(string: str) str[source]

Turn a string into a list of string and number chunks. “z23a” -> [“z”, 23, “a”].

pacman.lib.sort_nicely.sort_nicely(input_list: List[Union[str, Path]]) List[Union[str, Path]][source]

Sort the given list in the way that humans expect.
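
The usual implementation of this human/natural sort, as a sketch (the module's actual regex and handling of Path objects may differ):

import re

def alphanum_key(s):
    """Split 'z23a' into ['z', 23, 'a'] so that numbers sort numerically."""
    return [int(chunk) if chunk.isdigit() else chunk
            for chunk in re.split(r'(\d+)', str(s))]

def sort_nicely(items):
    """Sort filenames the way humans expect: file2 before file10."""
    return sorted(items, key=alphanum_key)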

pacman.lib.sort_nicely.tryint(s)[source]

lib.read_fit_par

pacman.lib.read_fit_par.get_step_size(data, params, meta, fit_par)[source]

Get the step sizes which were set in the fit_par.txt file by the user

pacman.lib.read_fit_par.read_fit_par_for_ls(parinfo, params_s, data, fit_par)[source]

This function reads in the rows of the fit_par.txt file and saves them in a format that can be used with MPFIT.

lib.read_data

class pacman.lib.read_data.Data(data_file, meta, fit_par, clip_idx=[])[source]

Bases: object

Reads in and stores raw light curve data (parameters: data_file, obs_par, fit_par).

pacman.lib.read_data.new_time(array)[source]

This function makes sure the time in a visit (orbit) starts at 0 when a new visit (orbit) starts.

pacman.lib.read_data.remove_dupl(seq)[source]

lib.model

class pacman.lib.model.Model(data, myfuncs)[source]

Bases: object

Stores model fit and related parameters

Methods

fit

fit(data, params)[source]
pacman.lib.model.calc_astro(t, params, data, funcs, visit)[source]
pacman.lib.model.calc_gp(idx, params, data, resid, funcs, visit)[source]
pacman.lib.model.calc_sys(t, params, data, funcs, visit)[source]

lib.least_squares

pacman.lib.least_squares.lsq_fit(fit_par, data, meta, model, myfuncs, noclip=False)[source]

Runs the least square routine using MPFIT.

pacman.lib.least_squares.residuals(params, data, model, fjac=None)[source]

Calculates the residuals of the fit.

lib.mcmc

pacman.lib.mcmc.lnprior(theta, data)[source]

Calculate the log-prior.

pacman.lib.mcmc.lnprob(theta, params, data, model, nvisit, fixed_array, tied_array, free_array)[source]

Calculates the log-likelihood.

pacman.lib.mcmc.mcmc_fit(data, model, params, file_name, meta, fit_par)[source]

Calls the emcee package and does the sampling.

lib.models

lib.models.constant

pacman.lib.models.constant.constant(t, data, params, visit=0)[source]

lib.models.constants_cj

pacman.lib.models.constants_cj.constants_cj(t, data, params, visit=0)[source]

Example

In [47]: iexp_orb_sp = np.array([0,1,2,3,0,1,2,3,4])

In [48]: Cs = np.array([[7.8], [8.3], [8.5], [8.6], [8.65]])

In [49]: C_data_mask = [iexp_orb_sp == i for i in range(max(iexp_orb_sp)+1)]

In [50]: C_data_mask
Out[50]:
[array([ True, False, False, False,  True, False, False, False, False]),
 array([False,  True, False, False, False,  True, False, False, False]),
 array([False, False,  True, False, False, False,  True, False, False]),
 array([False, False, False,  True, False, False, False,  True, False]),
 array([False, False, False, False, False, False, False, False,  True])]

In [51]: C_data_mask*Cs
Out[51]:
array([[7.8 , 0.  , 0.  , 0.  , 7.8 , 0.  , 0.  , 0.  , 0.  ],
       [0.  , 8.3 , 0.  , 0.  , 0.  , 8.3 , 0.  , 0.  , 0.  ],
       [0.  , 0.  , 8.5 , 0.  , 0.  , 0.  , 8.5 , 0.  , 0.  ],
       [0.  , 0.  , 0.  , 8.6 , 0.  , 0.  , 0.  , 8.6 , 0.  ],
       [0.  , 0.  , 0.  , 0.  , 0.  , 0.  , 0.  , 0.  , 8.65]])

In [52]: np.sum(C_data_mask*Cs, axis=0)
Out[52]: array([7.8 , 8.3 , 8.5 , 8.6 , 7.8 , 8.3 , 8.5 , 8.6 , 8.65])

lib.models.model_ramp

pacman.lib.models.model_ramp.model_ramp(t, data, params, visit: float = 0.0)[source]

lib.models.polynomial1

pacman.lib.models.polynomial1.polynomial1(t, data, params, visit: float = 0.0)[source]

lib.models.polynomial2

pacman.lib.models.polynomial2.polynomial2(t, data, params, visit: float = 0.0)[source]

lib.models.exponential_visit

pacman.lib.models.exponential_visit.exponential_visit(t, data, params, visit: float = 0.0)[source]

lib.models.logarithmic_visit

pacman.lib.models.logarithmic_visit.logarithmic_visit(t, data, params, visit: float = 0.0)[source]

lib.models.upstream_downstream

pacman.lib.models.upstream_downstream.upstream_downstream(t, data, params, visit: float = 0.0)[source]

lib.models.divide_white

pacman.lib.models.divide_white.divide_white(t, data, params, visit=0)[source]

lib.models.transit

pacman.lib.models.transit.transit(t, data, params, visit: float = 0.0)[source]

lib.models.eclipse

pacman.lib.models.eclipse.eclipse(t, data, params, visit: float = 0.0)[source]

lib.models.sine1

pacman.lib.models.sine1.sine1(t, data, params)[source]

lib.models.sine2

pacman.lib.models.sine2.sine2(t, data, params, visit)[source]

lib.models.sine_curve

pacman.lib.models.sine_curve.get_phaselc(t, p, data, v_num)[source]

lib.models.uncmulti

pacman.lib.models.uncmulti.uncmulti(t, data, params, visit: float = 0.0)[source]