author    Joseph Hunkeler <jhunkeler@gmail.com>  2015-07-08 20:46:52 -0400
committer Joseph Hunkeler <jhunkeler@gmail.com>  2015-07-08 20:46:52 -0400
commit    fa080de7afc95aa1c19a6e6fc0e0708ced2eadc4 (patch)
tree      bdda434976bc09c864f2e4fa6f16ba1952b1e555 /noao/onedspec/doc/sys
Initial commit
Diffstat (limited to 'noao/onedspec/doc/sys')
-rw-r--r--   noao/onedspec/doc/sys/1and2dspec.hlp          66
-rw-r--r--   noao/onedspec/doc/sys/Headers.hlp             189
-rw-r--r--   noao/onedspec/doc/sys/Onedspec.hlp            2219
-rw-r--r--   noao/onedspec/doc/sys/Review.hlp              512
-rw-r--r--   noao/onedspec/doc/sys/TODO                    28
-rw-r--r--   noao/onedspec/doc/sys/coincor.ms              46
-rw-r--r--   noao/onedspec/doc/sys/identify.ms             347
-rw-r--r--   noao/onedspec/doc/sys/onedproto.ms            1673
-rw-r--r--   noao/onedspec/doc/sys/onedv210.ms             680
-rw-r--r--   noao/onedspec/doc/sys/revisions.v3.ms         382
-rw-r--r--   noao/onedspec/doc/sys/revisions.v31.ms        329
-rw-r--r--   noao/onedspec/doc/sys/revisions.v31.ms.bak    307
-rw-r--r--   noao/onedspec/doc/sys/rvidentify.ms           304
-rw-r--r--   noao/onedspec/doc/sys/sensfunc.ms             83
-rw-r--r--   noao/onedspec/doc/sys/specwcs.ms              612
15 files changed, 7777 insertions, 0 deletions
diff --git a/noao/onedspec/doc/sys/1and2dspec.hlp b/noao/onedspec/doc/sys/1and2dspec.hlp
new file mode 100644
index 00000000..01f01763
--- /dev/null
+++ b/noao/onedspec/doc/sys/1and2dspec.hlp
@@ -0,0 +1,66 @@
+.help onedspec (Oct84) "Spectral Reductions"
+.ce
+Relationship Between Onedspec and Twodspec
+.ce
+Discussion
+.ce
+October 24, 1984
+.sp 3
+Two types of interactions between one dimensional and two dimensional
+spectra may be defined:
+
+.ls (1)
+Perform a one dimensional operation on the average or sum of a set
+of lines in a two dimensional image.
+.le
+.ls (2)
+Perform a one dimensional operation successively on a set of lines
+in a two dimensional image.
+.le
+
+The two functions might be combined as:
+
+.ls (3)
+Perform a one dimensional operation on the average or sum of a set
+of lines in a two dimensional image and apply the one dimensional
+result successively on a set of lines in a two dimensional image.
+.le
+
+Examples of this are dispersion solutions and flux calibrations for
+longslit spectra.
+
+ Some choices for implementation are:
+
+.ls (1)
+Use a 2-D to 1-D operator to create a 1-D spectrum by averaging or summing
+lines.
+.le
+.ls (2)
+Use an operator which applies a 1-D arithmetic correction to a 2-D image.
+Alternatively, expand a 1-D correction to a 2-D correction.
+.le
+.ls (3)
+Convert the 2-D image to a group of 1-D images and provide the 1-D operators
+with the ability to perform averaging or summation.
+.le
+.ls (4)
+To perform a one dimensional operation successively on
+a set of lines first convert the two dimensional image into a group
+of one dimensional spectra. Perform the 1-D operation on the desired
+elements of the group and then reconstruct the 2-D image from the group
+of 1-D images.
+.le
+.ls (5)
+Build separate operators for 2-D images using the 1-D subroutines.
+.le
+.ls (6)
+Provide the ability in the 1-D operators to perform the desired 2-D
+operations directly.
+.le
+
+ Options (1) and (2) are essentially what is done on the IPPS. Option (5)
+would lessen the amount of development but increase the number of tasks
+to be written. I find option (6) desirable because of its
+increased generality but it would require a
+further definition of the data structures allowed and the syntax.
+.endhelp
diff --git a/noao/onedspec/doc/sys/Headers.hlp b/noao/onedspec/doc/sys/Headers.hlp
new file mode 100644
index 00000000..9bb394b7
--- /dev/null
+++ b/noao/onedspec/doc/sys/Headers.hlp
@@ -0,0 +1,189 @@
+.LP
+.SH
+Image Header Parameters
+.PP
+The ONEDSPEC package uses the extended image header to extract
+information required to direct processing of spectra. If the
+header information were to be ignored, the user would need to
+enter observing parameters to the program at the risk of
+typographical errors, and with the burden of supplying the
+data. For more than a few spectra this is a tedious job,
+and the image header information provides the means to eliminate
+almost all the effort and streamline the processing.
+.PP
+However, this requires that the header information be present,
+correct, and in a recognizable format. To meet the goal of
+providing a functional package in May 1985, the first iteration
+of the header format was to simply adopt the IIDS/IRS headers.
+This allowed for processing of the data which would be first
+used heavily on the system, but would need to be augmented at
+a later date. The header elements may be present in any order,
+but must be in a FITS-like format and have the following names
+and formats for the value fields:
+.sp 1
+.TS
+l c l
+l l l.
+Parameter Value Type Definition
+
+HA SX Hour angle (+ for west, - for east)
+RA SX Right Ascension
+DEC SX Declination
+UT SX Universal time
+ST SX Sidereal time
+AIRMASS R Observing airmass (effective)
+W0 R Wavelength at center of pixel 1
+WPC R Pixel-to-pixel wavelength difference
+NP1 I Index to first pixel containing good data (actually first-1)
+NP2 I Index to last pixel containing good data (last really)
+EXPOSURE I Exposure time in seconds (ITIME is an accepted alias)
+BEAM-NUM I Instrument aperture used for this data (0-49)
+SMODE I Number of apertures in instrument - 1 (IIDS only)
+OFLAG I Object or sky flag (0=sky, 1=object)
+DF-FLAG I Dispersion fit made on this spectrum (I=nr coefs in fit)
+SM-FLAG I Smoothing operation performed on this spectrum (I=box size)
+QF-FLAG I Flat field fit performed on this spectrum (0=yes)
+DC-FLAG I Spectrum has been dispersion corrected (0=linear, 1=logarithmic)
+QD-FLAG I Spectrum has been flat fielded (0=yes)
+EX-FLAG I Spectrum has been extinction corrected (0=yes)
+BS-FLAG I Spectrum is derived from a beam-switch operation (0=yes)
+CA-FLAG I Spectrum has been calibrated to a flux scale (0=yes)
+CO-FLAG I Spectrum has been coincidence corrected (0=yes)
+DF1 I If DF-FLAG is set, then coefficients DF1-DFn (n <= 25) exist
+.TE
+.PP
+The values for the parameters follow the guidelines adopted for
+FITS format tapes. All keywords occupy 8 columns and contain
+trailing blanks. Column 9 is an "=" followed by a space. The value field
+begins in column 11. Comments to the parameter may follow a "/" after
+the value field. The value type code is as follows:
+.RS
+.IP SX
+This is a sexagesimal string of the form '12:34:56 ' where the first
+quote appears in column 11 and the last in column 30.
+.IP R
+This is a floating point ("real") value beginning in column 11 and
+extending to column 30 with leading blanks.
+.IP I
+This is an integer value beginning in column 11 and extending to
+column 30 with leading blanks.
+.RE
+.sp 1
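+.PP
+As an illustration only (the numerical values shown are hypothetical),
+cards written according to these rules might appear as:
+.sp 1
+.nf
+W0      =               4500.0 / Wavelength at center of pixel 1
+WPC     =                  1.2 / Pixel-to-pixel wavelength difference
+EXPOSURE=                  600 / Exposure time in seconds
+OFLAG   =                    1 / Object or sky flag (0=sky, 1=object)
+DF-FLAG =                   -1 / No dispersion fit has been made
+.fi
+.sp 1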
+.PP
+The parameters having FLAG designations all default to -1 to indicate
+that an operation has not been performed.
+The ONEDSPEC subroutines "load_ids_hdr" and "store_keywords" follow
+these rules when reading and writing spectral header fields.
+If not present in a header, load_ids_hdr will assume a value of zero
+except that all flags are set to -1, and the object flag parameter
+defaults to object.
+.PP
+When writing an image, only the above parameters are stored by store_keywords.
+Other header information is lost. This needs to be improved.
+.PP
+Not all programs need all the header elements. The following table
+indicates who needs what. Tasks not listed generally do not require
+any header information. Header elements not listed are not used.
+The task SLIST requires all the elements listed above.
+The task WIDTAPE requires almost all (except NP1 and NP2).
+The headings are abbreviated task names as follows:
+.sp 1
+.nr PS 8
+.ps 8
+.TS
+center;
+l l | l l | l l.
+ADD addsets COE coefs FIT flatfit
+BSW bswitch COM combine REB rebin
+CAL calibrate DIS dispcor SPL splot
+COI coincor FDV flatdiv STA standard
+.TE
+.sp 1
+.TS
+center, tab(/);
+l | l | l | l | l | l | l | l | l | l | l | l | l.
+Key/ADD/BSW/CAL/COI/COE/COM/DIS/FDV/FIT/REB/SPL/STA
+_
+HA// X////////// X/
+RA// X////////// X/
+DEC// X////////// X/
+ST// X////////// X/
+UT// X////////// X/
+AIRMASS// X////////// X/
+W0// X/ X/// X//// X/ X/ X/
+WPC// X/ X/// X//// X/ X/ X/
+NP1/////////// X///
+NP2/////////// X///
+EXPOSURE/ X/ X/// X/ X///// X///
+BEAM-NUM// X/ X//// X/ X/ X// X/ X//
+OFLAG// X////////// X/
+DF-FLAG//// X
+DC-FLAG// X//// X//// X/ X/ X/
+QD-FLAG//////// X/
+EX-FLAG// X/
+BS-FLAG// X/
+CA-FLAG/ X// X//////// X/
+CO-FLAG///// X//
+DFn//// X/
+.TE
+.nr PS 10
+.ps 10
+.bp
+.SH
+Headers From Other Instruments
+.PP
+The header elements listed above are currently created only when reading
+IIDS and IRS data from one of the specific readers: RIDSMTN and RIDSFILE.
+The time-like parameters, (RA, DEC, UT, ST, HA), are created in a
+compatible fashion by RCAMERA and RFITS (when the FITS tape is written
+by the KPNO CCD systems).
+.PP
+For any other header information, the ONEDSPEC package is at a loss
+unless the necessary information is edited into the headers with
+an editing task such as HEDIT. This is not an acceptable long term
+mode of operation, and the following suggestion is one approach to
+the header problem.
+.PP
+A translation table can be created as a text file which outlines
+the mapping of existing header elements to those required by the
+ONEDSPEC package. A mapping line is needed for each parameter
+and may take the form:
+.sp 1
+.RS
+.DC
+1D_param default hdr_param key_start value_start type conversion
+.DE
+.RE
+where the elements of an entry have the following definitions:
+.TS
+center;
+l l.
+1D_param T{The name of the parameter expected by the ONEDSPEC package,
+such as EXPOSURE, OFLAG, BEAM-NUM. T}
+
+default T{A value to be used if no entry is found for this parameter.T}
+
+hdr_param T{The string actually present in the existing image header to be
+associated with the ONEDSPEC parameter. T}
+
+key_start T{The starting column number at which the string starts
+in the header. T}
+
+value_start T{The starting column number at which the string describing the
+value of the parameter starts in the header. T}
+
+type T{The format type of the parameter: integer, real, string, boolean,
+sexagesimal. T}
+
+conversion T{If the format type is string, a further conversion may
+optionally be made to one of the formats listed under type. T}
+.TE
+.sp 1
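+.PP
+As a purely illustrative example, an instrument whose header records the
+integration time under the keyword ITIME, laid out in the standard FITS
+columns described earlier, might be mapped to the EXPOSURE parameter with
+an entry of the form:
+.sp 1
+.RS
+.DC
+EXPOSURE 0 ITIME 1 11 integer
+.DE
+.RE
+.sp 1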
+.PP
+A translation file can be built for each instrument and its
+peculiar header formats, and the file name associated with a
+package parameter. The two subroutines in ONEDSPEC dealing
+directly with the headers (load_ids_hdr and store_keywords)
+can be modified or replaced to access this file and
+translate the header elements.
+.endhelp
diff --git a/noao/onedspec/doc/sys/Onedspec.hlp b/noao/onedspec/doc/sys/Onedspec.hlp
new file mode 100644
index 00000000..85a3f20e
--- /dev/null
+++ b/noao/onedspec/doc/sys/Onedspec.hlp
@@ -0,0 +1,2219 @@
+.help spbasic
+.sh
+One Dimensional Package - Basic Operators
+
+.sh
+INTRODUCTION
+
+ The IRAF One Dimensional Package is intended to provide the basic
+tools required to reduce, analyze, and display data having a
+single dimension. This primarily refers to spectra, but may have
+applicability to time series photometry, or any other
+source of data which can be considered a simple vector.
+All such data will be referred to as spectra in the following discussion.
+Furthermore, the spectrum vector is assumed to be equally spaced
+along the independent variable (wavelength, channel, frequency,
+wavenumber,...). For the purposes of discussion, the independent
+variable will be referred to as wavelength but may be any of the
+possible physical transformations.
+
+ Spectra are to be stored as 2 dimensional IRAF floating point images
+having a single line
+and are therefore limited to lengths smaller than or equal to the
+largest representable positive integer. For 32 bit machines, this
+is about 2 billion points, so that disk space will likely be the
+operational limit. The precision and dynamic range for each pixel
+will be determined by the local machine.
+The second dimension of the spectrum is spatial, and therefore
+represents a special case of the long slit spectroscopic mode.
+
+ Each spectrum will, by default, be stored as a separate image
+file. Alternatively, an association
+can be declared for a related set of spectra
+through a "data group" mechanism. A data group can be defined to
+contain any number of related spectra so that an operation can
+be specified for the group. For example, one can group a single
+night of IIDS spectra into a group labeled JAN28, and then
+wavelength linearize JAN28. This helps minimize
+the user interaction which would otherwise be repetitive, and
+also reduces the user bookkeeping required.
+
+ Data input to the package is provided through the DATAIO
+package. Tape readers will be provided for FITS, IIDS and IRS mountain
+formats, Text ("card-image"), REDUCER and PDS. The descriptor fields
+included in these formats will be mapped into standard IRAF
+image header fields when possible. Special fields will be
+added to the image header to represent instrument
+related parameters.
+
+ Data output to tape (for visitor take home) will be
+either in FITS or text format.
+
+ A variety of graphics display options will be provided
+for both interactive use and for hardcopy generation.
+Scale expansion and contraction, labeling, multiple spectra
+plots, and axis limit specification are to be included in the
+options.
+
+ Specific reduction scripts will be provided to efficiently
+process raw data from the Kitt Peak instruments IIDS and IRS.
+
+
+.sh
+SCOPE OF SPECIFICATIONS
+
+This paper specifies the command format, parameters, and
+operations for the Basic contents of the One Dimensional
+Spectral Package. The Basic functions are those comprising the
+minimum set to reduce a large variety of spectra.
+More complicated operators and analysis functions
+are described in a companion paper on Intermediate Functions.
+Major projects in spectral analysis will be considered at
+a later date in the Advanced function set.
+
+The primary functions within the Basic operator set are:
+
+.ls 4 Transport
+Primarily magtape readers for the common tape formats. Included
+are FITS, IIDS/IRS, REDUCER, PDS, and Card-image formats.
+Tape writers will be initially limited to FITS and Card-image.
+.le
+.ls 4 Mathematical
+Add, subtract, multiply, divide spectra by spectra or constants.
+Apply functional operators such as log, exp, sqrt, sin, cos.
+Weighted sums and averages of spectra.
+.le
+.ls 4 Reduction operators
+Line identification, dispersion solution, flux calibration,
+coincidence correction, atmospheric extinction correction,
+flat fielding.
+.le
+.ls 4 Plotting
+Terminal package to expand, overplot, annotate plots. Hard
+copy package for printer/plotters.
+.le
+.ls 4 Utilities
+Header examination and modification. List, copy, delete spectra.
+Define, add, delete entries in a data group.
+.le
+.ls 4 Artificial spectra
+Generate ramps, Gaussian and Voigt lines, noise.
+.le
+
+These functions will be considered in detail in the following
+discussion.
+
+.ks
+A summary of the commands is given below:
+
+.nf
+rfits -- Convert FITS data files to IRAF data files
+riids -- Convert IIDS mountain tape format to IRAF data files
+rreducer -- Convert Reducer format tape to IRAF data files
+rpds -- Convert a PDS format tape to IRAF data files
+rtext -- Convert a card-image text file to an IRAF image file
+wfits -- Convert IRAF data files to FITS data format
+wtext -- Convert an IRAF image file to a card-image text file
+.sp 1
+coin_cor -- Correct specified spectra for photon coincidence
+line_list -- Create a new line list, or modify an existing one
+mlinid -- Manually identify line features in a spectrum
+alinid -- Automatically locate spectral features in a spectrum
+disp_sol -- Determine the dispersion relation for a set of spectra
+disp_cor -- Linearize spectra having dispersion relation coefficients
+cr_flat -- Create a flat field spectrum
+flt_field -- Correct spectra for pixel-to-pixel variations
+std_star -- Define the standard stars to be used for solving the
+ extinction and system sensitivity functions
+crext_func -- Create an extinction function from a set of observations
+crsens_func -- Create system sensitivity function
+ext_cor -- Extinction correct specified spectra
+sens_cor -- Correct the specified spectra for system sensitivity
+.fi
+.ju
+.ke
+
+.bp
+.sh
+TRANSPORT - INPUT
+
+Although the primary data input source for the near future
+will be magtape, direct links from other computers will
+be a likely source of input. The IRAF DATAIO package
+treats magtape as simple bit streams so that alternate
+input devices (e.g. disk, ethernet, phone lines) can also
+be accommodated with no programming modifications.
+
+This section describes the different formats to be made
+available in the initial release of the Spectroscopic
+package. Additional formats may be added if needed.
+
+In general, the following information will be copied to
+the standard image header: length of spectrum, title,
+abscissa units, brightness units, reference pixel
+abscissa value and increment, right ascension and declination
+of telescope.
+
+Non-standard header parameters include but are not limited to:
+integration time, UT and LST of the observation, airmass (or
+zenith distance), processing history, and comments.
+
+.sh
+FITS
+.ih
+NAME
+rfits -- Convert FITS data files to IRAF data files
+.ih
+USAGE
+rfits [source, filename, files]
+.ih
+DESCRIPTION
+FITS data is read from the specified source.
+The FITS header may optionally be printed on the standard
+output as either a full listing or a short description. Image data may
+optionally be converted to an IRAF image of specified data type.
+
+Eventually all data from the mountain will be in FITS format,
+with the exception of time-critical data transfer projects
+and special applications. The IRAF FITS reader will
+copy the data to disk for most applications.
+
+.ih
+PARAMETERS
+.ls 4 fits_source
+The FITS data source. If the data source is a disk file or an explicit tape file
+specification of the form mt*[n] where n is a file number then only that file
+is converted. If the general tape device name is given, i.e. mta, mtb800, etc,
+then the files specified by the files parameter will be read from the tape.
+.le
+.ls filename
+The IRAF file which will receive the FITS data if the make_image parameter
+switch is set. For tape files specified by the files parameter the filename
+will be used as a prefix and the file number will be appended. Otherwise,
+the file will be named as specified. Thus,
+reading files 1 and 3 from a FITS tape with a filename of data will produce
+the files data1 and data3. It is legal to use a null filename. However,
+converting a source without a file number and with a null filename will cause
+a default file fits to be created.
+.le
+.ls files
+The files to be read from a tape are specified by the files string. The
+string can consist of any sequence of file numbers separated by
+at least one of whitespace, comma, or dash.
+A dash specifies a range of files. For example the string
+
+1 2, 3 - 5,8-6
+
+will convert the files 1 through 8.
+.le
+.ls print_header
+If this switch is set, header information is printed on the standard
+output. (default = yes)
+.le
+.ls short_header
+This switch controls the format of the header information printed when the
+print_header switch is set.
+When the short_header switch is set only the output filename,
+the FITS OBJECT string, and the image dimensions are printed.
+Otherwise, the output filename is followed by the full FITS header.
+(default = yes)
+.le
+.ls bytes_per_record
+The FITS standard record size is 2880 bytes which is the default for this
+parameter. However, non-standard FITS tapes with different record sizes can
+be read by setting the appropriate size.
+.le
+.ls make_image
+This switch determines whether FITS image data is converted to an IRAF image
+file. This switch is set to no to obtain just header information with the
+print_header switch. (default = yes)
+.le
+.ls data_type
+The IRAF image file may be of a different data type than the FITS image data.
+The data type may be specified as s for short, l for long, and r for real.
+The user must beware of truncation problems if an inappropriate data type is
+specified. If the FITS keywords BSCALE and BZERO are found then the image
+data is scaled appropriately. In this case the real data type may be most
+appropriate.
+.le
+.sh
+For spectroscopic applications, the parameter data_type would be
+specified as r for real, and the filename would probably be assigned
+as the "group" name as well. (see section on data groups.)
+
+
+.sh
+IIDS/IRS
+.ih
+NAME
+riids -- Convert IIDS mountain tape format to IRAF data files
+.ih
+USAGE
+riids [source, filename, form, records]
+.ih
+DESCRIPTION
+IIDS/IRS mountain format data is read from the specified source.
+The header may be printed
+on the standard output either in short form, label only, or a long
+form containing telescope and time information, processing flags,
+and wavelength solution values.
+
+Either raw or "mountain reduced" tapes can be specified with the
+parameter form.
+
+The IIDS format is destined for extinction. A FITS format will
+replace the current tape format, but an interim period will exist
+for which this tape reader must exist.
+.ih
+PARAMETERS
+.ls 4 iids_source
+The data source, either magtape or a data stream (e.g. disk file).
+The current IIDS tape format produces tapes having only a single
+file. If the source is a magtape, the general tape specification
+mt*[n], should either have n specified as 1, or [n] should not be present.
+.le
+.ls 4 filename
+The IRAF file which will contain the data if the make_image parameter
+is set. The filename will be used as a prefix and the record number
+will be used as the suffix. Thus reading records 1 through 100 from
+an IIDS tape with a file name of 'blue' will produce 100 files having
+names blue1, blue2, ..., blue100. A null filename will default to 'iids'.
+.le
+.ls 4 form
+This string parameter defines the tape to be either 'new' or 'red'.
+The 'new' designation refers to tapes made after January 1977, and
+'red' refers to mountain reduced tapes. (default = 'red')
+.le
+.ls 4 records
+The records specified by this string parameter will be copied to disk.
+The syntax is identical to that for the files parameter of the FITS reader.
+.le
+.ls 4 print_header
+If this switch is set, header information is printed on the standard
+output. (default = yes)
+.le
+.ls 4 short_header
+If this switch is set, only the filename and label information will be printed
+if the print_header switch is also set. If set to 'no', the long form
+will be printed. (default = yes)
+.le
+.ls 4 make_image
+See definition of this parameter under FITS.
+.le
+
+
+.sh
+REDUCER
+
+REDUCER tapes require several considerations beyond the
+previous simple formats. The spectra actually consist of
+many spectra having lengths of 4096 but slightly different
+spectral sampling. Thus, the reader can create many small
+independent spectra, or interpolate the data onto a common
+spectral scale to create a single large spectrum.
+The latter alternative seems to be more generally useful,
+unless the interpolation process introduces significant errors.
+Probably the initial reader will provide both options.
+
+A second consideration is the 60 bit word length conversion.
+The IRAF images are limited to 32 bit reals on most 32 bit machines.
+Some loss of precision and dynamic range will result while reading REDUCER
+format data.
+
+Also, there may be a considerable number (~100) of non-standard header
+elements. These can be handled in a normal fashion, and tools
+will be provided to extract or modify these elements as needed.
+New elements may be added as well.
+
+.ih
+NAME
+rreducer -- Convert Reducer format tape to IRAF data files
+.ih
+USAGE
+rreducer [source, filename, files]
+.ih
+DESCRIPTION
+REDUCER format data is read from the specified source.
+The header may be printed on the standard output either in short form
+consisting of the 80 character ID field, or a long form containing some
+selection (to be agreed upon) of the many header elements.
+
+Either a single long spectrum requiring interpolation
+to match the spectral characteristics of the first data block, or
+multiple short spectra having individual spectral parameters can
+be specified with the hidden parameter, interp.
+Interpolation is performed via a fifth order polynomial.
+
+Subsets of the spectrum can be selected with the blocks string
+parameter. This specifies which blocks in the file are to be extracted.
+
+.ih
+PARAMETERS
+.ls 4 reducer_source
+The data source, either magnetic tape or a data stream (e.g. disk
+file). See the definition of fits_source above for a description
+of how this parameter interacts with the files parameter.
+.le
+.ls 4 filename
+The filename which will contain the data.
+See the definition of this parameter under FITS.
+If no name is given, the default of 'reducer' will be used.
+.le
+.ls 4 files
+The files to be read from tape are given by the files string. See
+the description of this parameter under FITS.
+.le
+.ls 4 print_header
+If this switch is set header information will be printed on the
+standard output. (default = yes)
+.le
+.ls 4 short_header
+If this switch is set only the filename and the first 60 characters
+of the 80 character ID field will be printed if the print_header
+switch is also set. If set to no, the long form of the header
+will be printed, containing selected elements of the 100 word
+header record. (default = yes)
+.le
+.ls 4 make_image
+See the definition of this parameter under FITS.
+.le
+.ls 4 interp
+If this switch is set, a single long spectrum is produced. If
+set to no, multiple spectra will be generated, one for each
+header-data block. The resulting filenames will have suffixes
+of '.1' , '.2' ... '.n'. For example, if the given filename is
+fts and the tape file is 2, the resulting spectrum will be
+fts2 if interp is set to yes, but will be fts2.1, fts2.2, and
+fts2.3 if there are 3 header-data block sets and interp is set
+to no. (default = yes).
+.le
+.ls 4 blocks
+This string parameter allows selected extraction of the
+specified header-block sets, rather than the entire spectrum.
+Thus subsets of the spectrum may be extracted. The parameter
+specifies the starting block and ending block within a tape file.
+If an end-of-file is found prior to exhaustion of the
+specification, reading is terminated.
+For example, the string '12 19' specifies that the eight sets
+starting with the twelfth block are to be extracted to
+form the spectrum. (default = '1 32767', or all)
+.le
+
+
+.sh
+PDS
+
+Tapes from the new PDS 11/23 system will be either FITS or
+old format PDS 9 track tapes. This reader will accept the
+old format tapes which are based on the PDP 8 character set
+and either 10 or 12 bit format.
+
+.ih
+NAME
+rpds -- Convert a PDS format tape to IRAF data files
+.ih
+USAGE
+rpds [source, filename, files]
+.ih
+DESCRIPTION
+PDS format data is read from the specified source. The header
+may be printed on the standard output either in short form
+consisting of the 40 character ID field, filename, and size,
+or in long form including raster parameters and origin.
+
+Because PDS data is limited to no more than 12 bit data, the output image
+will be short integers if the number of lines ("scans") implies
+two dimensional data. If one dimensional data is implied, the
+output image will be converted to reals.
+.ih
+PARAMETERS
+.ls 4 pds_source
+The data source, either magtape or a data stream. See the definition
+of fits_source above for a description of how this parameter interacts
+with the files parameter.
+.le
+.ls 4 filename
+If no filename is given, the default of 'pds' will be used.
+.le
+.ls 4 files
+See the definition of this parameter under FITS.
+.le
+.ls 4 print_header
+If this switch is set, header information will be printed on the
+standard output. (default = yes).
+.le
+.ls 4 short_header
+If this switch is set, only the filename, size, and the 40 character ID
+field will be printed if the print_header switch is also set.
+If set to no, the long form of the header will be printed
+containing the full information block (delta X, delta Y, scan type,
+speed, origin, corner, travel). (default = yes)
+.le
+.ls 4 make_image
+See the definition of this parameter under FITS. (default = yes)
+.le
+.ls 4 data_type
+Specifies the IRAF image file output data type. Normally one
+dimensional PDS data (NSCANS=1) will be stored as real and
+two dimensional PDS data (NSCANS>1) will be stored as short.
+The data type may be specified as s (short), l (long), or r
+(real).
+.le
+
+
+.sh
+TEXT (Read Card-Image)
+
+Card-image tapes are probably the most portable form of data transport.
+Unlike FITS, there is no standard for internally documenting the
+contents of the text file. Header information is essentially
+lost. This makes card-image data transfer a relatively unattractive
+format.
+
+
+.ih
+NAME
+rtext -- Convert a card-image text file to an IRAF image file.
+.ih
+USAGE
+rtext [source, filename, files, ncols, nlines, label]
+.ih
+DESCRIPTION
+The card-image text file specified by the source parameter is
+converted to an IRAF image file. The file is read in a free form
+mode (values separated by spaces) converting data along lines (1-ncols) first.
+No header information is stored except for the image size and
+the label.
+
+If additional header information is to be stored, the standard
+image header utility must be used.
+
+Pixel values exactly equal to some constant will be assumed to be blanks
+if the blank switch is set to yes. The flag value for blanks can be
+set with the blank_value parameter.
+
+.ih
+PARAMETERS
+.ls 4 text_source
+The input data source. See the definition of this parameter under FITS.
+.le
+.ls 4 filename
+The IRAF file which will contain the image data if the make_image
+switch is set. If no filename is given, the default of 'text'
+will be used.
+.le
+.ls 4 files
+See the definition of this parameter under FITS.
+.le
+.ls 4 ncols
+The number of columns of data which describe the image extent.
+.le
+.ls 4 nlines
+The number of lines (or 'rows') of data which describe the image extent.
+For one dimensional spectra, this parameter will be 1.
+.le
+.ls 4 label
+This string parameter becomes the image identification label.
+Up to 80 characters may be stored.
+.le
+.ls 4 print_header
+If this switch is set, header information consisting of the filename,
+image label, and image size will be printed on the standard output.
+(default = yes)
+.le
+.ls 4 make_image
+If this switch is set, an IRAF image will be created. (default = yes)
+.le
+.ls 4 data_type
+The IRAF image may be either s (short), l (long), or r (real).
+(default = r)
+.le
+.ls 4 card_length
+The number of columns on the "card" in the card-image file.
+(default = 80)
+.le
+.ls 4 blank_value
+The value used to flag blank pixels if the blank switch is set to yes.
+(default = -32767)
+.le
+.ls 4 blank
+If this switch is set to yes, any pixel having exactly the value
+specified by the parameter blank_value will be flagged as a blank
+pixel. If set to no, all pixel values are assumed to be valid.
+.le
+
+
+.bp
+.sh
+TRANSPORT - OUTPUT
+
+The primary format for take away tapes will eventually be FITS.
+Because many facilities currently cannot read FITS format,
+the card-image format will also be provided.
+
+.sh
+FITS
+.ih
+NAME
+wfits -- Convert IRAF data files to FITS data format
+.ih
+USAGE
+wfits [destination, filename, files]
+.ih
+DESCRIPTION
+Data is read from the specified filename(s) and written to the
+destination, usually a magnetic tape specification.
+A short header consisting of the filename, size, and label
+may optionally be printed on the standard output.
+
+The data will be automatically scaled to either 16 or 32 bit integer format
+(BITPIX = 16 or 32) depending on the number of bits per pixel in the
+image data, unless the bitpix parameter is specified
+otherwise. The scaling parameters may be forced to
+exactly represent the original data (BSCALE = 1.0, BZERO = 0.0)
+by setting the scale switch to no.
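+
+Under the standard FITS convention, the scaled integers written to the
+destination are related to the original pixel values by
+.br
+    pixel value = BZERO + BSCALE * tape value.
+.br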
+
+If only the header information is to be copied to the destination,
+the write_image parameter can be set to no. If this is the case,
+then the NAXIS FITS keyword will be assigned the value of 0;
+otherwise the value for
+NAXIS will be taken from the IRAF image header.
+
+Each non-standard header element will be written into the FITS file
+in a form to be determined. These elements may be entered as FITS
+COMMENT records, or perhaps added to the file as FITS "special
+records".
+
+Other keywords will be written following standard FITS specifications.
+A few special cases will be set as follows:
+
+.ls 4 NAXISn
+The NAXIS1, NAXIS2, ... NAXISn values will be taken from the
+image header.
+.le
+.ls 4 OBJECT
+The first 60 characters of the image label will be used.
+.le
+.ls 4 BLANK
+Blank pixels will be written to tape having the IRAF value for
+indefinite appropriate to 8, 16, or 32 bit integers.
+.le
+.ls 4 ORIGIN = 'KPNO IRAF'
+.le
+
+.ih
+PARAMETERS
+.ls 4 fits_destination
+The data destination, usually a magnetic tape, but may be a disk
+file or STDOUT. If magtape,
+the tape should be specified with a file number of either 1
+or "eot". The file number refers to the file which will be written.
+Thus a file number of 2 would overwrite file 2. If the tape already
+has data written on it, the safest specification would be "eot".
+This forces the tape to be positioned between the double end-of-tape
+marks prior to writing.
+.le
+.ls 4 filename
+The IRAF filename providing the root for the source name. The files
+string, if given, will be used as the suffix for the file names
+to be written to tape. For example, if the filename is given as
+"image", and the files string is "1 -5", then files image1, image2,
+image3, image4, and image5 will be written to the destination
+in FITS format. If the files string is empty, only the specified
+filename will be converted.
+.le
+.ls 4 files
+See the definition of this parameter under the FITS reader.
+.le
+.ls 4 print_header
+If this switch is set, a short header will be printed on the
+standard output for each image converted. (default = yes)
+.le
+.ls 4 write_image
+If this switch is set to no, only header information will be
+written to the destination, but no image data.
+By using this parameter,
+one can generate a FITS tape containing header information only;
+such a tape may be used as a means of examining the IRAF image header
+or of generating a table of contents for a tape prior to writing
+the data. (default = yes)
+.le
+.ls 4 bitpix
+This parameter must be either 8, 16, or 32 to specify the
+allowable FITS pixel sizes.
+.le
+.ls 4 scale
+If this switch parameter is set to no, the FITS scaling
+parameters BSCALE and BZERO will be set to 1.0 and 0.0
+respectively. The data will be copied as it appears in the
+original data, with possible loss of dynamic range.
+Values exceeding the maximum value implied by the bitpix
+parameter will be set to the maximum representable value.
+(default = yes)
+.le
+
+
+.sh
+TEXT (Write Card-Image)
+
+Although this format is easily readable by the destination
+machine, there is no real standard for encoding the information,
+either the image data itself or the descriptive parameters.
+
+.ih
+NAME
+wtext -- Convert an IRAF image file to a card-image text file
+.ih
+USAGE
+wtext [destination, filename, files]
+.ih
+DESCRIPTION
+Data is read from the specified filename(s) and written to
+the destination, usually a magnetic tape. The data will be
+written as blank padded ASCII in a format consistent with the data type
+of the image pixels (integer or floating point).
+A short header description, consisting of the filename
+being converted and the image label, may optionally be printed
+on the standard output.
+
+The column length of the "card" may be changed from the default
+of 80 using the card_length parameter, and the field width
+to be allocated for each data element may be changed from the
+default of 10 columns by setting the field_width parameter.
+
+If the data are integers, the equivalent of the FORTRAN format
+I<field_width> will be used;
+if the data are reals, the equivalent of the FORTRAN format
+1P<n>E<field_width>.3
+will be used, where n is the number of elements which can
+be output into one card length. For the default values of
+card_length = 80, and field_width = 10, n will be 8. (1P8E10.3).
+
+Several cards may be written as a single "block" for
+improving the efficiency on magtape. Reasonable efficiency (80 percent)
+is attained with a blocking factor of 50, but this value
+may be modified by changing the parameter blocking_factor.
+If the last block is unfilled, it will be truncated to the
+minimum number of card images required to flush the data.
+
+A legitimate value must be defined to represent blank pixels.
+The parameter blank_value is used to define this value and
+defaults to -32767.
+
+.ih
+PARAMETERS
+.ls 4 text_destination
+See the definition for fits_destination for a description of this
+parameter.
+.le
+.ls 4 filename
+See the definition of this parameter under RFITS.
+.le
+.ls 4 files
+See the definition of this parameter under RFITS.
+.le
+.ls 4 print_header
+If this switch is set, a short header is printed for each
+file converted. (default = yes)
+.le
+.ls 4 card_length
+The number of columns on the "card" to be generated. (default = 80)
+.le
+.ls 4 field_width
+The number of columns on the "card" to be allocated for each pixel value.
+(default = 10)
+.le
+.ls 4 blocking_factor
+The number of card images to be written as a single blocked record.
+(default = 50)
+.le
+.ls 4 blank_value
+The value to be assigned to blank pixels for the purpose of
+representing them on the card image. (default = -32767)
+.le
+.bp
+
+
+.sh
+MATHEMATICAL OPERATORS
+
+Because spectra are stored as IRAF images, the standard image
+calculator utility provides the basic arithmetic services.
+For example, to create a spectrum (called spavg) which is the average of two
+other spectra (sp1 and sp2), one can enter the command:
+.ls 8 cl> imcalc "spavg = (sp1 + sp2) / 2"
+.le
+
+Other arithmetic operations are performed in a similar fashion.
+The general form of the command string is
+output_image = expression where "expression" may consist of:
+.ls 8 1. Spectra or segments of spectra
+A segment of a spectrum is specified by the notation spectrum[x1:x2]
+where x1 and x2 are pixel indices along the spectrum. For example,
+to create a spectrum which is the difference of the first 100
+pixels of two other spectra, the following command would be used:
+.ls 16 cl> imcalc "spdiff = sp1[1:100] - sp2[1:100]"
+.le
+An option to specify wavelength delineated segments may be added
+if this appears generally feasible.
+.le
+.ls 8 2. Numeric constants
+.le
+.ls 8 3. Data group names
+If an operation is performed on a data group, the output
+will be a new data group containing spectra which have been
+individually treated by the specified calculation.
+For example, if JAN28 is a group containing 100 congruent spectra
+and response is the instrumental response as a function of
+wavelength as determined from a set of standards, then
+after the following command is entered:
+.ls 16 cl> imcalc "JAN28X = JAN28 * response"
+.le
+
+a new data group will be generated containing 100 spectra which
+have been calibrated for the instrument response. The new spectra will
+be given names JAN28X1 through JAN28X100.
+.le
+.ls 8 4. Intrinsic functions
+.ks
+The following intrinsic functions are to be provided:
+
+.nf
+ abs atan2 cos int min sin
+ acos ceil cosh log mod sinh
+ aimag char double log10 nint sqrt
+ asin complex exp long real tan
+ atan conjug floor max short tanh
+.fi
+.ke
+.le
+
+Expression elements are to be
+separated by arithmetic and boolean operators (+,-,*,/,**,<,>,<=,>=,==,!,!=).
+The boolean operators provide a means to generate masks.
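+For example, a command of the form
+.ls 16 cl> imcalc "mask = sp1 > 100"
+.le
+
+might be used to generate a mask delineating the pixels of sp1 whose
+values exceed 100 (the spectrum name and threshold are illustrative only).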
+
+Rules governing operations on non-congruent spectra are not yet fully defined.
+.bp
+
+.sh
+REDUCTION OPERATORS
+
+Most of the reduction operators discussed in this section are
+intended for spectra of the IIDS/IRS class, although they
+are sufficiently general to accommodate data obtained with
+the CryoCam (either multi-aperture or long-slit mode), Echelle,
+Coude Feed, and photographic (PDS) instruments. Some
+application to FTS data is also feasible.
+
+It is intended that many of these operators will never be
+directly executed by users, but that they will be driven by
+CL command scripts tuned for individual instruments.
+In some cases the scripts will be fairly elaborate and extensive
+to lead new users through the reduction phase along a reliable
+path.
+
+It will no doubt be necessary to either modify some
+of these operators, or create more specific operators for
+certain other instruments. These operators should be considered
+a sample of what will eventually be available in this package.
+
+The basic path which most spectroscopic data follows is:
+
+.ls 4 1.
+Coincidence Correction.
+.ls
+Many detectors can respond to incoming photo-events at a limited
+rate. Once an event occurs, the detector cannot respond for some
+instrument dependent period, or dead-time. If events occur during
+this period, they will not be counted. If the event rate
+does not greatly exceed the detector limits, the uncounted events
+can be corrected for statistically.
+
+For many detectors, the coincidence correction is a well
+determined function and can be applied to the raw data
+to produce a reasonably corrected spectrum.
+.le
+.le
+.ls 4 2.
+Wavelength linearization.
+.ls
+Few instruments produce spectra having pixel to pixel wavelength
+differences which are constant across the entire spectrum.
+For subsequent reduction and analysis purposes, it is
+desirable to rectify the spectra. This is done by mapping the spectrum
+from the non-linear wavelength coordinate to a linear one.
+It is also desirable to provide a means of forcing the mapping
+to a grid which is common to many observations, and in some cases,
+to observations acquired with other instruments as well.
+
+The processes required for the mapping are outlined below.
+
+.le
+.ls 4 a.
+Manually identify a small number of spectral features having
+known wavelengths thereby creating a table of wavelength as
+a function of pixel number.
+.le
+.ls 4 b.
+Compute estimated relationship between wavelength and pixel number
+.le
+.ls 4 c.
+Automatically locate many features found in a user definable line list.
+Optionally locate additional features from other spectra using an alternate
+line list. (This allows spectra from several different sources to be used
+for the wavelength calibration, such as arc lamps, night/day sky.)
+.le
+.ls 4 d.
+Compute improved relationship between wavelength and pixel number.
+.le
+.ls 4 e.
+Perform 2.c. and 2.d. for all other spectral entries in the wavelength
+calibration data group.
+.le
+.ls 4 f.
+Compute relationship for wavelength as a function of pixel number and time (or
+zenith distance, or some other flexure parameter) as deduced from 2.e.
+.le
+.ls 4 g.
+Apply inverse of wavelength function to a data group. This requires
+interpolation of the data at pixels having fixed steps in wavelength.
+The start wavelength and the step size must be user definable.
+The interpolation may be via a polynomial of a user specified order (typically
+1 to 5), or a more sophisticated interpolator. The linearization
+in wavelength may also be a simple rebinning of the data to exactly preserve
+photon statistics.
+.le
+.le
+.ls 4 3.
+Field flattening.
+.ls
+Pixel to pixel sensitivity variations and other small scale
+fluctuations are removed by dividing the object spectra by the spectrum of
+a continuum source. The latter spectrum should have a very high
+signal-to-noise ratio so as not to introduce additional uncertainties
+into the data.
+
+If the spectrum of the continuum source has much low frequency
+modulation,
+it may be necessary to filter these variations before the division is performed.
+Otherwise fluctuations not characteristic
+of the instrument response may be introduced, and may be difficult to remove
+during the subsequent flux calibration process.
+.le
+.le
+.ls 4 4.
+Sky Subtraction
+.ls
+Except for extremely bright sources, all spectra require that the
+spectrum of the night sky be removed. In some cases, sky will
+be the dominant contributor to the raw spectrum.
+Sky subtraction is a simple subtraction operation and can be
+accomplished with the image calculator tools.
+.le
+.le
+.ls 4 5.
+Extinction Correction
+.ls
+The effects of the Earth's atmosphere produce a wavelength dependent
+reduction of flux across the spectrum. The extinction function
+is approximately known from extensive photometric measurements
+obtained at the observatory over a period of many years. But on
+any given night this function may deviate from the average, sometimes
+significantly. If the spectroscopic observer has acquired the necessary
+data, it is possible to solve for the extinction function directly.
+
+Therefore, it should be possible for the user to either derive the
+extinction function, input a user-defined function, or use the
+standard average function, and subsequently correct spectra for the
+effects of the atmosphere as described by that function and the effective
+observing airmass; the form of this correction is sketched after this
+outline. (Note that because exposures may be quite long, the
+effective airmass must be calculated as a function
+of position on the sky.)
+.le
+.le
+.ls 4 6.
+Flux Calibration (Correction for Instrument Response)
+.ls
+By observing objects having known wavelength dependent flux
+distributions, it is possible to determine the sensitivity
+variations of the instrument as a function of wavelength.
+Usually several standards are observed for each group of data
+and these must be averaged together after corrections for
+"grey shift" variations (wavelength independent flux reductions
+such as those introduced by thin clouds).
+
+Although the actual flux of the standards is generally known only
+for a limited selection of wavelengths, the instrument response
+usually varies smoothly between those wavelengths and a smooth
+interpolator generally provides satisfactory calibration values
+at intermediate wavelengths.
+
+In some cases, the system sensitivity response may be known
+from other observations, and the user will be allowed to directly
+enter the sensitivity function.
+.le
+.le
+
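+As an illustration of step 5, if the extinction function is expressed in
+the conventional units of magnitudes per unit airmass, k(w), and the
+effective airmass of the observation is X, the corrected flux takes the
+usual photometric form
+
+.br
+    Fc(w) = Fo(w) * 10 ** [0.4 k(w) X],
+.br
+
+where Fo(w) is the observed flux and Fc(w) is the flux corrected to
+outside the atmosphere. This form is given only as an illustration;
+the tasks described below define the actual implementation.
+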
+The above reduction path is primarily tuned to IIDS/IRS style data.
+Other instruments may require additional or alternate steps.
+It may be necessary for multiaperture Cryocam spectra, for example,
+to undergo an additional hole to hole sensitivity correction
+based on the total sky flux through each hole.
+
+The tasks performing the procedures outlined above will be described
+in more detail in the following discussion.
+
+.sh
+COINCIDENCE CORRECTION
+.ih
+NAME
+coin_cor -- Correct specified spectra for photon coincidence
+.ih
+USAGE
+coin_cor [filename, files, destination, dead_time]
+.ih
+DESCRIPTION
+The spectra specified by the root filename and the files parameter
+are corrected for photon counting losses due to detector dead-time.
+The corrected spectra are written to filenames having the root
+specified by the destination.
+
+The correction, if typical of photomultiplier discriminators,
+is usually of the form:
+
+.br
+ Co(i) = C(i) exp[C(i) dt],
+.br
+ dt = t/T,
+.br
+
+where Co(i) is the corrected count at pixel i, C(i) is the raw count,
+t is the detector/discriminator dead-time, and T is the
+exposure time at pixel i.
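+
+As an illustration with purely hypothetical numbers, a raw count of
+C(i) = 10000 in a 600 second exposure with a dead-time of t = 0.0014
+seconds gives dt of about 2.3e-6, a correction factor of exp(0.023),
+and Co(i) of approximately 10236, a correction of roughly 2.4 percent.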
+
+Clearly, the correction factor can become extremely large when the
+count rate, C(i)/T, is large compared with the dead-time, t.
+The above formula cannot be expected to
+exactly remove the effects of undetected photo-events when
+large corrections are required.
+
+The exposure time will be read from the image header.
+If no value exists, or if the value is less than or equal to
+zero, the value will be requested from the standard input.
+
+Because each detector may have unique coincidence properties,
+this routine may be package dependent.
+.ih
+PARAMETERS
+.ls 4 filename
+See the definition of this parameter under RFITS.
+.le
+.ls 4 files
+See the definition of this parameter under RFITS.
+.le
+.ls 4 destination
+The IRAF filename providing the root for the name of the result
+spectra. The files parameter, if specified, will be used for the
+suffix. If the filename parameter is actually a data group name,
+the destination name will be used to create a new data group
+containing spectra having IRAF filenames with the destination
+group name as a root and a suffix starting with 1 and incremented for
+each converted spectrum.
+.le
+.ls 4 dead_time
+The value of this parameter, in seconds, represents the detector
+dead-time.
+.le
+.ls 4 print_header
+If this switch is set, a short header will be printed on the
+standard output for each spectrum corrected. (default = yes)
+.le
+.ls 4 exposure
+This parameter should be entered into the image header. If it is not
+present or not realistic, the value is requested from the standard input.
+.le
+
+.sh
+WAVELENGTH LINEARIZATION
+
+A package of routines is required to perform the operations
+leading to linearized data. These include:
+.ls 4 1. Spectral line list definition and editing facility
+.le
+.ls 4 2. Manual line identifier using graphics cursor.
+.le
+.ls 4 3. Automatic line identifier using preliminary identifications
+from the manual identifier and locating lines from the predefined list.
+.le
+.ls 4 4. Computation of dispersion relationship as a function of
+pixel coordinate and a flexure parameter, probably zenith distance.
+.le
+.ls 4 5. Linearization of spectra according to dispersion relation.
+Correction can be to either a linear or logarithmic dispersion in
+the pixel coordinate.
+.le
+
+Perhaps the most critical aspect of determining the dispersion
+relation is the algorithm for locating spectral line centers.
+A variety of techniques are available, and some testing will
+be required before adopting a standard scheme. Probably several
+algorithms will be available and switch selectable at the command
+level.
+
+.sh
+LINE LIST PREPARATION
+.ih
+NAME
+line_list -- Create a new line list, or modify an existing one
+.ih
+USAGE
+line_list [filename, option]
+.ih
+DESCRIPTION
+The line list specified by the IRAF filename parameter will be
+either created, listed, or modified according to the option
+given. The IRAF database facility will be used to manage the
+line list file.
+
+Each entry within the list will contain an identification tag (e.g. HeII),
+a reference value (e.g. wavelength, frequency, wavenumber), and a weighting
+value such as 1.0 or 2.0 to be used later in the least-squares fitting.
+An optional descriptive header may be associated with the line list.
+(e.g. "HeII arc from 3500 to 11,000A")
+
+Either the header, entry identifier or value may be changed
+if the modify option is specified. Deletion or addition of
+entries is also possible with the appropriate option flags
+specifications.
+.ih
+PARAMETERS
+
+.ls 4 filename
+The IRAF filename to be assigned to the line list. The list will be
+referenced by this name thereafter.
+.le
+.ls 4 option
+This string parameter determines the action of the line list task.
+If no option is specified, the default action is to list the
+specified line list on the standard output if the line list
+exists; if it does not exist, a new line list will be created
+with the given name.
+.ls 4 = create
+The identifications and values for the line list are read from
+the standard input on a record by record basis. Each input
+record contains data for one line according to the format below
+(an example record is given after this list of options):
+.br
+.ls 4 identification value
+.le
+.le
+.ls 4 = header
+A descriptive header is read from the standard input.
+.le
+.ls 4 = list (default)
+The line list is listed on the standard output.
+.le
+.ls 4 = add
+Additional entries to the list are read from the standard input.
+.le
+.ls 4 = delete
+The entries defined by the values read from the standard input
+are deleted from the line list. The entries deleted will be those
+having values nearest the entered value, unless the absolute
+difference from the listed value is too large. For example, one
+can enter 5015 to delete the helium line at 5015.675, but entering
+5014 would result in an error message that no match could be found.
+.le
+.ls 4 = id
+The entries defined by values entered as for delete will be modified.
+Input is expected in the format:
+.br
+approxvalue newidentifier
+.le
+.ls 4 = value
+As for option = id except that the input format contains
+the newvalue instead of the newidentifier.
+.le
+.ls 4 = weight
+As for option = id except that the input format contains the newweight
+instead of the newidentifier.
+.le
+.le
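+
+As an illustration of the create option's input format, a record for the
+helium line mentioned above under the delete option might read:
+.br
+    HeI 5015.675
+.br
+where the identification tag shown is only illustrative.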
+
+.sh
+MANUAL LINE IDENTIFICATION
+
+This routine provides the option of manually identifying the locations
+of spectral features by either setting a graphics cursor interactively,
+or by entering a list of feature positions.
+
+The primary uses for this routine are to identify features of known
+wavelength in preparation for a dispersion solution, and also to
+identify features in linearized spectra for velocity measurements.
+
+.ih
+NAME
+mlinid -- Manually identify line features in a spectrum
+.ih
+USAGE
+mlinid [filename, files]
+.ih
+DESCRIPTION
+A list file, containing the locations of spectral features and their
+associated reference value (e.g. wavelength, frequency, wavenumber),
+is created for each of the spectra specified by the IRAF filename
+parameter and files string.
+If invoked as an interactive task from a graphics terminal,
+the spectra will be displayed and cursor input requested to ascertain
+the approximate position of the feature. An improved position will
+be obtained via one of the line centering algorithms, and
+a request will be made for the reference value of the feature.
+The requests continue until EOF is detected.
+The name of the created list file is added to the spectral image
+header.
+
+Positions of features are given in the coordinate system established
+by the standard image header entries CRPIX and CDELT,
+which define the reference pixel and the
+pixel to pixel distance. For raw spectra these values simply define
+the pixel position of the feature. For dispersion corrected spectra
+these values define the position of the feature in wavelength units.
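+
+As an illustration, for a dispersion corrected spectrum whose header
+carries the starting wavelength W0 and the constant per pixel increment
+WPC described in the header documentation, the position of pixel i
+corresponds to
+.br
+    position(i) = W0 + (i - 1) * WPC.
+.br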
+
+If invoked as a background task, or from a non-graphics terminal,
+additional requests for the cursor x-coordinate and intensity
+will be made from the standard input.
+
+The procedure is repeated for all specified spectra.
+
+Because the dispersion solution may be a function of an additional
+instrument dependent parameter (e.g. zenith distance),
+the driving package script can indicate the header entry to be
+used as the second parameter. Values for this parameter, if present,
+will be written to the output list file.
+.ih
+PARAMETERS
+
+.ls 4 filename
+See the definition of this parameter under RFITS.
+.le
+.ls 4 files
+See the definition of this parameter under RFITS.
+.le
+.ls 4 cur (x,y)
+This is a list structured parameter of type "graphics cursor".
+The list contains the approximate values of the pixel
+coordinate for the spectral features to be identified
+and the intensity value of the continuum at the feature. If the
+task is invoked from a graphics terminal in an interactive mode,
+values for this parameter will be read from the terminal's
+graphics cursor.
+.le
+.ls 4 value
+This is a list structured parameter containing the reference values
+for the spectral features to be identified. If the task is invoked in
+an interactive mode, the user will be prompted for these values.
+.le
+.ls 4 center_option
+This string parameter controls which algorithm is to be used during
+the improved centering phase of the process. (default = cg)
+.ls 4 = cg
+This specifies a center of gravity algorithm defined as the
+first moment of the intensity above the continuum level
+across the spectral feature (an illustrative form of this moment
+is given after the parameter list below).
+The integrals are evaluated using the trapezoidal rule and
+the intensity will be weighted by the square root of the intensity
+if the switch parameter cgweight is set to yes. The integral
+is evaluated from the approximate position defined by the x cursor
+position plus and minus the number of pixels specified by the
+parameter cgextent.
+.ls 4 cgweight
+This switch defines whether a weighted moment is used in the
+center of gravity centering algorithm. (default = yes)
+.le
+.ls 4 cgextent
+This integer parameter defines the limits of the integrals in the
+center of gravity centering algorithm. The integral extends from
+the approximate position minus the extent to the approximate position
+plus the extent in units of pixels. (default = 5).
+.le
+.le
+.ls 4 = parabola
+This specifies that the centering algorithm is to be a parabolic
+fit to the central 3 pixels. The improved center is taken as the
+center of the parabola. The central 3 pixels are defined as the
+most extreme local pixel plus and minus one pixel. The most extreme
+local pixel is that pixel nearest the approximate center having the
+greatest deviation from the local average value of the spectrum. The
+extent of "local" is taken as plus and minus the parameter parextent.
+.ls 4 parextent
+This integer parameter defines the extent in units of pixels
+of the search for a local extreme pixel. (default = 3)
+.le
+.le
+.ls 4 = gauss
+(This algorithm will not be implemented in the initial system release.)
+This specifies that the centering algorithm is to be a Gaussian
+fit to the region near the approximate center. The fit is
+made to a region specified by the parameter gextent. Because
+this is a three parameter non-linear least-squares fit
+(center, width, peak intensity), it is likely to
+be slow. It may also produce poor results with noisy data
+although centering on high signal to noise data should be
+excellent.
+.ls 4 gextent
+This integer parameter specifies the extent in pixels of the Gaussian fit.
+It may be necessary to include a significant region of continuum.
+(default = 9)
+.le
+.le
+.ls 4 = none
+If this option is chosen, no improvement to the approximate center
+will be made. This may be useful for asymmetric and weak features
+where the other techniques can be systematically incorrect.
+.le
+.le
+.ls 4 second_order
+This string parameter defines the name of the image header entry to be
+used as the second order correction parameter in the dispersion
+solution. Values for this parameter, if present, are read from the image header
+and written to the output list file. Examples of values are zenith_distance,
+sidereal_time, instr_temp. (default = none)
+.le
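+
+The following is a minimal sketch, in Python, of the cg and parabola
+centering options described above. It is illustrative only (the task
+itself is not written in Python); the spectrum is assumed to be an array
+of pixel values, the local continuum is crudely taken as the minimum
+within the window, and uniform pixel weights replace the trapezoidal rule.
+.nf
+    import numpy as np
+
+    def cg_center(spectrum, x0, cgextent=5, cgweight=True):
+        # First moment of the intensity above a crude local continuum,
+        # taken over x0 +/- cgextent pixels (the "cg" option).
+        lo, hi = int(x0) - cgextent, int(x0) + cgextent + 1
+        x = np.arange(lo, hi)
+        y = spectrum[lo:hi] - np.min(spectrum[lo:hi])
+        if cgweight:
+            y = y * np.sqrt(y)
+        return float(np.sum(x * y) / np.sum(y))
+
+    def parabola_center(spectrum, x0, parextent=3):
+        # Parabola through the most extreme local pixel and its two
+        # neighbors; the vertex is the improved center ("parabola" option).
+        lo, hi = int(x0) - parextent, int(x0) + parextent + 1
+        seg = np.asarray(spectrum[lo:hi], dtype=float)
+        p = lo + int(np.argmax(np.abs(seg - seg.mean())))
+        y0, y1, y2 = spectrum[p - 1], spectrum[p], spectrum[p + 1]
+        d = y0 - 2.0 * y1 + y2
+        return float(p) if d == 0 else p + 0.5 * (y0 - y2) / d
+.fi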
+
+.sh
+AUTOMATIC LINE IDENTIFICATION
+
+This task allows a user to locate a set of spectral features defined
+in a previously prepared list.
+
+.ih
+NAME
+alinid -- Automatically locate spectral features in a spectrum
+.ih
+USAGE
+alinid [filename, files, mfilename, mfiles, list]
+.ih
+DESCRIPTION
+A list file is created for each of the spectra specified by the
+IRAF filename and files parameters. The file will contain
+the positions of the features defined in the line list file
+specified by the list parameter. The name of the list file
+will be added to the spectral image header.
+
+A preliminary estimate of the
+relationship of feature position as a function of feature
+wavelength is obtained from the list file(s) created by the
+task MLINID and defined by the parameters mfilename and mfiles.
+A single preliminary estimate may be applied to a number of
+spectra by specifying a null mfiles string. Otherwise,
+a one-to-one correspondence is assumed between preliminary
+list files and spectra. If the entry for mfilename is also null,
+the linear dispersion relation for the pixel coordinate contained
+in the image header will be used. This provides the option
+of locating features in linearized spectra.
+
+The initial position estimate is improved using one of the centering
+algorithms defined by the center_option parameter and then
+written to a list file. Also written to the list file will be
+the feature's reference value (e.g. wavelength), weight,
+identification string, and the acceptability of the line.
+Acceptability is noted as either accepted, set, deleted, or not
+found (see below).
+
+If the task is invoked from a graphics terminal as an interactive
+task, the interact switch may be set to yes.
+Then each spectrum will
+be displayed in segments expanded about each feature with the
+automatically defined center marked. The user can then accept
+the given position, mark a new center, or declare the line
+unacceptable.
+
+If the display switch is set, the spectrum is displayed
+and the features marked.
+
+If the task is invoked as a background task, or if the
+user terminal is non-graphics, then the display and interact
+switches cannot assume values of yes.
+.ih
+PARAMETERS
+.ls 4 filename
+See the definition of this parameter under RFITS
+.le
+.ls 4 files
+See the definition of this parameter under RFITS
+.le
+.ls 4 mfilename
+The root for the spectra names used to define the preliminary
+relationship between spectral feature coordinate and reference
+value. The mfiles string parameter is used to define the
+suffix of the spectral name. If this parameter is null, the
+preliminary relationship is assumed to be linear and defined
+by the standard image header entries CRPIX and CDELT.
+.le
+.ls 4 mfiles
+This string parameter serves the same purpose for mfilename
+as the files parameter serves for filename. Note that if this
+parameter is null, the single spectrum defined by mfilename
+is used to define the preliminary relationship for all
+spectra defined by filename and files.
+.le
+.ls 4 list
+This parameter specifies the IRAF file name containing the
+spectral line list to be scanned for features. (See the
+task LINE_LIST)
+.le
+.ls 4 interact
+If this switch is set to yes and the task is invoked interactively
+from a graphics terminal, the spectrum will be displayed on the
+terminal. Each feature will be marked with its computed center
+and the user can type one of the following single keystrokes:
+.ls 4 a
+to accept the displayed position
+.le
+.ls 4 s
+to set the cursor to the desired position
+.le
+.ls 4 d
+to delete the displayed feature from the line list during this
+invocation of the task
+.le
+.ls 4 b
+to reset the operational mode to a "batch" environment where
+no display or interaction is desired
+.le
+.ls 4 p
+to reset the operational mode to a "passive" environment where
+the spectra are displayed and marked, but no interaction is desired
+.le
+.le
+.ls 4 display
+If this switch is set to yes, and the task is invoked from
+a graphics terminal, the spectrum will be displayed and the
+identified lines marked for the user's inspection. No
+interaction is allowed unless the interact switch is also set to yes.
+(default = yes)
+.le
+.ls 4 center_option
+See the description of this parameter under MLINID.
+.le
+.ls 4 second_order
+See the description of this parameter under MLINID.
+.le
+
+.sh
+DISPERSION SOLUTION
+
+After several spectral features have been identified, either
+manually with MLINID or automatically with ALINID, the relationship
+between feature reference value and pixel coordinate can be calculated.
+The dispersion relation may require a second order correction
+to account for variations as a function of some additional
+parameter, such as zenith distance or time of day.
+
+.ih
+NAME
+disp_sol -- Determine the dispersion relation for a set of spectra.
+.ih
+USAGE
+disp_sol [filename, files, order, global]
+.ih
+DESCRIPTION
+The list files containing the positions and reference values for
+features in the specified spectra are combined to solve for the
+dispersion relation by a polynomial least-squares fit to the lists.
+The solution can include a second order
+correction parameter which is also contained in the list file.
+
+The solution takes the form of a polynomial in the pixel
+coordinate having the specified order. The second order correction
+parameter is also fit by a polynomial. (The choice of a polynomial
+applies to the initial release. Additional forms, selectable by
+parameter, of the solution may be available later.)
+The polynomial coefficients are stored in the spectral image header
+if the store_coeffs switch is set to yes and the spectrum does not already
+contain a solution. If a solution already exists, the user is
+asked for confirmation to overwrite the solution, unless the overwrite
+switch is set to yes.
+
+If filename is the name of a data group, all line list files for
+spectra in that data group are combined into the solution.
+
+If invoked as an interactive task from a graphics terminal,
+a representation of the solution will be displayed and the user
+will be allowed to alter the weights of the line entries.
+If invoked from a non-graphics terminal, the representation
+will be in a tabular format (also available at a graphics terminal)
+for inspection and alteration. If invoked as a background task,
+an attempt will be made to reject discrepant points.
+
+The solution is made using all available line lists combined
+into a single data set if the global switch is set to yes.
+If global is set to no, each spectrum is treated as an
+independent data set.
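+
+A minimal sketch, in Python, of the basic least-squares fit; the pixel
+centers and wavelengths below are hypothetical, and in the task they are
+read from the MLINID/ALINID list files.
+.nf
+    import numpy as np
+
+    pix = np.array([101.3, 410.8, 733.2, 980.5])     # line centers (pixels)
+    wav = np.array([3650.2, 4046.6, 4358.3, 4678.2]) # reference values
+
+    order = 2                               # the "order" task parameter
+    coeffs = np.polyfit(pix, wav, order)    # least-squares polynomial fit
+    resid = wav - np.polyval(coeffs, pix)   # residuals shown to the user
+.fi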
+.ih
+PARAMETERS
+.ls 4 filename
+See the definition of this parameter under RFITS.
+.le
+.ls 4 files
+See the definition of this parameter under RFITS.
+.le
+.ls 4 order
+The order of the polynomial for a least-squares fit to the
+dispersion solution. If the specified order exceeds the number
+of free parameters, the order will be reset to the maximum
+allowable. (default = 1 --> linear).
+.le
+.ls 4 global
+This switch determines if the data from all the specified spectra are
+to be treated as a single large data set. This is appropriate if the
+data represent a single congruent "setup". But if the data represent
+several different configurations, such as for multiaperture data,
+the global switch should be set to no. Note that if global is no, then
+no second order parameter solution is possible.
+.le
+.ls 4 second_order
+This parameter specifies the order for the fit to the second
+order parameter. The limit described for the order parameter
+applies. (default = 0 --> no second parameter solution).
+.le
+.ls 4 interact
+If this switch is set to yes and the task is invoked interactively
+from a graphics terminal, the residuals of the solution will be displayed
+on the terminal. The user can type one of the following keystrokes:
+.ls 4 a
+to accept the current solution. The parameters of the fit
+are written into the headers of the spectra contributing to the fit.
+.le
+.ls 4 e
+to exit without saving the solution
+.le
+.ls 4 w
+to reset the weight of the point near the cursor positioned by the user.
+The user is then prompted for the new weight which may be set to zero
+to delete the point from the solution.
+.le
+.ls 4 t
+to display the solution parameters in tabular form
+.le
+.ls 4 o
+to specify a new order for the solution
+.le
+.ls 4 s
+to specify a new order for the second order parameter solution
+.le
+.ls 4 b
+to revert to batch mode to process the remainder of the solutions.
+This is only meaningful if the global switch is set to no.
+.le
+.ls 4 p
+to revert to passive mode as for ALINID. This is only meaningful
+if the global switch is set to no
+.le
+.le
+.ls 4 store_coeffs
+If this switch is set to yes, the dispersion solution polynomial
+coefficients will be written into the image header as special
+header elements. Otherwise, the solution is discarded. (default = yes)
+.le
+.ls 4 overwrite
+If this switch is set to yes, any existing dispersion solution contained
+in the image header will be overwritten without any request for confirmation
+from the user. If set to no, the user is asked if overwriting of the solution
+is acceptable. If no prior solution exists, this switch has no meaning.
+(default = no)
+.le
+
+.sh
+DISPERSION CORRECTION
+
+After the dispersion relation has been determined, the spectra
+are usually re-binned to create spectra having a linear
+relationship with wavelength. Although this is not always
+done, nor is it always desirable, subsequent processing
+is often simplified greatly by having to deal with only
+linearized data.
+
+.ih
+NAME
+disp_cor -- Linearize spectra having dispersion relation coefficients
+.ih
+USAGE
+disp_cor [filename, files, destination, option]
+.ih
+DESCRIPTION
+The spectra specified by the root filename and the files parameter
+are corrected for deviations from a linear wavelength relationship.
+The corrected spectra are written to filenames having the root
+specified by the destination parameter.
+
+The correction is performed by solving the inverse relationship,
+pixel number as a function of wavelength, at equal increments in wavelength.
+The new starting wavelength and increment are optionally specified
+by the parameters start and increment. If not specified, the current
+wavelength of the first pixel will be taken as the starting wavelength
+and the increment will be chosen to exactly fill the length of the
+current spectrum. The spectrum will be padded with INDEF on either
+end if the specified parameters request a larger spectral window than
+actually exists.
+
+The actual re-binning can be performed using one of several algorithms.
+The most efficient minimally smoothing algorithm to be available in the
+initial release is the fifth order polynomial interpolation.
+The most efficient count preserving algorithm is the simple partial-pixel
+summer.
+
+The interpolation can be either linear in wavelength or in the logarithm
+of wavelength. The latter is useful for subsequent radial velocity
+analyses. The choice is specified by the logarithm switch.
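+
+A minimal sketch, in Python, of the linear-interpolation option; the
+per-pixel wavelengths are assumed to come from evaluating the stored
+dispersion solution, and NaN stands in for INDEF padding.
+.nf
+    import numpy as np
+
+    def disp_cor(flux, wave, start=None, increment=None, logarithm=False):
+        # Rebin flux, sampled at the increasing wavelengths "wave", onto
+        # an equally spaced grid of the same length.
+        n = flux.size
+        w0 = wave[0] if start is None else start
+        if logarithm:
+            grid = np.logspace(np.log10(w0), np.log10(wave[-1]), n)
+        else:
+            dw = (wave[-1] - w0) / (n - 1) if increment is None else increment
+            grid = w0 + dw * np.arange(n)
+        # Points outside the observed range are padded with NaN (INDEF).
+        return grid, np.interp(grid, wave, flux, left=np.nan, right=np.nan)
+.fi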
+.ih
+PARAMETERS
+.ls 4 filename
+See the definition of this parameter under RFITS.
+.le
+.ls 4 files
+See the definition of this parameter under RFITS
+.le
+.ls 4 destination
+See the definition of this parameter under COIN_COR.
+.le
+.ls 4 option
+This parameter specifies the algorithm to be used for the
+re-binning operation. The initial release will contain the
+following options:
+.ls 4 = linear
+to use a linear interpolation
+.le
+.ls 4 = poly
+to use a fifth order polynomial
+.le
+.ls 4 = sinc
+to use a sinc function interpolator
+.le
+.ls 4 = sum
+to use partial pixel summation
+.le
+.le
+.ls 4 start
+This parameter specifies the wavelength at which the corrected
+spectrum is to begin. The wavelength of the first pixel will
+be assigned this value. This parameter, combined with the increment
+parameter, allows data taken on different nights
+or with different instruments to be forced to be congruent.
+(default = value at first pixel)
+.le
+.ls 4 increment
+This parameter specifies the pixel to pixel wavelength (or logarithm of
+wavelength) increment
+that is to be used during the linearization process.
+(default = [wavelength at last pixel minus wavelength at first pixel]
+divided by [number of points in spectrum - 1])
+.le
+.ls 4 logarithm
+If this switch is set to yes, the linearization occurs with equal
+increments in the logarithm of wavelength. Otherwise, equal
+increments of wavelength are used. (default = no)
+.le
+.ls 4 print_header
+See the definition of this parameter for COIN_COR.
+.le
+
+.sh
+FIELD FLATTENING
+
+Most detectors exhibit variations in sensitivity across the field
+of interest. These are removed by dividing all observations by
+the spectrum of a smooth continuous source, such as an incandescent
+lamp. In order that these lamps, which usually have a low color
+temperature, produce sufficient energy in the blue and ultraviolet,
+they are often enclosed in a quartz rather than a glass bulb.
+Thus, the field flattening operation is often referred to as
+"quartz division".
+
+The operation is of marginal value unless the continuum source is
+observed properly. First, a very high signal-to-noise ratio per
+pixel is required. For certain detectors and applications this
+may not be possible in a reasonable amount of time. Second, the
+continuum source should not have any significant variations
+across small regions of the spectrum (high frequency "bumps").
+Otherwise the division will add these variations into the spectrum.
+
+There are basically two aspects to flat fielding spectra. The first
+is the removal of pixel-to-pixel sensitivity variations. The second
+is a more global pattern due to non-uniform illumination and
+spatial and wavelength sensitivity variations across the detector.
+
+The very high frequency pixel-to-pixel variations are easily handled
+by a straightforward division of the observations by the continuum
+spectrum.
+
+The second problem is usually postponed in one-dimensional data
+reductions and included in the
+solution for the system sensitivity by observing standard stars.
+This aspect of the problem is adequately handled in this fashion
+and no special operators are provided in this package.
+
+If the continuum source exhibits large low frequency variations
+across the spectrum, it may be desirable to filter these.
+This is most easily done by fitting a moderately high order
+polynomial through the spectrum, and then dividing the polynomial
+representation into the original continuum spectrum. The result
+is a flat spectrum having an average value of unity and
+containing only the pixel-to-pixel sensitivity variations.
+
+Finally, it should be noted that the field flattening operation
+is most properly performed prior to the wavelength linearization
+of the spectra because the linearization process can smooth
+pixel-to-pixel variations.
+
+Flat fielding consists of two logical operations. The first
+is the solution for a continuum spectrum with the low frequency
+variations removed (CR_FLAT). It is assumed that multiple observations
+of the continuum source have been already averaged (using the
+image calculator program, for example).
+
+The second operation is the actual field flattening of the object
+spectra (FLT_FIELD).
+
+.ih
+NAME
+cr_flat -- Create a flat field spectrum
+.ih
+USAGE
+cr_flat [filename, destination]
+.ih
+DESCRIPTION
+The continuum spectrum specified by filename is corrected for
+low frequency spectral variations. Several algorithms may be
+available. The initial release will contain only a polynomial
+fitting technique. A Fourier filtering algorithm may be added
+at a later date.
+
+The spectrum is fit by a polynomial in the pixel coordinate
+and the polynomial is divided into the original spectrum.
+Discrepant pixels may be rejected and the solution re-iterated.
+
+If invoked as an interactive task from a graphics terminal, the
+resultant spectrum is displayed and the user may alter the
+solution parameters if the interact switch is set to yes.
+If invoked from a non-graphics terminal, sufficient information
+concerning the fit is written to the terminal to allow
+the user to judge the quality of the fit and then alter the
+solution parameters.
+
+If invoked as a background task, or if the interact switch is set
+to no, default parameters will be assumed.
+
+The parameters of the fit are added to the image header for
+the corrected spectra.
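+
+A minimal sketch, in Python, of the polynomial option with iterative
+pixel rejection; the parameter names mirror order, reject, and niter.
+.nf
+    import numpy as np
+
+    def cr_flat(continuum, order=8, reject=2.2, niter=2):
+        # Fit a polynomial, iteratively rejecting discrepant pixels, then
+        # divide the fit into the data; the result averages about unity.
+        x = np.arange(continuum.size, dtype=float)
+        good = np.ones(continuum.size, dtype=bool)
+        for _ in range(niter + 1):
+            fit = np.polyval(np.polyfit(x[good], continuum[good], order), x)
+            resid = continuum - fit
+            newgood = np.abs(resid) < reject * resid[good].std()
+            if np.array_equal(newgood, good):
+                break
+            good = newgood
+        return continuum / fit
+.fi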
+.ih
+PARAMETERS
+.ls 4 filename
+The IRAF filename containing the spectrum of the continuum
+source. If this is a data group name, then all spectra
+in the group will be corrected.
+.le
+.ls 4 destination
+The IRAF filename into which the resultant corrected
+spectrum is written. If the source filename is a data group,
+then the destination will be a new data group containing
+the names of the corrected spectra. The names will be
+assigned using the destination as a root name, and the
+ordinal of the spectrum in the list as a suffix.
+.le
+.ls 4 option
+This string parameter specifies the algorithm to be used
+in the correction process. Currently only option = poly
+is recognized.
+.le
+.ls 4 order
+This integer parameter specifies the initial order of the
+polynomial fit. (default = 8)
+.le
+.ls 4 reject
+This parameter specifies the number of standard deviations
+beyond which pixels are to be rejected. If the task
+is interactive, pixel rejection is performed only on command.
+If invoked as a background task, rejection is iterated
+until no further pixels are rejected, or until the iteration
+count has been attained (see parameter niter). (default = 2.2)
+.le
+.ls 4 niter
+This integer parameter specifies the number of iterations
+to be performed in background mode. It may be set to 0 to
+specify no pixel rejection. (default = 2).
+.le
+.ls 4 interact
+If this switch is set to yes and the task is invoked as
+an interactive task, the user can alter the fit parameters
+order, reject, and niter. If at a graphics terminal, the resultant
+spectrum is displayed and the user can command the operation
+with the following single keystrokes:
+.ls 4 a
+to accept the solution
+.le
+.ls 4 o
+to change the order of the fit
+.le
+.ls 4 r
+to reset the reject parameter
+.le
+.ls 4 n
+to reset the niter parameter
+.le
+.ls 4 b
+to reset the operational mode to a batch environment
+.le
+.ls 4 p
+to reset the operational mode to a passive environment
+.le
+.le
+
+If at a non-graphics terminal, the fit parameters are
+written to the terminal so that the user may assess the quality
+of the fit. A request for one of the interactive commands
+is then issued and the user may proceed as if on a graphics
+terminal.
+.le
+
+.ih
+NAME
+flt_field -- Correct spectra for pixel-to-pixel variations
+.ih
+USAGE
+flt_field [filename, files, flatname, destination]
+.ih
+DESCRIPTION
+The spectra specified by the IRAF filename parameter and the files
+string are divided by the flat field spectra specified by
+the parameter flatname. If filename and flatname are data group names,
+the division is performed on a one-for-one basis.
+
+This operation is little more than a simple division. An image
+header entry is added indicating that flattening by the
+appropriate spectrum has been performed.
+.ih
+PARAMETERS
+.ls 4 filename
+See the definition of this parameter under RFITS.
+.le
+.ls 4 files
+See the definition of this parameter under RFITS.
+.le
+.ls 4 flatname
+This string parameter specifies the name of the flat field
+spectrum, or spectra if a data group name.
+It is not necessary that the flat field spectra be corrected
+for low frequency spectral variations.
+It is required that the spectra be congruent with the spectra
+to be flattened; that is, all spectra must have the same
+length, reference pixel, and pixel-to-pixel increment.
+.le
+.ls 4 destination
+See the definition of this parameter under COIN_COR.
+.le
+.ls 4 print_header
+See the definition of this parameter under COIN_COR.
+.le
+
+.sh
+EXTINCTION CORRECTION/FLUX CALIBRATION
+
+At each wavelength (lambda) along the spectrum, the observed
+flux (fobs) must be corrected for extinction (k) due to the
+Earth's atmosphere and the system sensitivity (S) to obtain
+a true flux (f) above the atmosphere.
+.sp 1
+fobs(lambda) = f(lambda) * exp{-z[k(lambda)+C]} * S(lambda)
+.sp 1
+where z is the path through the Earth's atmosphere during the
+observation and C is an optional "grey" opacity term.
+
+For most observations, the standard extinction function is adequate,
+but occasionally the additive term is beneficial. In rare cases,
+the observer has acquired sufficient high quality data to
+determine the extinction function across the spectral region
+of interest. And in other cases, the user may have a priori
+knowledge of the extinction function.
+
+Observations of standard stars are used to determine
+either the additive constant or a new extinction function,
+and the system sensitivity.
+The two operations, determining the extinction parameters
+and the system sensitivity curve, are therefore intimately
+related.
+
+The process breaks down into four basic operations:
+.ls 4 1.
+Define the standard stars and their observations. (STD_STAR)
+.le
+.ls 4 2.
+Define the extinction solution option and solve for the extinction
+additive term or complete function if necessary. (CREXT_FUNC)
+.le
+.ls 4 3.
+Determine the system sensitivity function. (CRSENS_FUNC)
+.le
+.ls 4 4.
+Remove the effects of the extinction and the system sensitivity
+from the observations. (EXT_COR, SENS_COR)
+.le
+
+These will be described below in more detail.
+
+.ih
+NAME
+std_star -- Define the standard stars to be used for solving the extinction and
+system sensitivity functions.
+.ih
+USAGE
+std_star [fnamelist, filelist, namelist, std_file]
+.ih
+DESCRIPTION
+The spectra defined by the list of filenames and associated files
+contained in the string list parameters fnamelist and filelist
+are compared with the standard flux measurements for the stars
+listed in the string list parameter namelist. The resultant
+table of ratios as a function of wavelength are saved in the
+IRAF file specified by the std_file parameter.
+
+All spectra must be wavelength linearized. The star names given
+in namelist must be in a form similar to that in the IIDS Reduction
+manual. If a star name cannot be matched to the standards contained
+in a calibration file, the user is prompted for additional
+information. The calibration file containing the list of reference
+flux values is specified by the calib_file parameter.
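+
+A minimal sketch, in Python, of forming the ratios for a single star;
+the bandpass centers, widths, and calibration fluxes are hypothetical
+inputs that the task would read from the calib_file.
+.nf
+    import numpy as np
+
+    def std_ratios(wave, flux, band_ctr, band_wid, cal_flux):
+        # Mean observed flux in each calibration bandpass divided by the
+        # tabulated flux for that bandpass.
+        ratios = []
+        for c, w, f in zip(band_ctr, band_wid, cal_flux):
+            inband = (wave >= c - w / 2.0) & (wave <= c + w / 2.0)
+            ratios.append(flux[inband].mean() / f)
+        return np.array(ratios)
+.fi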
+.ih
+PARAMETERS
+.ls 4 fnamelist
+This is a list structured parameter containing the IRAF filenames
+associated with the spectra for each of the standard stars contained
+in the list of starnames defined by the list structured parameter
+namelist. Both these parameters must have the same number of elements.
+The filename specifications are defined as in RFITS.
+.le
+.ls 4 filelist
+This is also a list structured parameter having the same number of
+elements as fnamelist although some may be null.
+The entries are defined as in RFITS.
+.le
+.ls 4 namelist
+This is also a list structured parameter having the same number
+of elements as fnamelist. All elements must exist and have a
+form to be decided on, but probably similar to that given in the IIDS
+Reduction manual, page 36. For example, a typical star name might
+be BD+8 2015, or HILTNER 102. Case will not be significant.
+.le
+.ls 4 std_file
+This string parameter defines the IRAF filename in which the
+results from the standard star observations are stored.
+This file will be used to contain further calibration information
+such as the extinction and sensitivity function for the
+current set of observations.
+.le
+.ls 4 calib_file
+This string parameter defines which of several calibration
+data files are to be accessed for the comparison of the
+observational data to the standard fluxes. Separate tools
+to examine, modify, and create these files are available
+in the utilities package. (default = onedspec$iids.cal)
+.le
+.ls 4 print_header
+If this parameter is set to yes, an informative header
+is listed on the standard output as the standard stars are processed
+(default = yes).
+.le
+
+.ih
+NAME
+crext_func -- Create an extinction function from a set of observations
+.ih
+USAGE
+crext_func [std_file, option]
+.ih
+DESCRIPTION
+The user may specify via the option parameter which of the four
+extinction solutions is to be used. These are:
+.sp 1
+.ls 4 1.
+Adopt standard extinction function (option = standard).
+.le
+.ls 4 2.
+Solve for an additive constant (option = additive).
+.le
+.ls 4 3.
+Solve for extinction function (option = new_function).
+.le
+.ls 4 4.
+Input a tabular extinction function consisting of extinction
+values at specified wavelengths (option = input).
+.le
+.sp 1
+If the first or last options are chosen, the std_file may be empty.
+If the second option is chosen, several observations at
+differing air masses must be included in the file specified by std_file.
+If the third option is chosen,
+at least two standard stars must be included in the list of observations.
+
+The derived extinction function is added to the IRAF file specified
+by the std_file parameter by creating a new spectrum containing the
+function and adding the spectrum name to the std_file.
+The new spectrum will adopt a name having a root from the
+name std_file and a suffix of ".ext". The spectrum is created by
+a spline interpolation through the extinction values.
+
+If invoked as an interactive task from a graphics terminal, the
+derived extinction function is displayed. The user may interactively
+alter the derived extinction values using the graphics cursor.
+If invoked from a non-graphics terminal, the user may alter the
+values by specifying the wavelength and new extinction value
+from the standard input. Interaction may be suppressed by setting the
+interact switch to no.
+
+.ih
+PARAMETERS
+.ls 4 std_file
+See the definition of this parameter under STD_STAR.
+.le
+.ls 4 option
+This parameter specifies which aspects of the extinction solution
+are to be computed. See description section for CREXT_FUNC.
+.le
+.ls 4 interact
+If this switch is set the user may alter the derived extinction values.
+If invoked from a graphics terminal and interact is set to yes, the
+following single keystroke commands may be typed:
+.ls 4 a
+to accept the current solution
+.le
+.ls 4 m
+to modify the extinction value at the cursor wavelength position (cursor-x)
+to the cursor extinction value position (cursor-y).
+.le
+.ls 4 i
+to insert a new wavelength-extinction value pair at the current
+crosshair position.
+.le
+.ls 4 d
+to delete the wavelength-extinction value pair at the current
+cursor wavelength position.
+.le
+.le
+
+.ih
+NAME
+crsens_func -- Create system sensitivity function.
+.ih
+USAGE
+crsens_func [std_file, option]
+.ih
+DESCRIPTION
+The standard star data and extinction function contained in the
+IRAF file specified by the std_file parameter are used to
+compute the system sensitivity as a function of wavelength.
+The derived function is written to the file specified by
+std_file.
+
+There must be at least one standard star observation contained
+in the std_file, unless the parameter option = input.
+This allows the user to enter any function in the
+form of wavelength-sensitivity pairs.
+
+If option = shift, a "grey" shift is applied to each observation as
+necessary to bring relatively faint values up to the brightest,
+to account for possible cloud variations.
+
+If invoked as an interactive task from a graphics terminal,
+and the interact switch is set to yes, the sensitivity values
+from each standard are plotted with any "grey" shift correction
+added. The user may delete or add new points as desired using
+the cursor. If invoked from a non-graphics terminal, a tabular
+list of the solution is presented and additions or deletions
+may be entered through the standard input.
+
+The final function written to the std_file is simply the name of a new
+spectrum derived from a spline fit to the sensitivity
+if the spline switch is set to yes. If spline = no, a linear
+interpolation between sensitivity points will be used.
+The sensitivity spectrum name will be taken from the file name
+given to std_file and with the suffix ".sen".
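+
+A minimal sketch, in Python, of the final spline step; the
+wavelength-sensitivity pairs below are hypothetical, and scipy's cubic
+spline stands in for the spline = yes case.
+.nf
+    import numpy as np
+    from scipy.interpolate import CubicSpline
+
+    wave = np.array([3500., 4000., 4500., 5000., 5500.])  # hypothetical
+    sens = np.array([0.12, 0.35, 0.55, 0.60, 0.58])       # counts per unit flux
+
+    grid = np.linspace(wave[0], wave[-1], 1024)
+    sen_spectrum = CubicSpline(wave, sens)(grid)           # the ".sen" spectrum
+.fi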
+.ih
+PARAMETERS
+.ls 4 std_file
+See the definition of this parameter under STD_STAR.
+.le
+.ls 4 option
+This parameter can assume the following string values:
+.ls 4 = input
+to indicate that the sensitivity function is to be entered as
+wavelength-sensitivity pairs.
+.le
+.ls 4 = shift
+to force a "grey" shift between all standard star spectra to
+account for clouds. This is actually a multiplicative factor
+across each of the affected spectra.
+.le
+.le
+.ls 4 spline
+This switch parameter determines if a spline fit is to be made
+between the sensitivity points (spline = yes), or a linear
+fit (spline = no). (default = yes).
+.le
+.ls 4 interact
+If invoked as an interactive task, the user may alter the sensitivity
+function values. If at a graphics terminal, the sensitivity curve
+is displayed first for each star in the solution. The user may
+add or delete values for any or all stars at a given wavelength.
+Subsequently, the derived average curve is displayed and the user
+may further modify the solution. The following keystrokes are
+available from the graphics terminal:
+.ls 4 a
+to accept the current displayed data (solution).
+.le
+.ls 4 d
+to delete the value at the cross-hairs. If several values
+are very close together, an expanded display is presented.
+.le
+.ls 4 i
+to insert the sensitivity value of the y-cursor at the wavelength position.
+.le
+.ls 4 c
+to "create" new sensitivity values at the wavelength position of the
+x-cursor. Normally sensitivity values are computed only at pre-defined
+wavelengths specified in the calib_file. Additional values
+may be computed by interpolation of the standard star fluxes
+from the calib_file. The name of the calib_file and the spectra
+in the current solution are taken from the std_file.
+.le
+.le
+
+.ih
+NAME
+ext_cor -- Extinction correct specified spectra
+.ih
+USAGE
+ext_cor [filename, files, std_file, destination]
+.ih
+DESCRIPTION
+The spectra specified by the filename and files parameters
+are corrected for atmospheric extinction according to the
+extinction correction function pointed to by the function
+name in std_file. The resulting new spectra are created with the
+root of the destination parameter and having suffixes of
+1 through n corresponding to the n spectra corrected.
+If filename is a data group name, a new data group will be created having
+the name given by the destination parameter.
+
+The correction has the form:
+.sp 1
+f(lambda) = fobs(lambda) / 10**{-0.4 z[a(lambda) + C]}
+.sp 1
+where:
+.ls 4 f(lambda) = the flux at wavelength lambda above the Earth's atmosphere.
+.le
+.ls 4 fobs(lambda) = the flux observed through the atmosphere
+.le
+.ls 4 z = the path length through the atmosphere in units of air masses
+(= 1 at the zenith)
+.le
+.ls 4 a(lambda) = the extinction function at wavelength lambda
+in magnitudes per airmass.
+.le
+.ls 4 C = the additive constant, if any, in magnitudes per airmass.
+.le
+.sp 1
+For each spectrum, the zenith distance must be present in the image header.
+This is assumed to be correct for the beginning of the observation.
+For short exposures, this is adequate for the correction, but for
+long exposures, an effective air mass must be calculated over the
+integration. To do so requires knowledge of the altitude and azimuth
+of the telescope (or equivalently RA, Dec, and sidereal time).
+If these are not present, the approximate air mass calculation will be used
+based solely on the available zenith distance. If the zenith distance
+is not present, user input is requested.
+
+The air mass is calculated according to the following equation for a given
+telescope position (based on Allen p.125,133):
+.sp 1
+z = sqrt{[q sin (alt)]**2 + 2q + 1} - q sin(alt)
+.sp 1
+where:
+.ls 4 q
+= the ratio of the Earth's radius to the atmospheric scale height (approx = 750).
+.le
+.ls 4 alt
+= telescope altitude
+.le
+.sp 1
+If the telescope traverses a significant distance in elevation during
+the integration, an effective correction can be computed as:
+.sp 1
+f(lambda) = fobs(lambda)*T / integral{10**[-0.4 z(t)(a(lambda) + C)]}dt
+.sp 1
+where the integral is over the integration time, T.
+
+This expression can then be evaluated numerically at each wavelength.
+Because this is a time-consuming operation, the switch effective_cor
+can be set to no and then a simplified correction scheme will be used.
+This will be to compute a midpoint airmass if sufficient information
+is available, or simply to use the header airmass otherwise.
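+
+A minimal sketch, in Python, of the air mass formula and the simple
+single-air-mass form of the correction (not the effective correction
+integrated over the exposure).
+.nf
+    import numpy as np
+
+    def airmass(alt_deg, q=750.0):
+        # q is roughly the ratio of the Earth's radius to the atmospheric
+        # scale height; airmass(90.0) evaluates to 1.0.
+        s = q * np.sin(np.radians(alt_deg))
+        return np.sqrt(s * s + 2.0 * q + 1.0) - s
+
+    def ext_cor(fobs, a_mag, z, C=0.0):
+        # fobs and a_mag are sampled at the same wavelengths; 0.4 converts
+        # magnitudes per air mass to a base-ten exponent.
+        return fobs * 10.0 ** (0.4 * z * (a_mag + C))
+.fi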
+.ih
+PARAMETERS
+.ls 4 filename
+See the definition of this parameter under RFITS.
+.le
+.ls 4 files
+See the definition of this parameter under RFITS.
+.le
+.ls 4 std_file
+See the definition of this parameter under STD_STAR.
+.le
+.ls 4 destination
+See the definition of this parameter under COIN_COR.
+.le
+.ls 4 effective_cor
+If this switch is set to yes, the procedure to compute an effective
+corrective term averaged over the integration time will be used.
+Although a slow process, this method is more accurate than
+simply using the correction at any given time of the integration
+such as the midpoint. If set to no, a midpoint zenith distance
+will be computed and used if sufficient header information
+exists. (default = no).
+.le
+.ls 4 print_header
+See the definition of this parameter for COIN_COR.
+.le
+
+.ih
+NAME
+sens_cor -- Correct the specified spectra for system sensitivity
+variations across the spectrum.
+.ih
+USAGE
+sens_cor [filename, files, std_file, destination]
+.ih
+DESCRIPTION
+The spectra specified by the filename and files parameters are
+corrected for instrumental sensitivity by the
+function pointed to by the spectrum name contained in std_file.
+The resulting spectra are stored according to the destination parameter.
+Filename may be a data group name. If so, then destination will be
+a new data group containing the names of the corrected spectra.
+
+This correction is a simple vector multiplication.
+.ih
+PARAMETERS
+.ls 4 filename
+See the definition of this parameter under RFITS.
+.le
+.ls 4 files
+See the definition of this parameter under RFITS.
+.le
+.ls 4 std_file
+See the definition of this parameter under STD_STAR.
+.le
+.ls 4 destination
+See the definition of this parameter under COIN_COR.
+.le
+.ls 4 print_header
+See the definition of this parameter under COIN_COR.
+.le
+.endhelp
diff --git a/noao/onedspec/doc/sys/Review.hlp b/noao/onedspec/doc/sys/Review.hlp
new file mode 100644
index 00000000..5139f630
--- /dev/null
+++ b/noao/onedspec/doc/sys/Review.hlp
@@ -0,0 +1,512 @@
+.help onedspec Sep84 "Spectral Reductions"
+.ce
+\fBOne Dimensional Spectral Reductions\fR
+.ce
+Analysis and Discussion
+.ce
+September 4, 1984
+.sp 3
+.nh
+Introduction
+
+ The \fBonedspec\fR package is a collection of programs for the reduction
+and analysis of one dimensional spectral data. The more general problem of
+operations upon one dimensional images or vectors shall be dealt with elsewhere,
+primarily in the \fBimages\fR and \fBplot\fR packages. The problems of getting
+data in and out of the system are handled by the \fBdataio\fR package, at least
+for the standard data formats such as FITS.
+
+The operators provided in \fBonedspec\fR shall be general purpose and, as far
+as possible, independent of the instrument which produced the data. Instrument
+dependent reductions tailored for specific instruments will be implemented as
+subpackages of the \fBimred\fR (image reductions) package. For example,
+the subpackages \fBiids\fR and \fBirs\fR will be provided in \fBimred\fR for
+reducing data from the KPNO instruments of the same name. The \fBimred\fR
+packages shall call upon the basic operators in \fBonedspec\fR, \fBimages\fR,
+and other packages to reduce the data for a specific instrument.
+
+
+.ks
+.nf
+ iids(etc)
+ imred
+ imredtools
+ onedspec
+ plot
+ tv
+ dataio
+ images
+ dbms
+ lists
+ system
+ language
+
+.fi
+.ce
+Relationship of \fBOnedspec\fR to other IRAF Packages
+.ke
+
+
+The relationship of the \fBonedspec\fR packages to other related packages in
+the IRAF system is shown above. A program (CL script) in a package at one
+level in the hierarchy may only call programs in packages at lower levels.
+The system will load packages as necessary if not already loaded by the
+user. The user is expected to be familiar with the standard system packages.
+
+.nh
+Basic Functions Required for One-Dimensional Spectral Reductions
+
+ The following classes of functions have been identified (in the preliminary
+specifications document for \fBonedspec\fR) as necessary to perform basic one
+dimensional spectral reductions. Only a fraction of the functionality
+required is specific to the reduction of spectral data and is therefore
+provided by the \fBonedspec\fR package itself.
+
+.ls Transport
+Provided by the \fBdataio\fR package, although we do not currently have a
+reader for REDUCER format data tapes. Readers for all standard format
+tapes are either available or planned.
+.le
+.ls Mathematical
+Standard system functions provided by \fBimages\fR (arithmetic, forward and
+inverse FFT, filtering, etc.).
+.le
+.ls Reduction Operators
+The heart of \fBonedspec\fR. Operators are required (at a minimum) for
+coincidence correction, dispersion determination and correction, flat
+fielding, sky subtraction, extinction correction, and flux calibration.
+Operators for flat fielding and sky subtraction are already available elsewhere
+in IRAF. Basic continuum fitting and subtraction is possible with existing
+software but additional algorithms designed for spectral data are desirable.
+.le
+.ls Plotting
+Standard system functions provided by the \fBplot\fR package.
+.le
+.ls Utilities
+Standard system functions provided by the \fBdbms\fR package.
+.le
+.ls Artificial Spectra
+These functions belong in the \fBartdata\fR package, but it is expected that
+prototype operators will be built as part of the initial \fBonedspec\fR
+development.
+.le
+
+.nh
+Data Structures
+
+ Spectra will be stored as one or two dimensional IRAF images embedded in
+database format files. A free format header is associated with each image.
+Spectra may be grouped together as lines of a two dimensional image provided
+all can share the same header, but more commonly each image will contain a
+single spectrum. The second image dimension, if used, will contain vectors
+directly associated with the images, such as a signal to noise vector.
+If the image is two dimensional the spectrum must be the first image line.
+The database facilities will allow images to be grouped together in a single
+file if desired.
+
+While most or all \fBonedspec\fR operators will expect a one dimensional
+image as input, image sections may be used to operate on vector subsections
+of higher dimensioned images if desired. The datatype of an image is
+arbitrary, but all pixel data will be single precision real within
+\fBonedspec\fR. While the IRAF image format does not impose any restrictions on
+the size of an image or image line, not all spectral operators may be usable
+on very large images. In general, pointwise and local operations may easily
+be performed on images of any size with modest memory requirements, and
+most of the \fBonedspec\fR operations appear to fall into this class.
+
+.nh 2
+The IRAF Database Facilities
+
+ An understanding of the IRAF database facilities is necessary to visualize
+how data will be treated by operators in \fBonedspec\fR and other packages.
+The database facilities will be used not just for image storage but also for
+program intercommunication, program output, and the storage of large
+astronomical catalogs (e.g. the SAO catalog). Access to both small and
+large databases will be quite efficient; achieving this requires little
+innovation since database technology is already highly developed. We begin by
+defining some important terms.
+
+.ls
+.ls DBIO
+The database i/o package, used by compiled programs to access a database.
+.le
+.ls DBMS
+The database management package, a CL level package used by the user to
+inspect, analyze, and manipulate the contents of a database.
+.le
+.ls database
+A set of one or more "relations" or tables (DBIO is a conventional relational
+database). A convenient way to think of an IRAF database is as a directory.
+The relations appear as distinct files in the directory.
+.le
+.ls relation
+A relation is a set of \fBrecords\fR. Each record consists of a set of
+\fBfields\fR, each characterized by a name and a datatype. All the records
+in a relation have the same set of fields. Perhaps the easiest way to
+visualize a relation is as a \fBtable\fR. The rows and columns of the table
+correspond to the records and fields of the relation.
+.le
+.ls field
+A field of a record is characterized by an alphanumeric name, datatype, and
+size. Fields may be one dimensional arrays of variable size. Fields may be
+added to a relation dynamically at run time. When a new field is added to
+a relation it is added to all records in the relation, but the value of the
+field in a particular record is undefined (and consumes no storage) until
+explicitly written into.
+.le
+.ls key
+.br
+A function of the values of one or more fields, used to select a subset of
+rows from a table. Technically, a valid key will permit selection of any
+single row from a table, but we often use the term in a less strict sense.
+.le
+.le
+
+
+An \fBimage\fR appears in the database as a record. The record is really
+just the image header; the pixels are stored external to the database in a
+separate file, storing only the name of the pixel storage file in the record
+itself (for very small images we are considering storing the pixels directly
+in the database file). Note that the record is a simple flat structure;
+this simple structure places restrictions on the complexity of objects which
+can be stored in the database.
+
+The records in a relation form a set, not an array. Records are referred to
+by a user-defined key. A simple key might be a single field containing a
+unique number (like an array index), or a unique name. More complex keys
+might involve pattern matching over one or more fields, selection of records
+with fields within a certain range of values, and so on.
+
+From the viewpoint of \fBonedspec\fR, a relation can be considered a
+\fBdata group\fR, consisting of a set of \fBspectra\fR.
+
+.nh 2
+Image Templates
+
+ The user specifies the set of spectra to be operated upon by means of an
+image template. Image templates are much like the filename templates commonly
+used in operating systems. The most simple template is the filename of
+a single data group; this template matches all spectra in the group. If there
+is only one spectrum in a file, then only one spectrum is operated upon.
+A slightly more complex template is a list of filenames of data groups.
+More complex templates will permit use of expressions referencing the values
+of specific fields to select a subset of the spectra in a group. The syntax
+of such expressions has not yet been defined (examples are given below
+nonetheless), but the function performed by an image template will be the same
+regardless of the syntax. In all cases the image template will be a single
+string valued parameter at the CL level.
+
+.nh 2
+Standard Calling Sequence
+
+ The standard calling sequence for a unary image operator is shown below.
+The calling sequence for a binary operator would be the same with a second input
+parameter added as the second argument. In general, any data dependent
+control parameters should be implemented as positional arguments following
+the primary operands, and data independent or optional (rarely used) parameters
+should be implemented as hidden parameters.
+
+
+.ks
+.nf
+ imop (input, output, data_dependent_control_params)
+
+ imop image operator name
+ input image template specifying set of input images
+ output filename of output datagroup
+
+ data_dependent_control_parameters
+ (hidden parameters)
+
+for example,
+
+ coincor (spectra, newgroup, dead_time)
+.fi
+.ke
+
+
+If a series of spectra are to be processed it seems reasonable to add the
+processed spectra to a new or existing data group (possibly the same as an
+input datagroup). If the operation is to be performed in place a special
+notation (e.g. the null string) can be given as the output filename.
+At the \fBonedspec\fR level output filenames will not be defaulted.
+
+.nh 2
+Examples
+
+ Some examples of image templates might be useful to give a more concrete
+idea of the functionality which will be available. Bear in mind that what we
+are describing here is really the usage of one of the fundamental IRAF system
+interfaces, the DBMS database management subsystem, albeit from the point of
+view of \fBonedspec\fR. The same facilities will be available in any program
+which operates upon images, and in some non-image applications as well (e.g.
+the new \fBfinder\fR). Our philosophy, as always, is to make standard usage
+simple, with considerable sophistication available for those with time to
+learn more about the system.
+
+The simplest case occurs when there is one spectrum per data group (file).
+For example, assuming that the file "a" contains a single spectrum, the
+command
+
+ cl> coincor a, b, .2
+
+would perform coincidence correction for spectrum A, placing the result in
+B, using a dead time parameter of .2. For a more complex example, consider
+the following command:
+
+ cl> coincor "a.type=obj&coincor=no,b", a, .2
+
+This would perform coincidence correction for all spectra in group B plus all
+object spectra in group A which have not already been coincidence corrected,
+adding the corrected spectra to group A (notation approximate only). If the
+user does not trust the database, explicit record numbers may be used and
+referenced via range list expressions, e.g.,
+
+ cl> coincor "a.recnum=(1,5,7:11),b", a, .2
+
+would select records 1, 5, and 7 through 11 from data group A. Alternatively
+the database utilities could be used to list the spectra matching the selection
+criteria prior to the operation if desired. For example,
+
+ cl> db.select "a.type=obj"
+
+would write a table on the standard output (the terminal) wherein each spectrum
+in data group A is shown as a row of field values. If one wanted to generate
+an explicit list of records to be processed with help from the database
+utilities, a set of records could be selected from a data group and selected
+fields from each record written into a text file:
+
+ cl> db.select "a.type=obj", "recnum, history" > reclistfile
+
+The output file "reclistfile" produced by this command would contain the
+fields "recnum" (record number) and "history" (description of processing
+performed to generate the record). The editor could be used to delete
+unwanted records, producing a list of record numbers suitable for use as
+an image template:
+
+ cl> coincor "a.recnum=@reclistfile", a, .2
+
+.nh
+Reduction Operators
+
+.nh 2
+Line List Preparation
+
+ I suggest maintaining the line lists as text files so that the user can
+edit them with the text editor, or process them with the \fBlists\fR operators.
+A master line list might be maintained in a database and the DBMS \fBselect\fR
+operator used to extract ASCII linelists in the wavelength region of interest,
+but this would only be necessary if the linelist is quite large or if a linelist
+record contains many fields. I don't think we need the \fBline_list\fR task.
+
+.nh 2
+Dispersion Solution
+
+ The problem with selecting a line list and doing the dispersion solution
+in separate operations is that the dispersion solution is invaluable as an aid
+for identifying lines and for rejecting lines. Having a routine which merely
+tweaks up the positions of lines in an existing lineset (e.g., \fBalinid\fR)
+is not all that useful. I would like to suggest the following alternate
+procedure for performing the dispersion solution for a set of calibration
+spectra which have roughly the same dispersion.
+
+.ls
+.ls [1] Generate Lineset [and fit dispersion]
+.sp
+Interactively determine the lineset to be used, i.e., wavelength (or whatever)
+and approximate line position in pixel units for N lines. Input is one or more
+comparison spectra and optionally a list of candidate lines in the region
+of interest. Output is the order for the dispersion curve and a linelist of
+the following (basic) form:
+
+ L# X Wavelength [Weight]
+
+It would be very useful if the program, given a rough guess at the dispersion,
+could match the standard linelist with the spectra and attempt to automatically
+identify the lines thus detected. The user would then interactively edit the
+resultant line set using plots of the fitted dispersion curve to reject
+misidentified or blended lines and to adjust weights until a final lineset
+is produced.
+.le
+
+.ls [2] Fit Dispersion
+.sp
+Given the order and functional type of the curve to be fitted and a lineset
+determined in step [1] (or a lineset produced by any other means, e.g. with
+the editor), for each spectrum in the input data group tweak the center of
+each line in the lineset via an automatic centering algorithm, fit the
+dispersion curve, and save the coefficients of the fitted curve in the
+image header. The approximate line positions would be used to find and measure
+the positions of the actual lines, and the dispersion curve would be fitted and
+saved in the image header of each calibration spectrum.
+
+While this operator would be intended to be used noninteractively, the default
+textual and graphics output devices could be the terminal. To use the program
+in batch mode the user would redirect both the standard output and the graphics
+output (if any), e.g.,
+
+.nf
+ cl> dispsol "night1.type=comp", linelistfile, order,
+ >>> device=stdplot, > dispsol.spool &
+.fi
+
+Line shifts, correlation functions, statistical errors, the computed residuals
+in the fitted dispersion curves, plots of various terms of the dispersion
+curves, etc. may be generated to provide a means for later checking for
+erroneous solutions to the individual spectra. There is considerable room for
+innovation in this area.
+.le
+
+.ls [3] Second Order Correction
+.sp
+If it is desired to interpolate the dispersion curve in some additional
+dimension such as time or hour angle, fit the individual dispersion solutions
+produced by [1] or [2] as a group to one or more additional dimensions,
+generating a dispersion solution of one, two or more dimensions as output.
+If the output is another one dimensional dispersion solution, the input
+solutions are simply averaged with optional weights. This "second order"
+correction to a group of dispersion solutions is probably best performed by
+a separate program, rather than building it into \fBalinid\fR, \fBdispsol\fR,
+etc. This makes the other programs simpler and makes it possible to exclude
+spectra from the higher dimensional fit without repeating the dispersion
+solutions.
+.le
+.le
+
+If the batch run [2] fails for selected spectra the dispersion solution for
+those spectra can be repeated interactively with operator [1].
+The curve fitting package should be used to fit the dispersion curve (we can
+extend the package to support \fBonedspec\fR if necessary).
+
+.nh 2
+Dispersion Correction
+
+ The function of this procedure is to change the dispersion of a
+spectrum or group of spectra from one functional form to another.
+At a minimum it must be possible to produce spectra linear in wavelength or
+log wavelength (as specified), but it might also be useful to be able
+to match the dispersion of a spectrum to that of a second spectrum, e.g., to
+minimize the amount of interpolation required to register spectra, or
+to introduce a nonlinear dispersion for testing purposes. This might be
+implemented at the CL parameter level by having a string parameter which
+takes on the values "linear" (default), "log", or the name of a record
+defining the dispersion solution to be matched.
+
+It should be possible for the output spectrum to be a different size than
+the input spectrum, e.g., since we are already interpolating the data,
+it might be nice to produce an output spectrum of length 2**N if Fourier
+analysis is to be performed subsequently. It should be possible to
+extract only a portion of a spectrum (perform subraster extraction) in the
+process of correcting the dispersion, producing an output spectrum of a
+user-definable size. It should be possible for an output pixel to lie at
+a point outside the bounds of the input spectrum, setting the value of the
+output pixel to INDEF or to an artificially generated value. Note that
+this kind of generality can be implemented at the \fBonedspec\fR level
+without compromising the simplicity of dispersion correction for a particular
+instrument at the \fBimred\fR level.
+
+.nh 3
+Line Centering Algorithms
+
+ For most data, the best algorithm in the set described is probably the
+parabola algorithm. To reject nearby lines and avoid degradation of the
+signal to noise the centering should be performed within a small aperture,
+but the aperture should be allowed to move several pixels in either direction
+to find the peak of the line.
+
+The parabola algorithm described has these features,
+but as described it finds the extrema within a window about the
+initial position. It might be preferable to simply walk up the peak nearest
+to the initial center. This has the advantage that it is possible to center
+on a line which has a nearby, stronger neighbor which cannot itself be used
+for some reason, but which might fall within \fBparextent\fR pixels of the
+starting center. The parabola algorithm as described also finds a local extremum
+rather than a local maximum; probably not what is desired for a dispersion
+solution. The restriction to 3 pixels in the final center determination is
+bad; the width of the centering function must be a variable to accommodate
+the wide range of samplings expected.
+
+The parabola algorithm described is basically a grid search over
+2*\fIparextent\fR pixels for a local extremum. What I am suggesting is
+an iterative gradient search for the local maximum. The properties of the
+two algorithms are probably sufficiently different to warrant implementation
+of both as an option (the running times are comparable). I suspect that
+everyone else who has done this will have their own favorite algorithm as
+well; probably we should study half a dozen but implement only one or two.
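+
+For concreteness, the gradient search suggested above might look like the
+following Python sketch (an illustration only, not a specification of the
+eventual centering routine): walk from the initial pixel to the nearest
+local maximum, then refine the center with a parabola through the peak
+pixel and its two neighbors.  All names are invented for the example.
+
+.nf
+    def center_line(data, x0, maxshift=5):
+        # Walk uphill from pixel x0 to the nearest local maximum.
+        n, x = len(data), int(round(x0))
+        for _ in range(maxshift):
+            if x + 1 < n and data[x + 1] > data[x]:
+                x += 1
+            elif x - 1 >= 0 and data[x - 1] > data[x]:
+                x -= 1
+            else:
+                break
+        if x - 1 < 0 or x + 1 >= n:
+            return float(x)
+        # Sub-pixel refinement: vertex of the parabola through the
+        # three pixels around the peak.
+        y0, y1, y2 = data[x - 1], data[x], data[x + 1]
+        denom = y0 - 2.0 * y1 + y2
+        return float(x) if denom == 0.0 else x + 0.5 * (y0 - y2) / denom
+.fi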
+
+.nh 2
+Field Flattening
+
+ It is not clear that we need special flat fielding operators for
+\fBonedspec\fR. We have a two-dimensional operator which fits image lines
+independently and might already do the job. Probably we should experiment
+with both the smoothing spline and possibly Fourier filtering for removing
+the difficult medium frequency fluctuations. The current \fBimred\fR flat field
+operator implements the cubic smoothing spline (along with the Chebyshev and
+Legendre polynomials), and is available for experimentation.
+
+Building interactive graphics into the operator which fits a smooth curve to
+the continuum is probably not necessary. If a noninteractive \fBimred\fR or
+\fBimages\fR operator is used to fit the continuum the interactive graphics
+can still be available, but might better reside in a higher level CL script.
+The basic operator should behave like a subroutine and not write any output
+to the terminal unless enabled by a hidden parameter (we have been calling
+this parameter \fIverbose\fR in other programs).
+
+.nh 3
+Extinction Correction and Flux Calibration
+
+ I did not have time to review any of this.
+
+.nh
+Standard Library Packages
+
+ The following standard IRAF math library packages should be used in
+\fBonedspec\fR. The packages are very briefly described here but are
+fully documented under \fBhelp\fR on the online (kpnob:xcl) system.
+
+.nh 2
+Curve Fitting
+
+ The curve fitting package (\fBcurfit\fR) is currently capable of fitting
+the Chebyshev and Legendre polynomials and the cubic smoothing spline.
+Weighting is supported as an option.
+We need to add a piecewise linear function to support the
+dispersion curves for the high resolution FTS spectra. We may have to add a
+double precision version of the package to provide the 8-10 digits of
+precision needed for typical comparison line wavelength values, but
+normalization of the wavelength values may make this unnecessary for moderate
+resolution spectra.
+
+Ordinary polynomials are not supported because their numerical properties are
+very much inferior to those of orthogonal polynomials (the least squares matrix
+can have a disastrously high condition number, and without normalization the
+function being fitted is not invariant with respect to scale changes and translations
+in the input data). For low order fits the Chebyshev polynomials are
+considered to have the best properties from an approximation theoretic point
+of view, and for high order fits the smoothing spline is probably best because
+it can follow arbitrary trends in the data.
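+
+As a purely numerical illustration of the normalization point (this is not
+the \fBcurfit\fR interface), a Chebyshev fit with the independent variable
+mapped onto [-1,1] is insensitive to shifts and scale changes in the pixel
+coordinate.  The pixel and wavelength values below are invented for the
+example.
+
+.nf
+    import numpy as np
+    from numpy.polynomial import Chebyshev
+
+    # invented line positions (pixels) and wavelengths (Angstroms)
+    pix = np.array([101.3, 250.7, 412.2, 633.9, 801.4])
+    wav = np.array([3650.2, 4046.6, 4358.3, 5460.7, 5769.6])
+
+    # Chebyshev.fit maps the pixel range onto [-1,1] internally (the
+    # "domain"), which keeps the least squares matrix well conditioned.
+    disp = Chebyshev.fit(pix, wav, deg=3)
+    print(disp(500.0))          # predicted wavelength at pixel 500
+.fi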
+
+.nh 2
+Interpolation
+
+ The image interpolation package (\fBiminterp\fR) currently supports the
+nearest neighbor, linear, third and fifth order divided differences,
+cubic interpolating spline, and sinc function interpolators.
+We should add the zeroth and first order partial pixel ("flux conserving")
+interpolants because they offer unique properties not provided by any
+of the other interpolants.
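+
+The zeroth order partial pixel ("flux conserving") rebinning can be
+sketched in a few lines of Python (an illustration, not the \fBiminterp\fR
+code): resample the cumulative flux at the new pixel boundaries and
+difference it, so that total flux is preserved.
+
+.nf
+    import numpy as np
+
+    def rebin_flux_conserving(flux, edges_in, edges_out):
+        # edges_in must be len(flux)+1 increasing bin boundaries.
+        # Cumulative flux at the input bin edges.
+        cum = np.concatenate(([0.0], np.cumsum(flux)))
+        # Linear interpolation of the cumulative flux at the output
+        # edges is exact partial pixel summation for flux that is
+        # constant within each input pixel; difference to get bin sums.
+        cum_out = np.interp(edges_out, edges_in, cum)
+        return np.diff(cum_out)
+.fi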
+
+.nh 2
+Interactive Graphics
+
+ We will define a standard interactive graphics utility package for
+interactive operations upon data vectors (to be available in a system library
+in object form). It should be possible to define a general package which
+can be used anywhere a data vector is to be plotted and
+examined interactively (not just in \fBonedspec\fR). Standard keystrokes
+should be defined for common operations such as expanding a region of
+the plot and restoring the original scale. This will not be attempted
+until an interactive version of the GIO interface is available later this
+fall.
+.endhelp
diff --git a/noao/onedspec/doc/sys/TODO b/noao/onedspec/doc/sys/TODO
new file mode 100644
index 00000000..0dfa136b
--- /dev/null
+++ b/noao/onedspec/doc/sys/TODO
@@ -0,0 +1,28 @@
+scombine:
+ 1. Combine with weights:
+ By signal level
+ By sigma spectrum
+
+doc:
+ Install SENSFUNC memo in the doc directory. (8/14)
+
+calibrate:
+ Have calibrate apply neutral density filter function. This may also
+ have to be included in STANDARD and SENSFUNC. (2/25/87)
+
+splot:
+ Add a deblend option for PCYGNI profiles. (Tyson, 3/19/87)
+
+Tim Heckman (U. Maryland) came by with questions and requests
+concerning deblending in SPLOT. Tim's comments are indicated in
+quotation marks.
+
+2. "The deblending should allow additional constraints if known.
+Specifically fixing the ratios of lines based on atomic physics."
+
+3. "The deblending should provide some uncertainty estimates." I added
+that there has also been a request to use known statistics in the
+pixel data themselves to generate uncertainty estimates.
+
+4. "It would be useful to provide other choices for the profile rather
+than just gaussians."
diff --git a/noao/onedspec/doc/sys/coincor.ms b/noao/onedspec/doc/sys/coincor.ms
new file mode 100644
index 00000000..1b4d29cc
--- /dev/null
+++ b/noao/onedspec/doc/sys/coincor.ms
@@ -0,0 +1,46 @@
+.EQ
+delim $$
+.EN
+.OM
+.TO
+IIDS Users
+.FR
+F. Valdes
+.SU
+IIDS count rate corrections
+.PP
+The IRAF task \fBcoincor\fR transforms the observed count rates to
+something proportional to the input count rate. The correction applied
+to the observed count rates depends upon the count rate and is instrument
+dependent. One correction common to photomultiplier detectors and the
+IIDS is for coincident events, which is the origin of the task name.
+The parameter \fIccmode\fR selects a particular type of correction.
+The value \fIccmode\fR = "iids" applies the following transformation to
+observed IIDS count rates.
+
+.EQ (1)
+ C sup ' ~=~(- ln (1- deadtime C)/ deadtime ) sup power
+.EN
+
+where $C$ is the original count rate, $C sup '$ is the corrected count
+rate, and $deadtime$ and $power$ are \fBcoincor\fR parameters. The term
+inside the parentheses is the correction for dead-time in the counting
+of coincident events on the back phosphor of the image tube. The power
+law correction is due to the non-linearity of the IIDS image tube chain.
+.PP
+The correction applied with the Mountain Reduction Code is only for
+coincidences, i.e. equation (1) with $power = 1$. To obtain just this
+correction with \fBcoincor\fR set $power = 1$. To take mountain reduced
+data and correct only for the non-linearity set \fIccmode\fR = "power".
+With raw IIDS data use \fBcoincor\fR with the default
+parameters.
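+.PP
+For illustration only (this is not part of the \fBcoincor\fR task itself),
+equation (1) can be written as the short Python function below.  The
+default deadtime shown is the IIDS value used elsewhere in these notes;
+with $power = 1$ the function reduces to the coincidence-only correction
+of the Mountain Reduction Code.
+.DS L
+from math import log
+
+def iids_correct(rate, deadtime=1.424e-3, power=1.0):
+    # Equation (1): dead-time (coincidence) correction followed by a
+    # power law correction for the image tube chain non-linearity.
+    return (-log(1.0 - deadtime * rate) / deadtime) ** power
+.DE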
+
+.LP
+References:
+.IP (1)
+L. Goad, \fBSPIE 172\fR, 1979, p. 86.
+.IP (2)
+G. Jacoby, Some Notes on the ONEDSPEC Package, \fBIRAF Handbook\fR
+.IP (3)
+P. Massey and J. De Veny, How Linear is the IIDS, \fBNOAO Newsletter\fR,
+#6, June 1986.
diff --git a/noao/onedspec/doc/sys/identify.ms b/noao/onedspec/doc/sys/identify.ms
new file mode 100644
index 00000000..6a69204b
--- /dev/null
+++ b/noao/onedspec/doc/sys/identify.ms
@@ -0,0 +1,347 @@
+.RP
+.TL
+Radial Velocity Measurements with IDENTIFY
+.AU
+Francisco Valdes
+.AI
+IRAF Group - Central Computer Services
+.K2
+P.O. Box 26732, Tucson, Arizona 85726
+August 1986
+Revised August 1990
+.AB
+The IRAF task \fBidentify\fP may be used to measure radial velocities.
+This is done using the classical method of determining
+the doppler shifted wavelengths of emission and absorption lines.
+This paper covers many of the features and techniques available
+through this powerful and versatile task which are not immediately
+evident to a new user.
+.AE
+.sp 3
+.NH
+\fBIntroduction\fP
+.PP
+The task \fBidentify\fP is very powerful and versatile. It can
+be used to measure wavelengths and wavelength shifts for
+doing radial velocity measurements from emission and
+absorption lines. When combined with the CL's ability
+to redirect input and output both from the standard text
+streams and the cursor and graphics streams virtually
+anything may be accomplished either interactively or
+automatically. This, of course, requires quite a bit of
+expertise and experience with \fBidentify\fP and with
+the CL which a new user is not expected to be aware of initially.
+This paper attempts to convey some of the possibilities.
+There are many variations on these methods which the user
+will learn through experience.
+.PP
+I want to make a caveat about the suggestions made in
+this paper. I wrote the \fBidentify\fP task and so I am
+an expert in its use. However, I am not a spectroscopist,
+I have not been directly involved in the science of
+measuring astronomical radial velocities, and I am not
+very familiar with the literature. Thus, the suggestions
+contained in this paper are based on my understanding of
+the basic principles and the abilities of the \fBidentify\fP
+task.
+.PP
+The task \fBidentify\fP is used to measure radial velocities
+by determining the wavelengths of individual emission
+and absorption lines. The user must compute the
+radial velocities separately by relating the observed
+wavelengths to the known rest wavelengths via the Doppler
+formula. This is a good method when the lines are
+strong, when there are only one or two features, or
+when there are many, possibly weaker, lines. The
+accuracy of this method is determined by the accuracy
+of the line centering algorithm.
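+.PP
+For reference, the classical (non-relativistic) conversion from a measured
+wavelength to a radial velocity is, written as Python (this step is left
+to the user; it is not performed by \fBidentify\fP):
+.nf
+.ft CW
+
+    def radial_velocity(lam_obs, lam_rest, c=299792.458):
+        # classical, non-relativistic Doppler shift; c in km/s
+        return c * (lam_obs - lam_rest) / lam_rest
+
+.ft P
+.fi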
+.PP
+The alternative method is to compare an observed spectrum
+to a template spectrum of known radial velocity. This
+is done by correlation or fourier ratio methods. These
+methods have the advantage of using all of the spectrum
+and are good when there are many very weak and possibly
+broad features. Their disadvantages are confusion
+with telluric lines, they don't work well with just a
+few real features, and they require a fair amount of
+preliminary manipulation of the spectrum to remove
+continuum and interpolate the spectrum in logarithmic
+wavelength intervals. IRAF tasks for correlation
+and fourier ratio methods are under development at
+this time. Many people assume that these more abstract
+methods are inherently better than the classical method.
+This is not true; it depends on the quality and type of
+the data.
+.PP
+Wavelength measurements are best done on the original
+data rather than after linearizing the wavelength
+intervals. This is because 1) it is not necessary as
+will be shown below and 2) the interpolation used to
+linearize the wavelength scale can change the shape
+of the lines, particularly strong, narrow emission
+lines which are the best ones for determining radial
+velocities.
+.PP
+This paper is specifically about \fBidentify\fP but one
+should be aware of the task \fBsplot\fP which also may
+be used to measure radial velocities. It differs in
+several respects from \fBidentify\fP. \fBSplot\fP works
+only on linearized data; the wavelength and pixel
+coordinates are related by a zero point and wavelength
+interval. The line centering algorithms are different;
+the line centering is generally less robust (tolerant
+of error) and often less accurate. It has many nice
+features but is not designed for the specific purpose
+of measuring positions of lines and, thus, is not as
+easy to use for this purpose.
+.PP
+There are a number of sources of additional information
+relating to the use of the task \fBidentify\fP. The
+primary source is the manual pages for the task. As
+with all manual pages it is available online with the
+\fBhelp\fP command and in the \fIIRAF User Handbook\fP.
+The NOAO reduction guides or cookbooks for the echelle
+and IIDS/IRS include additional examples and discussion.
+The line centering algorithm is the most critical
+factor in determining dispersion solutions and radial
+velocities. It is described in more detail under the
+help topic \fBcenter1d\fP online or in the handbook.
+.NH
+Method 1
+.PP
+In this method, arc calibration images are used to determine
+a wavelength scale. The dispersion solution is then transferred
+to the object spectrum and the wavelengths of emission and
+absorption lines are measured and recorded. This is
+relatively straightforward but some tricks will make this easier
+and more accurate.
+.NH 2
+Transferring Dispersion Solutions
+.PP
+There are several ways to transfer the dispersion solution
+from an arc spectrum to an object spectrum differing in the
+order in which things are done.
+.IP (1)
+One way is to determine the dispersion solution for all the arc images
+first. To do this interactively specify all the arc images as the
+input to \fBidentify\fP. After determining the dispersion solution for
+the first arc and quitting (\fIq\fP key) the next arc will be displayed
+with the previous dispersion solution and lines retained. Then use the
+cursor commands \fIa\fP and \fIc\fP (all center) to recenter and
+\fIf\fP (fit) to recompute the dispersion solution. If large shifts
+are present use \fIs\fP (shift) or \fIx\fR (correlate peaks) to shift,
+recenter, and compute a wavelength zero point shift to the dispersion
+function. A new dispersion function should then be fit with \fIf\fP.
+These commands are relatively fast and simple.
+.IP
+An important reason for doing all the arc images first
+is that the same procedure can be done mostly noninteractively
+with the task \fBreidentify\fP. After determining a
+dispersion solution for one arc image \fBreidentify\fP
+does the recenter (\fIa\fP and \fIc\fP), shift and
+recenter (\fIs\fP), or correlation features, shift, and
+recenter (\fIx\fP) to transfer the dispersion solutions
+between arcs. This is usually done as a background task.
+.IP
+To transfer the solution to the object spectra specify
+the list of object spectra as input to \fBidentify\fP.
+For each image begin by entering the colon command
+\fI:read arc\fP where arc is the name of the arc image
+whose dispersion solution is to be applied; normally
+the one taken at the same time and telescope position as
+the object. This will read the dispersion solution and arc
+line positions. Delete the arc line positions with the
+\fIa\fP and \fId\fP (all delete) cursor keys. You
+can now measure the wavelengths of lines in the spectrum.
+.IP (2)
+An alternative method is to interactively alternate between
+arc and object spectra either in the input image list or
+with the \fI:image name\fP colon command.
+.NH 2
+Measuring Wavelengths
+.IP (1)
+To record the feature positions at any time use the \fI:features
+file\fP colon command, where \fIfile\fP is the file to which the feature
+information will be written. Repeating this with the same
+file appends to the file. Writing to the database with the
+\fI:write\fP colon command also records this information.
+Without an argument the results are put in a file with
+the same name as the image and a prefix of "id". You
+can use any name you like, however, with \fI:write
+name\fP. The \fI:features\fP command is probably preferable
+because it only records the line information while the
+database format includes the dispersion solution and
+other information not needed for computing radial
+velocities.
+.IP (2)
+Remember that when shifting between emission and absorption
+lines the parameter \fIftype\fP must be changed. This may be done
+interactively with the \fI:ftype emission\fP and \fI:ftype
+absorption\fP commands. This parameter does not need to be
+set except when changing between types of lines.
+.IP (3)
+Since the centering of the emission or absorption line is the
+most critical factor, one should experiment with the parameter
+\fIfwidth\fP. To change this parameter type \fI:fwidth value\fP.
+The positions of the marked features are not changed until a
+center (\fIc\fP) command is given.
+.IP
+A narrow \fIfwidth\fP is less influenced by blends and wings but
+has a larger uncertainty. A broad \fIfwidth\fP uses all of the
+line profile and is thus stable but may be systematically influenced
+by blending and wings. One possible approach is to measure
+the positions at several values of \fIfwidth\fP and decide which
+value to use or use some weighting of the various measurements.
+You can record each set of measurements with the \fI:fe
+file\fP command.
+.IP (4)
+For calibration of systematic effects from the centering one should
+obtain the spectrum of a similar object with a known radial
+velocity. The systematic effect is due to the fact that the
+centering algorithm is measuring a weighted function of the
+line profile which may not be the true center of the line as
+tabulated in the laboratory or in a velocity standard. By
+using the same centering method on an object with the same line
+profiles and known velocity this effect can be eliminated.
+.IP (5)
+Since the arcs are not obtained at precisely the same time
+as the object exposures, there may be a wavelength shift relative
+to the arc dispersion solution. This may be calibrated from
+night sky lines in the object itself (the night sky lines are
+"good" in this case and should not be subtracted away). There are
+generally not enough night sky lines to act as the primary
+dispersion calibrator but just one can determine a possible
+wavelength zero point shift. Measure the night sky line
+positions at the same time the object lines are measured.
+Determine a zero point shift from the night sky to be
+taken out of the object lines.
+.NH
+Method 2
+.PP
+This method is similar to the correlation method in that a
+template spectrum is used and the average shift relative
+to the template measures the radial velocity. This has the
+advantage of not requiring the user to do a lot of calculations
+(the averaging of the line shifts is done by identify) but is
+otherwise no better than method 1. The template spectrum must
+have the same features as the object spectrum.
+.IP (1)
+Determine a dispersion solution for the template spectrum
+either from the lines in the spectrum or from an arc calibration.
+.IP (2)
+Mark the features to be correlated in the template spectrum.
+.IP (3)
+Transfer the template dispersion solution and line positions
+to an object spectrum using one of the methods described
+earlier. Then, for the current feature, point the cursor near
+the same feature in the object spectrum and type \fIs\fP. The
+mean shift in pixels, wavelength, and fractional wavelength (like
+a radial velocity without the factor of the speed of light)
+for the object is determined and printed. A new dispersion
+solution is determined but you may ignore this.
+.IP (4)
+When doing additional object spectra, remember to start over
+again with the template spectrum (using \fI:read template\fP)
+and not the solution from the last object spectrum.
+.IP (5)
+This procedure assumes that the dispersion solution between
+the template and object are the same. Checks for zero point
+shifts with night sky lines, as discussed earlier, should be
+made if possible. The systematic centering bias, however, is
+accounted for by using the same lines from the template radial
+velocity standard.
+.IP (6)
+One possible source of error is attempting to use very weak
+lines. The recentering may find the wrong lines and affect
+the results. The protections against this are the \fIthreshold\fP
+parameter and setting the centering error radius to be relatively small.
+.NH
+Method 3
+.PP
+This method uses only strong emission lines and works with
+linearized data without an \fBidentify\fP dispersion
+solution; though remember the caveats about rebinning the
+spectra. The recipe involves measuring
+the positions of emission lines. The
+strongest emission lines may be found automatically using
+the \fIy\fP cursor key. The number of emission lines to
+be identified is set by the \fImaxfeatures\fP parameter.
+The emission line positions are then written to a data file
+using the \fI:features file\fP colon command. This may
+be done interactively and takes only a few moments per
+spectrum. If done interactively, the images may be chained
+by specifying an image template. The only trick required
+is that when proceeding to the next spectrum the previous
+features are deleted using the cursor key combination \fIa\fP
+and \fId\fP (all delete).
+.PP
+For a large number of images, on the order of hundreds, this
+may be automated as follows. A file containing the cursor
+commands is prepared. The cursor command format consists
+of the x and y positions, the window (usually window 1), and
+the key stroke or colon command. Because each new image from
+an image template does not restart the cursor command file,
+the commands would have to be repeated for each image in
+the list. Thus, a CL loop calling the task each time with
+only one image is preferable. Besides redirecting the
+cursor input from a command file, we must also redirect the
+standard input for the response to the database save query, the
+standard output to discard the status line information, and,
+possibly, the graphics to a metacode file which can then be
+reviewed later. The following steps indicate what is to be
+done.
+.IP (1)
+Prepare a file containing the images to be measured (one per line).
+This can usually be done using the sections command to expand
+a template and directing the output into a file.
+.IP (2)
+Prepare a cursor command file (let's call it cmdfile)
+containing the following two lines.
+.RS
+.IP
+.nf
+.ft CW
+1 1 1 y
+1 1 1 :fe positions.dat
+.ft P
+.fi
+.RE
+.IP (3)
+Enter the following commands.
+.RS
+.IP
+.nf
+.ft CW
+list="file"
+while (fscan (list,s1) !=EOF){
+print ("no") \(or identify (sl,maxfeatures=2, cursor="cmdfile",
+>"dev$null", >G "plotfile")
+}
+.ft P
+.fi
+.RE
+.LP
+Note that these commands could be put in a CL script and executed
+using the command
+.sp
+.IP
+.ft CW
+on> cl <script.cl
+.ft P
+.sp
+.PP
+The commands do the following. The first command initializes the
+image list for the loop. The second command is the loop to
+be run until the end of the image file is reached. The
+command in the loop directs the string "no" to the standard
+input of identify which will be the response to the database save
+query. The identify command uses the image name obtained from the list
+by the fscan function, sets the maximum number of features to be
+found to 2 (this can be set using \fBeparam\fP instead), takes the
+cursor input from the cursor command file, discards the standard
+output to the null device, and redirects the STDGRAPH output
+to a plot file. If the plot file redirection is
+not used, the graphs will appear on the specified graphics
+device (usually the graphics terminal). The plot file can then
+be disposed of using the \fBgkimosaic\fP task to either the
+graphics terminal or a hardcopy device.
diff --git a/noao/onedspec/doc/sys/onedproto.ms b/noao/onedspec/doc/sys/onedproto.ms
new file mode 100644
index 00000000..b1b05201
--- /dev/null
+++ b/noao/onedspec/doc/sys/onedproto.ms
@@ -0,0 +1,1673 @@
+.RP
+.ND
+.TL
+Some Notes on the ONEDSPEC Package
+.AU
+G. Jacoby
+.AI
+.K2 "" "" "*"
+June 1985
+.AB
+The first phase of the ONEDSPEC prototype package is complete.
+Comments and some internal description are presented for each task
+in the package. Also presented are some more global descriptions
+of strategies used in the package and considerations for future
+improvements.
+.AE
+.SH
+1. Why is ONEDSPEC Different?
+.PP
+This section describes some of the ways in which the ONEDSPEC
+package diverges from other IRAF package strategies.
+A few of these should someday be modified to more closely
+adhere to IRAF conventions, but in other cases, restrictions
+or limitations in the IRAF system are revealed.
+.sp 1
+.SH
+Quantity
+.PP
+One of the major differences between a two dimensional image processing
+package and a one dimensional package is that spectra
+frequently congregate in groups of hundreds to thousands while two-dimensional
+images live in groups of tens to hundreds. What this means is that spectral
+processing must be somewhat more automated and streamlined - the software cannot
+rely on user input to provide assistance and it cannot afford
+excessive overhead; otherwise a large fraction of the processing time will be
+spent where it is least useful.
+.PP
+To process large volumes of spectra in a reasonably automated fashion,
+the software must be smart enough to know what to do with a variety
+of similar but different spectra. The way adopted here is to key
+off header parameters which define the type of spectrum and the
+processing required. In fact, most of the ONEDSPEC package will not
+work smoothly without some header parameter information.
+.PP
+It is also important that each task be self-reliant so that the
+overhead of task stop and restart is avoided. For many operations,
+the actual computation time is a fraction of a second, yet no
+operation in the ONEDSPEC package is faster than one second per spectrum
+due to task overhead. If task startup and stop were required for each
+spectrum, then the overhead would be much worse.
+.PP
+So the philosophy is one in which each task uses as much information
+as it can reasonably expect from the spectral image header.
+Usually this is not more than three or four elements.
+The strategy of using header information should not be limited to
+ONEDSPEC. Many image processing problems can be automated
+to a large degree if header information is used. The success of
+the KPNO CCD Mountain reduction system emphasizes this point.
+It would seem prudent that other IRAF applications make use of
+such information when possible.
+[See section 3 for a more detailed discussion of headers.]
+.sp 1
+.SH
+Spectral Image Names
+.PP
+One implication of the quantity problem is that it must be easy for the user to
+specify the names of large numbers of spectra. The approach taken for ONEDSPEC
+was to assign a root name to a group of spectra and then
+append an index number of 4 or more digits starting with 0000.
+So spectra, by default, have the form root.0000, root.0001, ...
+To specify the spectra, the user types only the root name and the range
+of indices such as "root" and "0-99,112-113,105-108".
+The range decoder accesses the spectral indices in the order given
+rather than in ascending order, so that the spectrum root.0112
+will be processed before root.0105 in the example specification above.
+Spectra having more general names may be specified using the
+standard IRAF filename expansion methods if the
+range specification is given as null.
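+.PP
+The behavior of the range decoder can be illustrated with a few lines of
+Python (an illustration of the ordering and name format only, not the
+actual decoder):
+.DS L
+def expand(root, ranges):
+    # "0-99,112-113,105-108" -> root.0000, root.0001, ... in the order given
+    for field in ranges.split(","):
+        lo, sep, hi = field.partition("-")
+        for i in range(int(lo), int(hi or lo) + 1):
+            yield "%s.%04d" % (root, i)
+.DE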
+.PP
+The specification of large numbers of images is an area where
+most IRAF applications are weak. Resorting to odd combinations
+of bracket and backslash characters in filename specifications
+is obscure to new users and still fails to
+meet the general need. The range specification adopted for ONEDSPEC
+comes closer but introduces a fixed image name format.
+.sp 1
+.SH
+Apertures -- A way to group data
+.PP
+Many spectrographs generate multiple spectra simultaneously by
+placing more than one slit or aperture in the focal plane.
+Examples include the IIDS, IRS, and Cryogenic Camera in use
+at Kitt Peak. The Echelle may be considered a multi-aperture
+instrument for purposes of reductions by associating each order
+with an "aperture" number.
+.PP
+The concept of aperture can be generalized to indicate a set of
+spectral data having common group properties such as
+wavelength coverage. Most tasks in ONEDSPEC will key off
+an aperture number in the image header and treat those
+common aperture spectra uniformly.
+Defining data groups which are to be processed in this fashion
+is a technique not generally exploited by reduction programs.
+This is due in part to the problem of image header usage.
+.PP
+For programming convenience and to avoid an additional level
+of indirection, in ONEDSPEC the aperture number is used directly as an
+index in many static arrays. The current implementation has
+a declaration for 50 apertures and due to the IIDS/IRS
+notation of apertures 0 and 1, the apertures are zero-indexed, contrary
+to standard IRAF nomenclature,
+from 0-49. It would certainly be better to map the aperture numbers
+to the allowable index range, but the added complexity of another
+level of indirection seemed distracting. Actually the mapping
+can still be done by the header reader, "load_ids_hdr", and
+unmapped by the header writer, "store_keywords".
+.sp 1
+.SH
+Static versus dynamic arrays
+.PP
+Although dynamic storage would alleviate some of the memory
+requirements in the package, the use of static arrays aids
+readability and accounts for only about 10 percent of the total
+task memory space. Many of the arrays are arrays of pointers.
+For example, in the task BSWITCH, there is an array (called "imnames")
+of pointers for the names of spectral images, several for each aperture.
+The actual space for the names is dynamically allocated,
+so first we allocate an array of pointers for each
+aperture:
+.sp 1
+.DS
+ call salloc (imnames[aperture], nr_names, TY_POINT)
+.DE
+.sp 1
+Then, for each of these pointers, space must be allocated for the
+character arrays:
+.sp 1
+.DS
+ do i = 1, nr_names
+ call salloc (Memp[imnames[aperture]+i-1], SZ_LINE, TY_CHAR)
+.DE
+.sp 1
+Later to access the character strings, a name is specified as:
+.sp 1
+.DS
+ Memc[Memp[imnames[aperture]+nr_of_this_spectrum-1]]
+.DE
+.sp 1
+If the "imnames" array was also dynamically allocated, the
+above access would be even less readable.
+If memory requirements become a serious problem, then these ONEDSPEC
+tasks should be modified.
+.sp 1
+.SH
+Output image names
+.PP
+To retain the consistent usage of root names and ranges, output
+spectra also have the form root.nnnn. For user convenience,
+the current output root name and next suffix are maintained as
+package parameters onedspec.output and onedspec.next_rec.
+The latter parameter is automatically updated each time a
+new spectrum is written. This is done by the individual tasks
+which directly access this package parameter.
+.PP
+There is an interesting side effect when using indirect parameters
+(e.g. onedspec.output) for input. In the local task parameter
+file, the mode of the parameter must be declared hidden. So when the user
+does an "lpar task", those parameters appear to be unnecessary
+(that is, they are enclosed in parenthesis). When run,
+prompts appear because the parameter is an automatic mode
+parameter in the package parameter file.
+If run as a background task, this is more annoying.
+Unfortunately, any other choice of parameter modes produces
+less desirable actions.
+.sp 1
+.SH
+ONEDUTIL
+.PP
+As the number of tasks in ONEDSPEC started growing, the
+need for a subdivision of the package became clear.
+The first cut was made at the utility level, and a number
+of task names (not necessarily physical tasks) were
+moved out into the ONEDUTIL submenu. Additional tasks will
+eventually require another subpackage.
+.PP
+Actually, many of the tasks in ONEDUTIL may be more at home
+in some other package, but a conscious effort was made to
+avoid contaminating other IRAF packages with tasks written for
+the ONEDSPEC project. If all the following tasks are relocated,
+then the need for ONEDUTIL is reduced.
+.PP
+Two of the entries in ONEDUTIL may be considered as more appropriate
+to DATAIO - RIDSMTN and WIDSTAPE. In fact RIDSMTN can
+replace the version currently in DATAIO. WIDSTAPE may replace the
+DATAIO task WIDSOUT if the usage of header parameters does not
+present a problem.
+.PP
+The task MKSPEC may be a candidate for the ARTDATA package.
+It should be enhanced to include optional noise generation.
+Also, it may be appropriate for SINTERP to replace INTERP
+in the UTILITY package.
+.PP
+I suppose one could argue that SPLOT belongs in the PLOT package.
+Certainly, the kludge script BPLOT should be replaced by a more
+general batch plot utility in PLOT.
+Also, the two task names, IDENTIFY and REIDENTIFY are present
+in the ONEDSPEC menu for user convenience, but the task declarations
+in ONEDSPEC.CL refer to tasks in the LONGSLIT package.
+.PP
+Because ONEDUTIL is a logical separation of the tasks, not
+a complete physical task breakup, there is no subdirectory
+for ONEDUTIL as there is in other packages. This is a bit messy
+and it may be best to completely disentangle the tasks in the
+subpackage into a true package, with all that implies.
+.LP
+.SH
+2. Task Information
+.PP
+There are currently about 30 tasks in the ONEDSPEC package.
+These are summarized in the menu listing below and
+a brief description of some less obvious aspects of each follows.
+.sp 1
+.DS L
+ ONEDSPEC
+
+ addsets - Add subsets of strings of spectra
+ batchred - Batch processing of IIDS/IRS spectra
+ bswitch - Beam-switch strings of spectra to make obj-sky pairs
+ calibrate - Apply sensitivity correction to spectra
+ coincor - Correct spectra for photon coincidence
+ dispcor - Dispersion correct spectra
+ extinct - Correct data for atmospheric extinction
+ flatfit - Sum and normalize flat field spectra
+ flatdiv - Divide spectra by flat field
+ identify - Identify features in spectrum for dispersion solution
+ iids - Set reduction parameters for IIDS
+ irs - Set reduction parameters for IRS
+ onedutil - Enter ONEDSPEC Utility package
+ process - A task generated by BATCHRED
+ reidentify- Automatically identify features in spectra
+ sensfunc - Create sensitivity function
+ slist - List spectral header elements
+ splot - Preliminary spectral plot/analysis
+ standard - Identify standard stars to be used in sensitivity calc
+ subsets - Subtract pairs in strings of spectra
+
+ ONEDUTIL
+
+ bplot - Batch plots of spectra
+ coefs - Extract mtn reduced coefficients from henear scans
+ combine - Combine spectra having different wavelength ranges
+ lcalib - List calibration file data
+ mkspec - Generate an artificial spectrum
+ names - Generate a list of image names from a string
+ rebin - Rebin spectra to new dispersion parameters
+ ridsmtn - Read IIDS/IRS mountain format tapes
+ sinterp - Interpolate a table of x,y pairs to create a spectrum
+ widstape - Write Cyber format IDSOUT tapes
+.DE
+.sp 1
+.SH
+ADDSETS
+.PP
+Spectra for a given object may have been observed through more than
+one instrument aperture. For the IIDS and IRS, this is the most common
+mode of operation. Both apertures are used to alternately observe
+the program objects.
+.PP
+Each instrument aperture may be considered an
+independent instrument having unique calibration properties, and
+the observations may then be processed completely independently
+until fully calibrated. At that point the data may be combined to
+improve signal-to-noise and reduce systematic errors associated
+with the alternating observing technique. Because the data are
+obtained in pairs for IIDS and IRS (but may be obtained in groups
+of larger sizes from other instruments), ADDSETS provides a way
+to combine the pairs of observations.
+.PP
+Each pair in the input string is added to produce a single output
+spectrum. Although the word "pair" is used here, the parameter
+"subset" defines the number of elements in a "pair" (default=2).
+The input string is broken down into groups where each group
+consists of the pair of spectra defined in order of the input
+list of image names.
+.PP
+"Add" in ADDSETS means:
+.RS
+.IP 1.
+Average the pairs if the data are calibrated to flux (CA_FLAG=0)
+optionally weighted by the integration time.
+.IP 2.
+Add the pairs if uncalibrated (CA_FLAG=-1).
+.RE
+.sp 1
+.SH
+BATCHRED
+.PP
+This is a script task which allows spectra from dual aperture instruments
+to be processed completely in a batch mode after the initial wavelength
+calibration and correction has been performed. The processes which
+may be applied and the tasks referenced are:
+.RS
+.IP 1.
+Declaring observations as standard stars for flux calibration (STANDARD).
+.IP 2.
+Solving for the sensitivity function based on the standard stars (SENSFUNC).
+.IP 3.
+Generating object minus sky differences and summing individual
+observations if several were made (BSWITCH).
+.IP 4.
+Correcting for atmospheric extinction (BSWITCH).
+.IP 5.
+Applying the system sensitivity function to generate flux calibrated
+data (CALIBRATE).
+.IP 6.
+Adding pairs of spectra obtained through the dual apertures (ADDSETS).
+.RE
+Any or all of these operations may be selected through the task
+parameters.
+.PP
+BATCHRED generates a secondary script task called PROCESS.CL
+which is a text file containing constructed commands to the
+ONEDSPEC package. This file may be edited by the user if an
+entry to BATCHRED is incorrect. It may also be saved, or appended
+by further executions of BATCHRED.
+.PP
+BATCHRED also generates a log file of the output generated by the
+ONEDSPEC tasks it calls.
+.sp 1
+.SH
+BSWITCH
+.PP
+This task combines multiple observations of a single object
+or multiple objects taken through a multiaperture instrument.
+Object minus sky differences are generated as pairs of
+spectra are accumulated, then optionally corrected for
+atmospheric extinction, and the differences added together
+with optional weighting using counting statistics.
+Each instrument aperture is considered an independent
+device.
+.PP
+Despite the apparently simple goal of this task, it is probably
+the most complicated in the ONEDSPEC package due to the
+bookkeeping load associated with automated handling of large data sets
+having a number of properties associated with each spectrum (e.g
+object or sky, aperture number, exposure times).
+.PP
+There are several modes in which BSWITCH can operate. The mode
+appropriate to the IIDS and IRS assumes that the spectra
+are input in an order such that after 2N (N=number of
+instrument apertures) spectra have been
+accumulated, an equal number of object and sky spectra have been
+encountered in each aperture.
+When in this mode, a check is made after 2N spectra
+have been processed, and the optional extinction correction is
+applied to the differences of the object minus sky, and then
+(optionally weighted and) added into an accumulator for the aperture.
+.PP
+If the IIDS mode is switched off, then no guarantee can be
+made that sky and object spectra pair off. If extinction
+correction is required, it is performed on each spectrum
+as it arrives, including sky spectra if any. The spectra are
+then added into separate accumulators for object and sky for
+each aperture after optional weighting is applied.
+.PP
+If after all spectra have been processed, there are no sky
+spectra, the object spectrum is written out. If there is no
+object spectrum, the sky spectrum is written out after
+multiplying by -1. (This allows adding an object later on with
+addsets, but the -1 multiply is probably a mistake.)
+If at least one of each, object and sky spectra were encountered,
+then the difference is computed and written out. Since
+all accumulations are performed in count rates and later converted
+back to counts, the object and sky spectra may have different
+exposure times (non IIDS mode only).
+.PP
+A statistics file is maintained to provide an indication of the
+quality of the individual spectra going into the sum. The
+statistics information is maintained internally and only
+written out after the sums have been generated.
+The basic data in the file is the count rate of the spectrum
+having the largest count rate, and the ratios of the count rates from
+all other spectra to that one.
+.PP
+If weighting is selected, the weights are taken as proportional to
+the count rate (prior to extinction correction) over a wavelength
+delimited region of the spectrum. (Perhaps the weight
+should be proportional to counts, not count rate.)
+The default wavelength region is the entire spectrum.
+If the total count rate is negative, the weight is assigned
+a value of 0.0 and will be disregarded in the sum. (The counts
+may be negative if the object minus sky difference approaches zero
+on a bright and cloudy night.)
+.PP
+If extinction is selected, an extinction table is read from the
+package calibration file. An optional additive term may be applied
+as computed by the system sensitivity task SENSFUNC which is placed
+in the parameter sensfunc.add_const. A revision to the standard
+extinction table (delta extinction as a function of wavelength)
+may be read from a text file whose name is specified by the parameter
+sensfunc.rev_ext_file. The file format is that of a text file
+having pairs of (wavelength, delta extinction) on each line.
+[The option to solve for this function in SENSFUNC has not yet been
+implemented, but BSWITCH can read the file that would be generated.
+Thus, one can experiment with revisions, although this has never been
+tested.] BSWITCH will interpolate the values given in the file
+so that a coarse estimate of the revision may be entered, say if the
+deltas at U, B, V, R, and I are known.
+.PP
+BEWARE that the extinction correction is performed assuming the
+header parameters used for airmass refer to a "mean" airmass value
+for the exposure. In general the header value is wrong! It usually
+refers to the beginning, middle, or end of the exposure. I have
+never seen a header airmass value which was an equivalent airmass
+for the duration of the exposure. This is partly because there is
+no way to compute a single effective airmass; it is a function
+of wavelength, telescope position as a function of time, and
+the extinction function. Fortunately, for most observations
+this is not very significant. But anyone taking a one hour exposure near
+3500 Angstroms at airmass values greater than 2, should not complain
+when the fluxes look a bit odd.
+.sp 1
+.SH
+CALIBRATE
+.PP
+Having a system sensitivity function allows the data to be
+placed on an absolute flux scale. CALIBRATE performs this
+correction using the output sensitivity function from SENSFUNC. Operations are
+keyed to the instrument aperture, and a system sensitivity
+function is required for each observing aperture, although
+this requirement may be overridden.
+.PP
+A valid exposure time is required (a value of 1.0 should
+probably be assumed if not present) to compute the observed
+count rate. Input counts are transformed to units of
+ergs/cm2/sec/Angstrom (or optionally ergs/cm2/sec/Hz).
+CALIBRATE will calibrate two dimensional images as well, applying the
+sensitivity function to all image lines.
+.PP
+The operation is performed on a pixel-by-pixel basis so that
+the defined sensitivity function should overlap precisely
+with data in terms of wavelength.
+.sp 1
+.SH
+COINCOR
+.PP
+This task applies a statistical correction to each pixel
+to account for undetected photoevents as a result of
+coincidental arrival of photons. This is a detector
+specific correction, although the photoelectric detector
+model provides a reasonable correction for many detectors
+when a judicious value for the deadtime parameter is chosen.
+This model assumes that the correction follows the
+typical procedures applied to photoelectric photometer data:
+.sp 1
+.DS L
+ Ic = Io * exp [Io * dt / T]
+.DE
+.sp 1
+where Ic is the corrected count rate in a pixel, Io is the
+observed count rate in that pixel, dt is the detector deadtime,
+and T is the observation integration time.
+.PP
+In addition to the photoelectric model, a more accurate model
+is available for the IIDS and is included in COINCOR. This
+model is taken from Goad (1979, SPIE Vol. 172, p. 86) and the correction
+is applied as:
+.sp 1
+.DS L
+ Ic = -ln [1 - Io * t] / t
+.DE
+.sp 1
+where t is the sweep time between pixel samples (t=1.424 msec).
+The IIDS differs from a photomultiplier detector in that
+there is a fixed rate at which each pixel is sampled, due to the
+time required for the dissector to sweep across the image tube
+phosphor, whether or not a photoevent has occurred in a pixel.
+The photomultiplier plus discriminator system
+assumes that once a photoevent has been recorded, the detector is
+dead until a fixed interval has elapsed.
+.sp 1
+.SH
+DISPCOR
+.PP
+If a relation is known linking pixel coordinate to user coordinate
+(i.e. wavelength as a function of pixel number), then any non-linearities
+can be removed by remapping the pixels to a linear wavelength coordinate.
+This procedure, dispersion correction, is complicated by the
+lack of a wavelength-pixel solution which is derived from data simultaneously
+obtained with the object data. Any drifts in the detector then require
+an interpolation among solutions for the solution appropriate to
+the object observations. Depending on the detector, this interpolation
+may be a function of the time of observation, temperature, or some telescope
+parameter such as airmass.
+When multiple solutions are available, DISPCOR will linearly interpolate
+the solution in any available header parameter known to ONEDSPEC (see
+section 3).
+.PP
+Each solution is read from the database file created by the IDENTIFY
+task (in TWODSPEC$LONGSLIT), and the image name leading to that solution
+is also read from the database file. The image is opened to extract
+the header parameter to be used in the above interpolation.
+A null name for the interpolation parameter indicates that none
+is to be used. In this case, one of the options on the "guide"
+parameter should be set to indicate what solution should be used.
+The guide may be "precede", "follow", or "nearest" to select
+the most appropriate choice for each spectrum.
+.PP
+If an explicit wavelength solution is to be used, the parameter
+"reference" may be used to specify the image name of a comparison
+spectrum to be used as the reference for the wavelength solution.
+In this case all spectra will be corrected using a single solution -
+no flexure correction will be applied.
+.PP
+If the parameter to be used for interpolation is a "time-like"
+variable, such as RA, UT, ST, then the variable is discontinuous
+at 24/0 hours. If UT is the chosen parameter (as has been the
+case for IIDS and IRS spectra), the discontinuity occurs at
+5 PM local Kitt Peak time. A comparison spectrum taken at 4:59PM
+(=23:59h UT, =just before dinner), will be treated as an "end of
+the night" observation rather than a beginning of the night
+observation. To circumvent this error, the parameter, "time_wrap",
+can be specified to a time at which a true zero should be assigned.
+For UT at Kitt Peak, a choice like 17h UT (=10AM local, =asleep),
+is an unlikely hour for nighttime observations to be made. Then for
+a given night's observations, 17h UT becomes the new zero point in time.
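+.PP
+In effect the wrap is just a modular shift of the time-like variable; in
+Python terms (an illustration only, with the IIDS/IRS default of 17h used
+as the example zero point):
+.DS L
+def wrap_time(ut, time_wrap=17.0):
+    # hours past the adopted zero point (17h UT in the IIDS/IRS setup)
+    return (ut - time_wrap) % 24.0
+.DE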
+.PP
+Each solution in the database may be any of the forms legal
+to IDENTIFY: legendre, chebyshev, spline3, or spline1 - the form
+is encoded in the database and will automatically be recalled.
+The interpolation in the solution is performed by locating the
+pixel position of each required wavelength in the two
+solutions bounding each observation and linearly interpolating
+between them for the appropriate pixel position. One cannot simply interpolate
+across the coefficients of the solutions to derive a new
+single solution because the solutions may have different forms
+or orders, so that the coefficients may have quite different
+meanings.
+.PP
+Dispersion correction requires that there be equal intervals
+of wavelength between pixels. The wavelength solution
+is of a form describing the wavelength for a given pixel location,
+not a pixel location for a given wavelength. So the solution
+must be inverted.
+.PP
+The inversion to pixel location for wavelength is done in the
+following way: The pixel coordinate in the solution is incremented
+until the desired wavelength is bounded. The pixel value for the
+desired wavelength is obtained by linearly interpolating across these
+two bounding pixel locations. A linear approximation appears to be
+very good for typical solutions, providing proper pixel locations to
+better than 0.01 pixels. An improvement may be obtained by
+increasing the order of the interpolation, but the improvement
+is generally not warranted because the wavelength solutions
+are rarely known to this accuracy. [Note that the use of real
+and not double precision limits the precision of this technique!
+For spectra longer than 50,000 pixels, the errors due to
+the precision of reals can be serious.]
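+.PP
+In outline (an illustration only, not the DISPCOR code), the inversion
+amounts to the following; 'solution(p)' stands for any of the fitted
+dispersion functions, evaluated at (1-indexed) pixel p.
+.DS L
+def pixel_of(wavelength, solution, npix):
+    # Step through the pixels until the requested wavelength is
+    # bracketed, then interpolate linearly between the two bounds.
+    w1 = solution(1.0)
+    for p in range(2, npix + 1):
+        w2 = solution(float(p))
+        if (w1 - wavelength) * (w2 - wavelength) <= 0.0:
+            return (p - 1) + (wavelength - w1) / (w2 - w1)
+        w1 = w2
+    return None        # wavelength falls outside the spectrum
+.DE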
+.PP
+Note that a transformation to
+a wavelength coordinate which is linear in the logarithm of
+wavelength only requires that the inversion occur at wavelengths
+selected by equal increments in the logarithm of wavelength.
+.PP
+During the actual remapping, 5 possible techniques are available.
+Actually there are only two techniques: re-interpolation in 4 flavors,
+and rebinning by partial pixel summation. The re-interpolation
+may be performed with polynomials of order 1 (=linear), 3, or 5,
+or by a cubic spline. The 3rd and 5th order polynomials may introduce
+some ringing in the wings of strong, sharp, features, but the 5th order
+is good at preserving the high frequency component of the data.
+The linear and spline interpolators introduce significant smoothing.
+The rebinning algorithm offers conservation of flux but also smooths
+the data. In fact, rebinning to a coarse grid offers a good smoothing
+algorithm.
+.PP
+At some future date, it would be a good idea to include a "sinc"
+function interpolator in the image interpolator package. This would
+be a little slower to process, but results in very good frequency
+response.
+.PP
+Other options in DISPCOR include "ids_mode" which forces spectra
+from all apertures to a single output mapping (starting wavelength
+and pixel-to-pixel increment), and "cols_out" which forces the output spectra
+to a specified length, zero-filling if necessary.
+.PP
+DISPCOR will correct two-dimensional data by applying the
+remapping to all lines in the image. If the input two-dimensional
+spectrum has only one line, the output spectrum will be written as
+a one-dimensional spectrum.
+.sp 1
+.SH
+EXTINCT
+.PP
+Extinction is currently only available as a script file which drives
+BSWITCH. This is possible by suppressing all options: weighting,
+ids_mode, statistics file, and setting the subset pair size to the
+number of instrument apertures.
+.sp 1
+.SH
+FLATDIV
+.PP
+This task divides the specified spectra by their flat field spectra.
+This is not much more than an "en mass" spectrum divider, with the
+exceptions that the header elements are used to key on the
+aperture number so that the appropriate flat field spectrum is used,
+and that the header processing flags are checked to prevent
+double divisions and subsequently set after the division. Also,
+division by zero is guarded by setting any zeroes in the flat field
+spectrum to 1.0 prior to the division.
+.sp 1
+.SH
+FLATFIT
+.PP
+Pixel-to-pixel variations in the detector response can be removed
+by dividing all observations by a flat field spectrum.
+Flat field spectra are generally obtained by observing a source
+having a continuous energy distribution, such as a tungsten filament
+lamp. This is sometimes called a "quartz" lamp when the enclosing
+glass bulb is made of fused quartz rather than ordinary glass. The quartz
+enclosure transmits ultraviolet light much better than glass.
+.PP
+If the color temperature of the source is very low (or very high, though
+this is extremely unlikely), then a color term would be introduced
+into the data when the flat is divided into the data.
+Large scale variations in the system sensitivity also introduce a
+color term into the flat - the same variations that are introduced into
+any spectrum taken with the system. [Large scale variations are
+evaluated by STANDARD and SENSFUNC, and removed by CALIBRATE.]
+This is not of any particular importance except that counting
+statistics are destroyed by the division.
+.PP
+To preserve the statistics, many find it desirable to divide by a flat
+field spectrum which has been filtered to remove any large scale variations
+but in which the pixel-to-pixel variations have been retained.
+A filtered flat can be obtained by fitting a low order polynomial
+through the spectrum and dividing the spectrum by the polynomial.
+The result is a spectrum normalized to 1.0 and having high frequency
+variations only. If one does not care to preserve the statistics,
+then this procedure is not required. In fact, for certain instruments
+(the IRS), the fitting and normalizing procedure is not recommended
+because some intermediate order curvature can be introduced.
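+.PP
+The normalization step itself can be sketched in a few lines of Python
+(outside IRAF; FLATFIT uses the standard curve fitting package described
+in the next paragraph, and the names below are invented for the example):
+.DS L
+import numpy as np
+from numpy.polynomial import Legendre
+
+def normalize_flat(flat, order=6):
+    # Fit a low order Legendre polynomial to the flat field spectrum
+    # and divide it out, leaving only the pixel-to-pixel variations
+    # about a mean of roughly 1.0.
+    flat = np.asarray(flat, dtype=float)
+    x = np.arange(flat.size, dtype=float)
+    fit = Legendre.fit(x, flat, deg=order)
+    return flat / fit(x)
+.DE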
+.PP
+The purpose of FLATFIT is to find the combination of parameters
+which produces a well flattened flat with a minimum of wiggles.
+The usual curve fitting package is used to fit a function (chebyshev,
+legendre, spline3, spline1) to the flats. Pixel rejection is
+user selectable by a choice of cutoff sigmas, both above and below
+the mean, and an optional growing region [a growing region is the number
+of pixels on either side of a rejected pixel which will also be rejected;
+growing regions are not recommended for most spectral applications].
+Any number of iterations may be used to further reject discrepant
+pixels. The fitting may be performed interactively and controlled by cursor
+keystrokes to select the fitting order, and other fit parameters.
+.PP
+Prior to the fit, the specified spectra are read, optionally corrected
+for coincidence losses, and added to accumulators appropriate to
+their instrument apertures. Each aperture is treated independently,
+except that the interactive fitting mode may be selected to operate
+on the first aperture only, and then apply the same fitting parameters
+to all other aperture accumulations. Or the interactive procedure
+may be selected to operate on all apertures or none.
+.PP
+After the fit has been done, the fit is divided into the accumulation
+and written as a new spectrum having a specified root name and a trailing
+index indicating the aperture.
+.sp 1
+.SH
+IDENTIFY
+.PP
+This task (written by Frank Valdes) is used to identify features
+in the comparison arcs to be used in the solution for a wavelength calibration.
+The solution is performed interactively for at least one spectrum
+and then optionally in a batch mode using REIDENTIFY.
+IDENTIFY writes to a database file which will contain the solutions
+generated from each input comparison spectrum. The database is
+later used by DISPCOR to correct spectra according to the solution.
+.sp 1
+.SH
+IIDS
+.PP
+This script file initializes several hidden parameters in a
+variety of tasks to values appropriate to the IIDS instrument.
+There is also a script for the IRS. There should probably be
+a script for resetting the parameters to a default instrument.
+These parameters are:
+.RS
+.IP 1.
+onedspec.calib_file - the package parameter indicating which file
+should be used for standard star calibration data and the atmospheric
+extinction table (=onedspec$iids.cl.)
+.IP 2.
+addsets.subset - the number of instrument apertures (=2).
+.IP 3.
+bswitch.ids_mode - assume and check for data taken in beam-switched
+quadruple mode (=yes).
+.IP 4.
+coincor.ccmode - coincidence correction model (=iids).
+.IP 5.
+coincor.deadtime - detector deadtime (=1.424e-3 seconds)
+.IP 6.
+dispcor.flex_par - the name of the parameter to be used as the
+guide to removing flexure during the observations (=ut).
+.IP 7.
+dispcor.time_wrap - the zero point to be adopted for the
+flexure parameter if it is a time-like variable having a discontinuity
+at 0/24 hours (=17).
+.IP 8.
+dispcor.idsmode - should data from all instrument apertures be dispersion
+corrected to a uniform wavelength scale? (=yes).
+.IP 9.
+dispcor.cols_out - the number of columns (row length of the spectrum)
+to which the output corrected spectrum should be forced during
+mapping (=1024).
+.IP 10.
+extinct.nr_aps - the number of instrument apertures (=2).
+.IP 11.
+flatfit.order - the order of the fit to be used when fitting to
+the flat field spectra (=6).
+.IP 12.
+flatfit.coincor - apply coincidence correction to the flat field
+spectra during accumulations (=yes).
+.IP 13.
+flatdiv.coincor - apply coincidence correction to all spectra during
+the flat field division process (=yes).
+.IP 14.
+identify.function - the fitting function to be used during the wavelength
+solution process (=chebyshev).
+.IP 15.
+identify.order - the order of the fit to be used during the wavelength
+solution process (=6).
+.RE
+.sp 1
+.SH
+IRS
+.PP
+This script file initializes several hidden parameters in a
+variety of tasks to values appropriate to the IRS instrument.
+These parameters are:
+.RS
+.IP 1.
+onedspec.calib_file - the package parameter indicating which file
+should be used for standard star calibration data and the atmospheric
+extinction table (=onedspec$irs.cl.)
+.IP 2.
+addsets.subset - the number of instrument apertures (=2).
+.IP 3.
+bswitch.ids_mode - assume and check for data taken in beam-switched
+quadruple mode (=yes).
+.IP 4.
+coincor.ccmode - coincidence correction model (=iids).
+.IP 5.
+coincor.deadtime - detector deadtime (=1.424e-3 seconds)
+.IP 6.
+dispcor.flex_par - the name of the parameter to be used as the
+guide to removing flexure during the observations (=ut).
+.IP 7.
+dispcor.time_wrap - the zero point to be adopted for the
+flexure parameter if it is a time-like variable having a discontinuity
+at 0/24 hours (=17).
+.IP 8.
+dispcor.idsmode - should data from all instrument apertures be dispersion
+corrected to a uniform wavelength scale? (=yes).
+.IP 9.
+dispcor.cols_out - the number of columns (row length of the spectrum)
+to which the output corrected spectrum should be forced during
+mapping (=1024).
+.IP 10.
+extinct.nr_aps - the number of instrument apertures (=2).
+.IP 11.
+flatfit.order - the order of the fit to be used when fitting to
+the flat field spectra. IRS users have frequently found that
+any curvature in the fit introduces wiggles in the resulting
+calibrations and a straight divide by the flat normalized to the
+mean works best (=1).
+.IP 12.
+flatfit.coincor - apply coincidence correction to the flat field
+spectra during accumulations (=no).
+.IP 13.
+flatdiv.coincor - apply coincidence correction to all spectra during
+the flat field division process (=no).
+.IP 14.
+identify.function - the fitting function to be used during the wavelength
+solution process (=chebyshev).
+.IP 15.
+identify.order - the order of the fit to be used during the wavelength
+solution process. The IRS has strong deviations from linearity
+in the dispersion and a fairly high order is required to correct
+the dispersion solution (=8).
+.RE
+.sp 1
+.SH
+ONEDUTIL
+.PP
+This is a group of utility operators for the ONEDSPEC package. They
+are documented separately after the ONEDSPEC operators. ONEDUTIL
+is a "pseudo-package" - it acts like a package under ONEDSPEC, but
+many of its logical tasks are physically a part of ONEDSPEC. This
+is done to minimize disk storage requirements, and to logically
+separate some of the functions from the main ONEDSPEC menu which
+was getting too large to visually handle.
+.sp 1
+.SH
+PROCESS
+.PP
+This task generally does not exist until the user executes the
+script task BATCHRED which creates PROCESS.CL, a secondary script
+file containing a CL command stream to batch process spectra.
+The task is defined so that the CL is aware of its potential
+existence. It is not declared as a hidden task so that the
+user is also aware of its existence and may execute PROCESS
+in the foreground or background.
+.sp 1
+.SH
+REIDENTIFY
+.PP
+This task (written by Frank Valdes) is intended to be used after
+IDENTIFY has been executed. Once a wavelength solution has been
+found for one comparison spectrum, it may be used as a starting point
+for subsequent spectra having similar wavelength characteristics.
+REIDENTIFY provides a batch-like means of performing wavelength solutions
+for many spectra. The output solution is directed to a database text file
+used by DISPCOR.
+.sp 1
+.SH
+SENSFUNC
+.PP
+This task solves for the system sensitivity function across
+the wavelength region of the spectra by comparison of observations
+of standard stars to their (assumed) known energy distribution.
+Each instrument aperture is treated completely independently
+with one exception discussed later. SENSFUNC is probably the
+largest task in the ONEDSPEC package due to heavy use of
+interactive graphics which represents more than half of the
+actual coding.
+.PP
+Input to SENSFUNC is the "std" text file produced by STANDARD
+containing the ratio of the count rate adjusted for atmospheric extinction
+to the flux of the star in ergs/cm2/s/Angstrom. Both the count rates and
+fluxes are the average values in the pre-defined bandpasses tabulated
+in the calibration file (indicated by the parameter onedspec.calib_file).
+.PP
+Each entry in the "std" file may have an independent set of wavelength sampling
+points. After all entries have been loaded, a table containing all sampled
+wavelengths is built (a "composite" wavelength table) and all sensitivity
+values are reinterpolated onto this sampling grid. This allows the inclusion
+of standards in which the observational samples are not uniform.
+.PP
+When multiple measurements are available, one of two corrections may
+be applied to the data to account for either clouds or an additive extinction
+term. The effect of clouds is assumed to be grey. Each contributing
+observation is compared to the one producing the highest count rate ratio
+at each wavelength sample. The deviation averaged over all wavelengths
+for a given observation is derived and added back to
+each wavelength sample for that observation. This produces a shift
+(in magnitudes) which, on the average across the spectrum, accounts
+for an extinction due to clouds. This process is called "fudge"
+primarily for historical reasons (from the IPPS, R.I.P.) and also
+because there is questionable justification to apply this correction.
+One reason is so that one can better assess the errors
+in the data after a zero-point correction has been made.
+Another is that the sensitivity function is that closest to a cloud-free
+sky so that calibrations may approach a true flux system if one
+standard was observed during relatively clear conditions.
+Also, there are claims that the "color solution" is improved by "fudging", but
+I admit that I don't fully understand this argument.
+.PP
+[Perhaps it goes as follows:
+Although a grey scale correction is applied to each observation,
+a color term is introduced in the overall solution. Consider the
+case where 5 magnitudes of cloud extinction obscure one standard
+relative to another. This star generates a sensitivity curve which
+is a factor of 100 smaller. When averaged with the other curve,
+any variations are lost, and the net curve will be
+very similar to the first curve divided by 2. Now apply a "fudge"
+of 5 magnitudes to the second curve. On the average, both curves have
+similar amplitudes, so variations in the second now influence the
+average. The net curve then has color dependent variations not
+in the "un-fudged" net curve. If we assume that the variations in
+the individual observations are not systematic, then "fudge" will
+improve the net color solution. Amazing, isn't it?
+End of hypothesis.]
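+.PP
+As an illustration only, the "fudge" step described above might be
+sketched in Python-like notation as follows (this is not the actual
+SENSFUNC code, and the array names are hypothetical):
+.sp 1
+.DS L
+    import numpy as np
+
+    def fudge(sens_mag):
+        # sens_mag[obs, wave]: sensitivities in magnitudes, all observations
+        # interpolated onto the composite wavelength grid, with larger
+        # values meaning a higher count rate ratio (an assumption here).
+        reference = sens_mag.max(axis=0)              # brightest at each wavelength
+        shifts = (reference - sens_mag).mean(axis=1)  # mean deviation per observation
+        return sens_mag + shifts[:, np.newaxis], shifts
+.DE
+.sp 1
+Each observation is thus shifted by a single grey magnitude offset so
+that, averaged over all wavelengths, it matches the least obscured
+observation.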
+.PP
+The second form of correction is much more justifiable. In ONEDSPEC
+it is referred to as a "grey shift" and accounts for possible
+changes in the standard atmospheric extinction model due to
+a constant offset. SENSFUNC will optionally solve for this constant
+provided the observations sample a range of airmass values.
+The constant is computed in terms of magnitudes per airmass, so
+if the airmass range is small, then a large error is likely.
+To solve for this value, a list of pairs of delta magnitude (from the
+observation having the greatest sensitivity) as a function of
+delta airmass (relative to the same observation) is generated
+for all observations. The list is fit using a least squares solution
+of the form:
+.sp 1
+.DS L
+ delta_mag = delta_airmass * grey_shift
+.DE
+.sp 1
+Note that this is a restricted least-squares in the sense that there
+is no zero-point term. The standard curve fit package in IRAF
+does not support this option and the code to perform this is included
+in SENSFUNC.
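+.PP
+Because the fit has only the single slope term, it reduces to a ratio of
+sums. A minimal sketch in Python notation (not the SENSFUNC code; the
+names are illustrative):
+.sp 1
+.DS L
+    import numpy as np
+
+    def grey_shift(delta_airmass, delta_mag):
+        # Restricted least squares: delta_mag = delta_airmass * grey_shift,
+        # with no zero-point term; result is in magnitudes per airmass.
+        da = np.asarray(delta_airmass, dtype=float)
+        dm = np.asarray(delta_mag, dtype=float)
+        return (da * dm).sum() / (da * da).sum()
+.DE
+.sp 1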
+.PP
+Because the atmosphere is likely to be the same one for observations
+with each instrument aperture, it is not appropriate to limit
+the least-squares solution to the individual apertures, but rather
+to combine all the data to improve the solution. This would mean
+that the user could not view the effects of applying the grey term
+until all apertures had been analyzed. So, although each aperture is
+solved independently to derive a preliminary value, a final value is
+computed at the end when all data have been reviewed. This is the
+one exception to the independent aperture equals independent
+instrument philosophy.
+.PP
+When "fudging" is applied, the sensitivity function that is generated
+is altered to account for the shifts to the observations. But when
+the "grey shift" is computed, it cannot be directly applied to
+the sensitivity function because it must be modified by the
+observing airmass for each individual object. So the grey shift
+constant is written into the image headers of the generated
+sensitivity functions (which are IRAF images), and also placed
+into the task parameter "add_const" to be used later by BSWITCH.
+.PP
+SENSFUNC can be run in an interactive mode to allow editing
+of the sensitivity data. There are two phases of interaction:
+(1) a review of the individual observations in which every data
+element can be considered and edited, and (2) a review of the
+composite sensitivity table and the calculated fit to the table.
+In the interactive mode, both phases are executed for every instrument
+aperture.
+.PP
+At both phases of the interactive modes there will be a plot of the
+error in the input values for each wavelength. This is an RMS
+error. [The IPPS plotted standard error which is always a smaller number
+and represents the error in the mean; the RMS represents the error
+in the sample. I'm not sure which is better to use, but RMS is easier
+to understand. RMS is the same as the standard deviation.]
+During phase one, the rms is computed as the standard deviation of
+the sensitivity in magnitudes; but during phase two, it is computed
+as the standard deviation in raw numbers
+and then converted to a magnitude equivalent. The latter is more
+correct but both converge for small errors.
+.PP
+There is one option in SENSFUNC which has never been tried and does not
+work - the option to enter a predefined table of sensitivities as
+a function of wavelength from a simple text file. This option may
+be useful at some time and should probably be fixed. I think the
+only problem with it is a lack of consistency in the units.
+.PP
+An additional option has been requested but it is not clear that it
+is a high priority item - the ability to compute the extinction
+function. There may be instances when the mean extinction table
+is not appropriate, or is not known. If sufficient data are
+available (many observations of high precision over a range of airmasses
+during a photometric night), then the extinction function is
+calculable. Presently SENSFUNC can only compute a constant offset to
+the extinction function, but the same algorithm used may be applied
+at each wavelength for which observations are made to compute a
+correction to an adopted extinction function (which may be zero),
+and the correction can then be written out to the revised extinction
+table file. This file will then be read by BSWITCH during the
+extinction correction process.
+So at each wavelength, pairs of delta magnitude as a function of
+delta airmass are tabulated and fit as above:
+.sp 1
+.DS L
+ delta_mag[lambda] = delta_airmass * delta_extinction[lambda]
+.DE
+.sp 1
+Because the data have been heavily subdivided into wavelength bins,
+there are only a few measurements available for solving this
+least-squares problem and the uncertainties are large unless many
+observations have been taken. Experience has shown that at least
+7-8 measurements are needed to come close, and 15 measurements are
+about the minimum to get a good solution. Unless the data are of
+high quality, the uncertainty in the solution is comparable to
+the error in assuming a constant offset to the mean extinction function.
+Nevertheless, the option should be installed at some time since
+some observers do obtain the necessary data.
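+.PP
+The proposed extension would simply repeat the same one-parameter fit in
+each wavelength bin. A sketch in Python notation (hypothetical, not
+existing code):
+.sp 1
+.DS L
+    import numpy as np
+
+    def extinction_corrections(delta_airmass, delta_mag):
+        # delta_airmass[obs] and delta_mag[obs, wave] relative to the
+        # most sensitive observation; returns delta_extinction[wave].
+        da = np.asarray(delta_airmass, dtype=float)
+        dm = np.asarray(delta_mag, dtype=float)
+        return (da[:, np.newaxis] * dm).sum(axis=0) / (da * da).sum()
+.DE
+.sp 1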
+.sp 1
+.SH
+SLIST
+.PP
+The spectrum specific header elements are listed in either a short
+or long form. See the discussion on headers (section 3) for an explanation
+of the terms. Values for airmass are printed if present in the header;
+otherwise, the value is given as the string "?????" to indicate no
+value present (even if one can be calculated from the telescope
+pointing information elsewhere in the header).
+.PP
+The short form header lists only the image name, whether it is
+an object or sky observation, the spectrum length, and the title.
+.sp 1
+.SH
+SPLOT
+.PP
+This is probably the second largest task in the ONEDSPEC package. It continues
+to grow as users provide suggestions for enhancement, although
+the growth rate appears to be slowing. SPLOT is an interactive
+plot program with spectroscopy in mind, although it can be used
+to plot two dimensional images as well.
+.PP
+SPLOT should still be considered a prototype - many of the algorithms
+used in the analysis functions are crude, provided as interim
+software to get results from the data until a more elaborate package
+is written. It would probably be best to create an analysis specific
+package - SPLOT is reasonably general, and to enhance it further
+would complicate the keystroke sequences.
+.PP
+Ideally it should be possible to do anything to a spectrum with
+a single keystroke. In reality, several keystrokes are required.
+And after 15 or 20 functions have been installed, the keystroke
+nomenclature becomes obscure - all the best keys are used up, and
+you have to resort to things like '(' which is rather less
+mnemonic than a letter. So some of the functionality in SPLOT
+has been assigned to the "function" submenu invoked by the 'f' keystroke
+and exited by 'q'. These include the arithmetic operators:
+add or multiply by a constant; add, subtract, multiply, or divide by
+a spectrum; and the logarithm, square root, inverse, and absolute
+value of a spectrum.
+.PP
+Some of the analysis functions include: equivalent width, line centers,
+flux integration under a line, smoothing, spectrum flattening,
+and deblending of lines.
+.PP
+The deblender has serious limitations but handles about half the
+cases that IIDS/IRS users are interested in. It fits only
+Gaussian models to the blends, and only a single width parameter.
+The fit is a non-linear least-squares problem, so starting values
+present some difficulties. All starting values are initialized to 1.0 -
+this includes the width, relative strengths of the lines, and deviation
+from initial marked centers. The iterative solution usually converges
+for high signal-to-noise data, but may go astray, resulting in
+a numerical abort for noisy data. If this occurs, it is often
+possible to find a solution by fitting to a single strong line
+to force a better approximation to the starting values, and then refit
+the blend of interest.
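+.PP
+A sketch of the blend model being fit (Python notation, illustrative
+only) shows how a single width parameter is shared by all components:
+.sp 1
+.DS L
+    import numpy as np
+
+    def blend_model(w, strengths, dcenters, sigma, centers0, continuum):
+        # Sum of Gaussian lines sharing one width (sigma); centers0 are
+        # the user-marked centers, dcenters the fitted deviations.
+        model = np.full_like(w, continuum, dtype=float)
+        for s, dc, c0 in zip(strengths, dcenters, centers0):
+            model += s * np.exp(-0.5 * ((w - (c0 + dc)) / sigma) ** 2)
+        return model
+
+    # Starting values, as described above: strengths, dcenters, and
+    # sigma all begin at 1.0.
+.DE
+.sp 1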
+.PP
+The non-linear least-squares routine is one obtained from an industrial
+source. The code is very poorly written and in FORTRAN. No one should
+attempt to understand it. The basic algorithm is an unconstrained simplex
+minimization search combined with a parabolic linear least-squares approximation
+when in the region of a local minimum.
+A test was made comparing this to the algorithm in Bevington, and the
+Bevington algorithm appeared less likely to converge on noisy data.
+Only one test case was used, so this is hardly a fair benchmark.
+.PP
+The problem with non-convergence is that a floating point error is
+almost sure to arise. This is usually a floating point over/under
+flow while computing an exponential (as required for a Gaussian).
+In UNIX, there is apparently no easy way to discriminate from
+FORTRAN which floating point exception has occurred, and so there
+is no easy way to execute a fix up and continue. This is most
+unfortunate because the nature of these non-linear techniques is
+that given a chance, they will often recover from searching
+down the wrong alley. A VMS version of the same routines seems to
+survive the worst data because the error recovery is handled
+somewhat better. [The VMS version also seems to run much faster,
+presumably because the floating point library support is better
+optimized.]
+.PP
+The net result of all this is that a weird undocumented subroutine
+is used which provides no error estimate. The Bevington routines
+do provide an error estimate which is why I wanted to use them.
+[In fact, there is no way to exactly compute the errors in the
+fit of a non-linear least-squares fit. One can however apply
+an approximation theory which assumes the hypersurface can be
+treated locally as a linear function.]
+.PP
+There are several methods for computing equivalent widths in SPLOT.
+The first method for measuring equivalent width is simply to integrate the
+flux above/under a user defined continuum level. Partial pixels
+are considered at the marked endpoints. A correction for the pixel size,
+in Angstroms, is applied because the units of equivalent width are Angstroms.
+You will probably get a different answer when doing equivalent
+width measurements in channel mode ('$' keystroke) as compared to
+wavelength mode ('p').
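+.PP
+The first method amounts to the following (a Python-notation sketch,
+assuming an absorption feature below the continuum; not the SPLOT code):
+.sp 1
+.DS L
+    import math
+
+    def equivalent_width(flux, continuum, x1, x2, wpc):
+        # Integrate (1 - I/Ic) between fractional pixel limits x1..x2,
+        # counting partial pixels at the endpoints; wpc (Angstroms per
+        # pixel) converts the result to Angstroms.
+        ew = 0.0
+        for i in range(int(math.floor(x1)), int(math.ceil(x2))):
+            frac = min(i + 1, x2) - max(i, x1)    # fraction of pixel inside
+            ew += frac * (1.0 - flux[i] / continuum)
+        return ew * wpc
+.DE
+.sp 1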
+.PP
+Centering is performed as a weighted first moment of the region:
+.sp 1
+.DS L
+ int1 = integral [ (I-Ic) * sqrt (I-Ic) * w]
+ int2 = integral [ (I-Ic) * sqrt (I-Ic) ]
+ xc = int1 / int2
+.DE
+.sp 1
+where I is the intensity at the pixel at wavelength w, and Ic is
+the estimated continuum intensity. The square root term provides
+the weighting assuming photon statistics [sigma = sqrt(I)], and xc
+is the derived center of the region.
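+.PP
+A direct transcription of this formula (Python notation, illustrative
+only):
+.sp 1
+.DS L
+    import numpy as np
+
+    def line_center(w, intensity, continuum):
+        # Weighted first moment; sqrt(I - Ic) is the photon-statistics
+        # weight.  Negative residuals are clipped to zero here as a guard.
+        excess = np.clip(np.asarray(intensity, dtype=float) - continuum, 0.0, None)
+        weight = excess * np.sqrt(excess)
+        return (weight * np.asarray(w)).sum() / weight.sum()
+.DE
+.sp 1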
+.PP
+An alternative method for equivalent widths was supplied by Caty
+Pilachowski and is described in some detail in the help file for
+SPLOT. This method is fast and insensitive to cursor settings, so
+the user can really zip through a spectrum quickly.
+.PP
+Smoothing is performed using a simple boxcar smooth of user specified
+size (in pixels). To handle edge effects, the boxcar size is
+dynamically reduced as the edge is approached, thereby reducing
+the smoothing size in those regions.
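+.PP
+A sketch of the edge handling (Python notation, not the SPLOT code):
+.sp 1
+.DS L
+    import numpy as np
+
+    def boxcar_smooth(spectrum, size):
+        # Boxcar of the given size in pixels; the half-width is shrunk
+        # near the ends so the box never extends past the spectrum.
+        spectrum = np.asarray(spectrum, dtype=float)
+        half = size // 2
+        n = len(spectrum)
+        out = np.empty(n)
+        for i in range(n):
+            k = min(half, i, n - 1 - i)
+            out[i] = spectrum[i - k:i + k + 1].mean()
+        return out
+.DE
+.sp 1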
+.PP
+The flattening operator is a preliminary one, written before the
+curve fitting package was available in IRAF. This operator
+should probably be re-written to include the interactive
+style used in FLATFIT. Currently the flattening is done
+using classic polynomial least-squares with pixel rejection
+chosen to preferentially reject absorption lines and strong
+emission lines. The rejection process is repeated through
+a number of iterations specifiable as a hidden parameter to SPLOT.
+This is poorly done - the order of the fit and the number of
+iterations should be controllable while in SPLOT. However,
+experimentation has shown that for a given series of spectra,
+the combination of rejection criteria, order, and iteration count
+which works well on one spectrum will generally work well
+on the other spectra. Note that the flatten operator attempts to
+find a continuum level and normalize to that continuum, not to the
+average value of the spectrum.
+.PP
+There are also the usual host of support operators - expansion,
+overplotting, and so forth. There is also a pixel modifier mode
+which connects two cursor positions. This forces a replot of the entire
+spectrum after each pair of points has been entered. This should
+probably be changed to inhibit auto-replot.
+.PP
+Some users have requested that all two cursor operators allow
+an option to escape from the second setting in case the wrong
+key was typed. I think this is a good idea, and might be implemented
+using the "esc" key (although I could not seem to get this keystroke
+through the GIO interface).
+.PP
+Another user request is the option to overplot many spectra with
+autoscaling operational on the entire range. This is also a good
+idea. Yet another improvement could be made by allowing the user
+to specify the x and y range of the plot, rather than autoscaling.
+.PP
+There is one serious problem with respect to plotting spectra
+corrected to a logarithmic wavelength scale. It would be nice to
+plot these spectra using the logarithmic axis option, but this
+option in GIO requires that at least one entire decade of x axis
+be plotted. So for optical data, the x axis runs from 1000 Angstroms
+to 10,000 Angstroms. Imagine a high dispersion plot having only 100
+Angstroms of coverage - the plot will look like a delta function!
+The current version of SPLOT uses a linear axis but plots in
+the log10 of wavelength. Not very good, is it.
+.sp 1
+.SH
+STANDARD
+.PP
+This task computes the sensitivity factor of the instrument
+at each wavelength for which an a priori measured flux value is known
+and within the wavelength range of the observations.
+Sensitivity is defined as
+[average counts/sec/Angstrom]/[average ergs/cm2/sec/Angstrom]
+over the specified bandpass for which the star has been measured.
+Both numerator and denominator refer to quantities above the
+Earth's atmosphere and so the count rates must be corrected for
+extinction.
+The wavelengths of known measurements, the bandpasses, the
+fluxes (in magnitudes), and the mean extinction table
+are read from a calibration file whose name is specified
+by the calib_file parameter (see LCALIB for a description of this
+file). If a magnitude is exactly 0.0, it is assumed
+that no magnitude is known for this star at the wavelength
+having a 0.0 magnitude. This allows entries having incomplete
+information.
+.PP
+As each observation is read, it is added into an accumulator for
+its aperture, or subtracted if it is a sky measurement. After
+a pair of object and sky observations have been added, the
+difference is corrected for extinction (as in BSWITCH), converted
+to counts per second, and integrations performed over the bandpasses
+for which flux measures are known. The bandpasses must be completely
+contained within the spectrum - partial coverage of a bandpass
+disqualifies it from consideration. The integrations are compared
+with the known flux values and the ratio is written to a text
+file (the "std" file) along with the wavelength of the measurement
+and the total counts in the bandpass. The total counts value may
+be used by SENSFUNC for weighting the measurements during averaging.
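+.PP
+Schematically, each bandpass measurement reduces to something like the
+following (Python notation; the argument names are hypothetical, and the
+actual task works from the image header and calibration file):
+.sp 1
+.DS L
+    def bandpass_ratio(counts, w0, wpc, exptime, center, width, std_flux):
+        # counts: extinction-corrected counts per pixel; std_flux: the
+        # tabulated flux (ergs/cm2/s/A) averaged over the bandpass.
+        lo = (center - width / 2.0 - w0) / wpc
+        hi = (center + width / 2.0 - w0) / wpc
+        if lo < 0 or hi > len(counts) - 1:
+            return None                  # partial coverage disqualifies it
+        total = sum(counts[int(round(lo)):int(round(hi)) + 1])
+        rate = total / exptime / width   # counts/sec/Angstrom
+        return rate / std_flux, total    # ratio and total counts for "std"
+.DE
+.sp 1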
+.PP
+Many users are surprised by the order of the spectral names
+printed out as STANDARD executes since the order is not necessarily
+ascending through the spectrum list. This is because the name
+printed is the name of the object spectrum most recently associated
+with an object-sky pair. So if a sky pair is several spectra down the
+list, an intervening object-sky pair taken through a different
+instrument aperture may be processed in the meantime.
+For example, say spectra 1-8 are taken so that object spectra
+numbers 1 and 7 and sky spectra 3 and 5 are taken through aperture 0,
+object spectra 4 and 6 and sky spectra 2 and 8 are taken through
+aperture 1. [This is a very common pattern for IIDS/IRS users.]
+Then spectrum 1 and 3 will pair up and be processed first (spectrum
+name 1 will be printed). Then 4 and 2 (name 4 printed), then
+7 and 5 (name 7 printed), and then 6 and 8 (name 6 printed).
+So the order of names printed will be 1,4,7,6. Simple, isn't it?
+.PP
+If the input spectra are not taken in a beam-switched mode
+then the parameter "beam_switch" should be set to no.
+Then no sky subtraction will be attempted.
+.PP
+The user may enter sensitivity values directly into a file and use
+it as the "std" file for a correction.
+See the help file for STANDARD for a description of the entries in
+the file and a typical example.
+.PP
+STANDARD offers a limited interactive mode. The first sky subtracted
+spectrum is displayed and the bandpasses at which sensitivity
+measurements are made will be shown as boxes. This provides a means
+to see where the measurements are falling on the observational
+data and to assess whether a bandpass may be including some
+absorption edge which may be affecting the measurement. While it
+is true that the wavelengths of the reference measurements should
+fall in the same place, the effects of instrument resolution and
+inaccuracies in the wavelength calibration may shift the positions
+of the apparent bandpasses. The samples may then be biased.
+.PP
+The second purpose of the interactive mode is to allow the user
+to artificially create new bandpasses on the fly. By placing the
+cursor to bound a new wavelength region, STANDARD will interpolate
+in the magnitude table of the reference star to estimate the magnitude
+of the star at the bounded wavelength. The sensitivity will be calculated
+at that wavelength just as if the bandpass had come from the calibration
+file. This option should be exercised with care. Obviously, points
+should not be generated between reference wavelengths falling on
+strong absorption lines, or on a line either. This option is most useful
+when at a high dispersion and few samples happen to fall in the
+limited wavelength region. Sufficient space is allocated for 10
+artificial samples to be inserted. Once the artificial bandpasses
+have been designated, they are applied to the entire sequence of
+spectra for the current invocation of STANDARD. Once STANDARD
+completes, the added bandpasses are forgotten. This prevents
+an accidental usage of newly created bandpasses on stars of different
+spectral types where a bandpass may fall in a region of continuum
+for one star, but on an absorption line in another.
+.sp 1
+.SH
+SUBSETS
+.PP
+This is a simple task to subtract the second spectrum from the
+first in a series of spectra. So if spectra 1-10 are input,
+5 new spectra will be created from 1 minus 2, 3 minus 4, and so on.
+This is a straight subtraction, pixel for pixel, with no
+compensation for exposure time differences.
+The header from the first spectrum of the pair is applied to the
+output spectrum.
+.sp 1
+.SH
+The ONEDUTIL tasks
+.PP
+These utility tasks are logically separated from the ONEDSPEC
+package.
+.sp 1
+.SH
+COEFS
+.PP
+This task reads the header parameters contained in comparison arc spectra
+describing the wavelength solution generated by the mountain reduction
+program and re-writes the solution parameters into a database
+text file for use by DISPCOR. Otherwise those solutions would be
+lost. COEFS assumes that the coefficients represent a Legendre
+polynomial which is what the mountain reduction programs use.
+.sp 1
+.SH
+COMBINE
+.PP
+When an object has been observed over a wide range of wavelength
+coverage by using more than one instrumental setup (such as
+a blue and a red setting) or with different instruments (such
+as IUE and the IRS), it is often desirable to combine the
+spectra into a single spectrum. COMBINE will rebin a group of
+spectra to new spectra having a single dispersion and average the
+new spectra to create a single long spectrum.
+If there are gaps in the composite spectrum, zeroes are used
+as fillers. Ideally those pixels which have no known value
+should be considered blank pixels. IRAF does not currently
+support blank pixels, so zeroes are used for now. [One
+might suggest using INDEF, but then all other routines will
+have to check for this value.]
+A side effect of choosing 0.0 is that during the averaging
+of overlapping spectra, a true 0.0 will be ignored by COMBINE.
+The basic rebinning algorithms used in DISPCOR are used in COMBINE
+(and also REBIN).
+.PP
+The averaging can be weighted by exposure time, or by user assigned weights.
+It would be better if each spectrum had an associated vector of
+weights (one weight at each wavelength) so that the weighted averaging
+could be done on a pixel basis. This is very expensive in terms
+of both storage and file access overhead since each spectrum would
+require twice the storage and number of files.
+[Actually weights could be small 4 bit integers and take up very little space.]
+.PP
+A less ideal alternative would be to place a small number
+(about 16) of weight parameters
+in the header file which represent the approximate weights of that many
+regions of the spectrum, and then one could interpolate in these parameters
+for a weight appropriate to the pixel of interest.
+.PP
+A third solution (and even less ideal)
+is to place a single parameter in the header which
+represents an average weight of the entire spectrum. For the latter two cases,
+the header weights could be derived from the average counts per
+wavelength region - the region being the entire spectrum in the last case.
+The weights must be entered into the header during the BSWITCH operation
+since that is the last time that true counts are seen. [An implicit
+assumption is that counts are proportional to photons. If data from
+two different instruments are to be averaged, then the weights should be
+expressed in photons because the ratio of counts to photons is highly
+instrument dependent.]
+.PP
+COMBINE suffers from a partial pixel problem at the end points.
+Interpolation at the ends can lead to an underestimate of the flux
+in the last pixels because the final pixel is not filled. When averaging
+in data from another spectrum or instrument, these pixels show up
+as sharp drops in the spectrum. The problem appears due to the
+rebinning algorithm and should be corrected someday (also in DISPCOR
+and REBIN).
+.sp 1
+.SH
+LCALIB
+.PP
+This utility provides a means of checking the calibration files
+containing the standard star fluxes and extinction table.
+Any of the entries in the file may be listed out - the bandpasses,
+extinction, standard star names, standard star fluxes in either
+magnitudes, flambda, or fnu. For a description of the calibration
+file format, see the help documentation for LCALIB.
+.PP
+The primary uses for LCALIB are to verify that new entries in
+the tables are correct, to generate a list of standard star names
+in a calibration file, and to produce a table of fluxes for a given standard
+star. The table may then be used to generate a spectrum over a specified
+wavelength region using SINTERP and overplotted with observational
+data to check the accuracy of the reductions.
+.sp 1
+.SH
+MKSPEC
+.PP
+MKSPEC provides a way to generate a limited set of artificial
+spectra. Noise generation is not available. The current options
+are to generate a spectrum which is either a constant, a ramp,
+or a black body. The spectrum may be two dimensional, but
+all image lines will be the same.
+.sp 1
+.SH
+NAMES
+.PP
+This is the simplest task in the ONEDSPEC package. It
+generates the image file names which are implied by a
+root name and record string. The primary use for this
+task is to generate a list of image names to be used
+as input for some other program such as WFITS.
+The output from NAMES can be redirected to file
+and that file used with the "@file" notation for image
+name input. An optional parameter allows an additional
+string to be appended to the generated file name
+to allow a subraster specification.
+.sp 1
+.SH
+REBIN
+.PP
+Spectra are rebinned to the wavelength parameters specified
+by either matching to a reference spectrum or by user input.
+The algorithms are those used by DISPCOR and the same options
+for the interpolation method are available. REBIN is useful
+when data are obtained with different instruments or setups
+producing roughly comparable wavelength ranges and possibly
+different dispersions, and the data are to be compared.
+REBIN may also be used as a shift operator by specifying a
+new starting wavelength. Or it may be used as a smoothing operator
+by specifying a coarse dispersion. It may also be used
+to convert between the two formats - linear in wavelength and
+linear in the logarithm of wavelength. This latter option has
+not been thoroughly exercised - proceed with caution.
+.sp 1
+.SH
+RIDSMTN
+.PP
+This task was stolen from the DATAIO package to make the following
+modification: IIDS and IRS data are both written as 1024 pixel
+spectra at the mountain. But the detectors do not produce a full
+1024 pixels of acceptable data. In fact the IRS only has 936 pixels.
+The data are written this way to conform to the IIDS ideal spectrum
+which does have 1024 pixels, but the first few (about 6) are not usable.
+To signal the good pixels, the IIDS/IRS header words NP1 and NP2 are
+set to the beginning and ending good pixels. Actually NP1 points to
+the first good pixel minus one. [Really actually NP1 and NP2 may be reversed,
+but one is big and the other small so you can tell them apart.]
+.PP
+The version of RIDSMTN in ONEDUTIL keys off these parameters and writes
+images containing only good pixels which means that the images will be
+smaller than 1024 pixels. The user has the option of overriding the
+header values with the task parameters "np1" and "np2". These may be
+specified as 1 and 1024 to capture the entire set of pixels written to
+tape or any other subset. Beware that np1 and np2 as task parameters
+refer to the starting pixel and ending pixel respectively. None of this
+nonsense about possible role reversals or "first good minus one" is
+perpetuated.
+.sp 1
+.SH
+SINTERP
+.PP
+I think this is a handy little program. It provides a way to make
+an IRAF spectral image from a table of values in a text file.
+The table is interpolated out to any length and at any sampling
+rate. A user can create a table of corrections to be applied to
+a set of spectra, for example, use SINTERP to build a spectrum,
+and run CALIBRATE to multiply a group of spectra by the correction.
+.PP
+The original raison d'etre for SINTERP was to create spectra of
+standard stars from the listing of fluxes generated by LCALIB.
+Using SPLOT the created spectrum can be overplotted with calibrated
+observations to compare the true tabulated fluxes with the observed
+fluxes.
+.PP
+SINTERP grew out of the task INTERP in the UTILITIES package
+and works pretty much the same way. One major change is that
+the table containing the x-y pairs is now stored in a dynamically
+allocated array and can be as large as the user requests. The
+default size is 1024 pairs, but the parameter tbl_size can
+be set to a larger value. This then allows one to create a spectrum
+from its tabulated values of wavelength and flux even if
+the table is several thousand elements long.
+Note that the option to route the output from INTERP to
+STDOUT has been retained if a new table is to be generated rather
+than an IRAF image.
+.PP
+Another major change from INTERP is the use of the IRAF curve fitting
+routines as an option. These were not originally available.
+The choices now include linear or curved interpolators, Legendre
+or Chebyshev polynomial fits, and cubic or linear splines.
+.sp 1
+.SH
+WIDSTAPE
+.PP
+This task has vague origins in the DATAIO task WIDSOUT which writes
+a tape having the format of the IDSOUT package which ran on the
+CYBER (R.I.P.). For convenience to users this format has been
+maintained for spectra having lengths up to 1024 pixels.
+The version in DATAIO requires that the user enter all the header
+parameters as task parameters. For several hundred spectra, this
+approach is unwieldy. Because the ONEDSPEC package uses the header
+parameters heavily, it is able to read them directly and write the
+values to the tape file without user intervention.
+.PP
+The output tape (or diskfile) may be in either ASCII or EBCDIC format.
+Spectra shorter than 1024 are zero filled. Each invocation of
+the task writes a new tape file followed by a tape mark (EOF).
+.LP
+.SH
+3. Image Header Parameters
+.PP
+The ONEDSPEC package uses the extended image header to extract
+information required to direct processing of spectra. If the
+header information were to be ignored, the user would need to
+enter observing parameters to the program at the risk of
+typographical errors, and with the burden of supplying the
+data. For more than a few spectra this is a tedious job,
+and the image header information provides the means to eliminate
+almost all the effort and streamline the processing.
+.PP
+However, this requires that the header information be present,
+correct, and in a recognizable format. To meet the goal of
+providing a functional package in May 1985, the first iteration
+of the header format was to simply adopt the IIDS/IRS headers.
+This allowed for processing of the data which would be first
+used heavily on the system, but would need to be augmented at
+a later date. The header elements may be present in any order,
+but must be in a FITS-like format and have the following names
+and formats for the value fields:
+.sp 1
+.TS
+l c l
+l l l.
+Parameter Value Type Definition
+
+HA SX Hour angle (+ for west, - for east)
+RA SX Right Ascension
+DEC SX Declination
+UT SX Universal time
+ST SX Sidereal time
+AIRMASS R Observing airmass (effective)
+W0 R Wavelength at center of pixel 1
+WPC R Pixel-to-pixel wavelength difference
+NP1 I Index to first pixel containing good data (actually first-1)
+NP2 I Index to last pixel containing good data (last really)
+EXPOSURE I Exposure time in seconds (ITIME is an accepted alias)
+BEAM-NUM I Instrument aperture used for this data (0-49)
+SMODE I Number of apertures in instrument minus one (IIDS only)
+OFLAG I Object or sky flag (0=sky, 1=object)
+DF-FLAG I Dispersion fit made on this spectrum (I=nr coefs in fit)
+SM-FLAG I Smoothing operation performed on this spectrum (I=box size)
+QF-FLAG I Flat field fit performed on this spectrum (0=yes)
+DC-FLAG I Spectrum has been dispersion corrected (0=linear, 1=logarithmic)
+QD-FLAG I Spectrum has been flat fielded (0=yes)
+EX-FLAG I Spectrum has been extinction corrected (0=yes)
+BS-FLAG I Spectrum is derived from a beam-switch operation (0=yes)
+CA-FLAG I Spectrum has been calibrated to a flux scale (0=yes)
+CO-FLAG I Spectrum has been coincidence corrected (0=yes)
+DF1 I If DF-FLAG is set, then coefficients DF1-DFn (n <= 25) exist
+.TE
+.PP
+The values for the parameters follow the guidelines adopted for
+FITS format tapes. All keywords occupy 8 columns and contain
+trailing blanks. Column 9 is an "=" followed by a space. The value field
+begins in column 11. Comments to the parameter may follow a "/" after
+the value field. The value type code is as follows:
+.RS
+.IP SX
+This is a sexagesimal string of the form '12:34:56 ' where the first
+quote appears in column 11 and the last in column 30.
+.IP R
+This is a floating point ("real") value beginning in column 11 and
+extending to column 30 with leading blanks.
+.IP I
+This is an integer value beginning in column 11 and extending to
+column 30 with leading blanks.
+.RE
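+.sp 1
+.PP
+For illustration, a reader following these column rules might look like
+the sketch below (Python notation; not the actual load_ids_hdr code):
+.sp 1
+.DS L
+    def read_card(card):
+        # Keyword in columns 1-8, "= " in 9-10, value in 11-30,
+        # optional "/ comment" afterwards (columns here are 1-indexed).
+        keyword = card[0:8].rstrip()
+        value = card[10:30].strip()
+        comment = card[30:].split("/", 1)[1].strip() if "/" in card[30:] else ""
+        return keyword, value, comment
+
+    # e.g. read_card("EXPOSURE= " + "600".rjust(20) + " / seconds")
+.DE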
+.sp 1
+.PP
+The parameters having FLAG designations all default to -1 to indicate
+that an operation has not been performed.
+The ONEDSPEC subroutines "load_ids_hdr" and "store_keywords" follow
+these rules when reading and writing spectral header fields.
+If not present in a header, load_ids_hdr will assume a value of zero
+except that all flags are set to -1, and the object flag parameter
+defaults to object.
+.PP
+When writing an image, only the above parameters are stored by store_keywords.
+Other header information is lost. This needs to be improved.
+.PP
+Not all programs need all the header elements. The following table
+indicates who needs what. Tasks not listed generally do not require
+any header information. Header elements not listed are not used.
+The task SLIST requires all the elements listed above.
+The task WIDSTAPE requires almost all (except NP1 and NP2).
+The headings are abbreviated task names as follows:
+.sp 1
+.nr PS 8
+.ps 8
+.TS
+center;
+l l | l l | l l.
+ADD addsets COI coincor FIT flatfit
+BSW bswitch COM combine REB rebin
+CAL calibrate DIS dispcor SPL splot
+COE coefs FDV flatdiv STA standard
+.TE
+.sp 1
+.TS
+center, tab(/);
+l | l | l | l | l | l | l | l | l | l | l | l | l.
+Key/ADD/BSW/CAL/COE/COI/COM/DIS/FDV/FIT/REB/SPL/STA
+_
+HA// X////////// X/
+RA// X////////// X/
+DEC// X////////// X/
+ST// X////////// X/
+UT// X////////// X/
+AIRMASS// X////////// X/
+W0// X/ X/// X//// X/ X/ X/
+WPC// X/ X/// X//// X/ X/ X/
+NP1/////////// X///
+NP2/////////// X///
+EXPOSURE/ X/ X/// X/ X///// X///
+BEAM-NUM// X/ X//// X/ X/ X// X/ X//
+OFLAG// X////////// X/
+DF-FLAG//// X
+DC-FLAG// X//// X//// X/ X/ X/
+QD-FLAG//////// X/
+EX-FLAG// X/
+BS-FLAG// X/
+CA-FLAG/ X// X//////// X/
+CO-FLAG///// X//
+DFn//// X/
+.TE
+.nr PS 11
+.ps 11
+.bp
+.SH
+Headers From Other Instruments
+.PP
+The header elements listed above are currently created only when reading
+IIDS and IRS data from one of the specific readers: RIDSMTN and RIDSFILE.
+The time-like parameters, (RA, DEC, UT, ST, HA), are created in a
+compatible fashion by RCAMERA and RFITS (when the FITS tape is written
+by the KPNO CCD systems).
+.PP
+For any other header information, the ONEDSPEC package is at a loss
+unless the necessary information is edited into the headers with
+an editing task such as HEDIT. This is not an acceptable long term
+mode of operation, and the following suggestion is one approach to
+the header problem.
+.PP
+A translation table can be created as a text file which outlines
+the mapping of existing header elements to those required by the
+ONEDSPEC package. A mapping line is needed for each parameter
+and may take the form:
+.sp 1
+.RS
+.DS
+1D_param default hdr_param key_start value_start type conversion
+.DE
+.RE
+.sp 1
+where the elements of an entry have the following definitions:
+.sp 1
+.TS
+center, tab( );
+l lw(5i).
+1D_param T{
+The name of the parameter expected by the ONEDSPEC package,
+such as EXPOSURE, OFLAG, BEAM-NUM.
+T}
+
+default T{
+A value to be used if no entry is found for this parameter or if
+no mapping exists.
+T}
+
+hdr_param T{
+The string actually present in the existing image header to be
+associated with the ONEDSPEC parameter.
+T}
+
+key_start T{
+The starting column number at which the string starts
+in the header.
+T}
+
+value_start T{
+The starting column number at which the string describing the
+value of the parameter starts in the header.
+T}
+
+type T{
+The format type of the parameter: integer, real, string, boolean,
+sexagesimal.
+T}
+
+conversion T{
+If the format type is string, a further conversion may
+optionally be made to one of the formats listed under type.
+The conversion may require some expression evaluation.
+T}
+.TE
+.sp 1
+.PP
+Consider the example where the starting wavelength of a
+spectrum is contained in a FITS-like comment and the object-
+sky flag in a similar fashion:
+.sp 1
+.DS
+ COMMENT = START-WAVE 4102.345 / Starting wavelength
+ COMMENT = OBJECT/SKY 'SKY '/ Object or Sky observation
+.DE
+.sp 1
+The translation file entries for this would be:
+.sp 1
+.DS
+ W0 0.0 START-WAVE 12 24 R
+ OFLAG 0 OBJECT/SKY 12 25 S SKY=0;OBJECT=1
+.DE
+.sp 1
+The first entry is fairly simple. The second requires an expression
+evaluation and second conversion.
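+.PP
+In Python notation, applying one such entry might look like the
+following sketch (the function and the entry layout are hypothetical):
+.sp 1
+.DS L
+    def translate(header_lines, entry):
+        # entry: (1D_param, default, hdr_param, key_start, value_start,
+        #         type, conversion), with 1-indexed column numbers.
+        name, default, hdr_param, kstart, vstart, tcode, conv = entry
+        for line in header_lines:
+            if line[kstart - 1:].startswith(hdr_param):
+                value = line[vstart - 1:].split()[0].strip("'")
+                if tcode == "S" and conv:
+                    # e.g. "SKY=0;OBJECT=1" maps the string to an integer
+                    mapping = dict(p.split("=") for p in conv.split(";"))
+                    value = mapping.get(value, default)
+                return name, value
+        return name, default
+
+    # translate(lines, ("OFLAG", 0, "OBJECT/SKY", 12, 25, "S", "SKY=0;OBJECT=1"))
+.DE
+.sp 1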
+.PP
+A translation file can be built for each instrument and its
+special header format, and the file name can be associated with a
+ONEDSPEC package parameter. The two subroutines in ONEDSPEC dealing
+directly with the headers (load_ids_hdr and store_keywords)
+can be modified or replaced to access this file and
+translate the header elements.
diff --git a/noao/onedspec/doc/sys/onedv210.ms b/noao/onedspec/doc/sys/onedv210.ms
new file mode 100644
index 00000000..431c84f5
--- /dev/null
+++ b/noao/onedspec/doc/sys/onedv210.ms
@@ -0,0 +1,680 @@
+.nr PS 9
+.nr VS 11
+.de LS
+.RT
+.if \\n(1T .sp \\n(PDu
+.ne 1.1
+.if !\\n(IP .nr IP +1
+.if \\n(.$-1 .nr I\\n(IR \\$2n
+.in +\\n(I\\n(IRu
+.ta \\n(I\\n(IRu
+.if \\n(.$ \{\
+.ds HT \&\\$1
+.ti -\\n(I\\n(IRu
+\\*(HT
+.br
+..
+.ND
+.TL
+ONEDSPEC/IMRED Package Revisions Summary: IRAF Version 2.10
+.AU
+Francisco Valdes
+.AI
+IRAF Group - Central Computer Services
+.K2
+P.O. Box 26732, Tucson, Arizona 85726
+May 1992
+.NH
+Introduction
+.LP
+The IRAF NOAO spectroscopy software, except for the \fBlongslit\fR
+package, has undergone major revisions. The revisions to the aperture
+extraction package, \fBapextract\fR, are described in a separate
+document. This paper addresses the revisions in the \fBonedspec\fR
+package and the spectroscopic image reduction packages in the
+\fBimred\fR package. In addition to the revisions summary given here
+there is a new help topic covering general aspects of the new
+\fBonedspec\fR package such as image formats, coordinate systems, and
+units. This help topic is referenced under the name
+"onedspec.package".
+.LP
+There are a large number of revisions both minor and major. To avoid
+obscuring the basic themes and the major revisions in a wealth of minor
+detail, this document is organized into sections of increasing detail. The
+most important aspects of the revisions are described in a major highlight
+section followed by a minor highlight section. Then a reorganization chart
+for the \fBonedspec\fR package is presented showing where various
+tasks have been moved, which have been deleted, and which are new.
+Finally, a summary of the revisions to each task is presented.
+.LP
+I hope that the many new capabilities, particularly as presented in the
+highlight section, will outweigh any disruption in accommodating to so
+many changes.
+.NH
+Major Highlights
+.LP
+The major highlights of the revisions to the NOAO spectroscopy software
+are listed and then discussed below.
+
+.DS
+\(bu Non-linear dispersion calibration
+\(bu Integration of dispersion coordinates with the core system
+\(bu Sinc interpolation
+\(bu Plotting in user selected units including velocity
+\(bu Integration of long slit spectra and 1D formats
+\(bu New \fBimred\fR packages featuring streamlined reductions
+.DE
+
+Possibly the most significant revision is the generalization allowing
+non-linear dispersion calibration. What this means is that spectra do
+not need to be interpolated to a uniform sampling in wavelength or
+logarithmic wavelength. The dispersion functions determined from
+calibration arc lines by \fBidentify\fR, \fBreidentify\fR,
+\fBecidentify\fR, or \fBecreidentify\fR can be simply assigned to the
+spectra and used throughout the package. It is also possible to assign
+a dispersion table or vector giving the wavelengths at some or all of
+the pixels. Note, however, that it is still perfectly acceptible to
+resample spectra to a uniform linear or log-linear dispersion as was
+done previously.
+.LP
+For data which does not require geometric corrections, combining, or
+separate sky subtraction, the observed sampling need never be changed
+from the original detector sampling, thus avoiding any concerns over
+interpolation errors. In other cases it is possible to just
+interpolate one spectrum, say a sky spectrum, to the dispersion of
+another spectrum, say an object spectrum, before operating on the two
+spectra. There are several new tasks that perform interpolations to a
+common dispersion, not necessarily linear, when operating on more than
+one spectrum. In particular, the new task \fBsarith\fR and the older
+task \fBsplot\fR now do arithmetic on spectra in wavelength space.
+Thus, one no longer need be concerned about having all spectra
+interpolated to the same sampling before doing arithmetic operations as
+was the case previously.
+.LP
+The trade-off in using non-linear dispersion functions is a more complex
+image header structure. This will make it difficult to import to non-IRAF
+software or to pre-V2.10 IRAF systems. However, one may resample to a
+linear coordinate system in those cases before transferring the spectra as
+FITS images having standard linear coordinate keywords.
+.LP
+On the subject of interpolation, another important addition is the
+implementation of sinc interpolation. This is generally considered
+the best interpolation method for spectra, however, it must be used
+with care as described below.
+Sinc interpolation approximates applying a phase shift to the Fourier
+transform of the spectrum. Thus, repeated interpolations do not accumulate
+errors (or nearly so) and, in particular, a forward and reverse
+interpolation will recover the original spectrum much more closely than
+other interpolation methods. However, for strong, undersampled features
+(where the Fourier transform is no longer completely represented), such as
+cosmic rays or narrow emission or absorption lines, the ringing can be much
+more severe than with the polynomial interpolations. The ringing is especially
+a concern because it extends a long way from the feature causing the
+ringing; 30 pixels with the truncated algorithm that has been added. Note
+that it is not the truncation of the interpolation function which is at
+fault but the undersampling of the narrow features!
+.LP
+Because of the problems seen with sinc interpolation it should be used with
+care. Specifically, if there are no undersampled, narrow features it is a
+good choice but when there are such features the contamination of the
+spectrum by ringing is more severe, corrupting more of the spectrum,
+than with other interpolation types.
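+.LP
+For reference, a truncated sinc interpolation of a spectrum at a
+fractional pixel position can be sketched as follows (Python notation;
+illustrative only, and without any taper an actual implementation might
+apply):
+.DS
+    import numpy as np
+
+    def sinc_interp(spectrum, x, nterms=30):
+        # Use nterms pixels on each side of x; np.sinc(u) is
+        # sin(pi*u)/(pi*u), the phase-shift kernel.
+        spectrum = np.asarray(spectrum, dtype=float)
+        i0 = int(np.floor(x))
+        lo = max(0, i0 - nterms + 1)
+        hi = min(len(spectrum), i0 + nterms + 1)
+        k = np.arange(lo, hi)
+        return float(np.dot(np.sinc(x - k), spectrum[lo:hi]))
+.DE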
+.LP
+The dispersion coordinates are now interfaced through the IRAF WCS
+(world coordinate system) interface. This is important to users for
+two reasons. First, operations performed on spectral images by IRAF
+core system tasks and the IRAF image I/O system will have access to the
+dispersion coordinates and will properly modify them as necessary. The
+most common such operation is applying an image section to a spectrum
+either during an image copy or as input to another task. In this case
+the relation between the pixels in the image section and their
+wavelengths is preserved. For example one may \fBsplot\fR a section of
+a large spectrum and get the correct wavelengths. The second reason is
+to allow use of proper dispersion coordinates in such IRAF tasks as
+\fBlistpixels\fR, \fBimplot\fR, and \fBgraph\fR.
+.LP
+The new package supports a variety of spectral image formats. The
+older formats are understood when reading them. In particular the one
+dimensional "onedspec" and the two dimensional "multispec" format will
+still be acceptable as input. Note that the image naming syntax for
+the "onedspec" format using record number extensions is a separate
+issue and is still provided but only in the \fBimred.iids\fR and
+\fBimred.irs\fR packages. Any new spectra created are either a one
+dimensional format using relatively simple keywords and a two or three
+dimensional format which treats each line of the image as a separate
+spectrum and uses a more complex world coordinate system and keywords.
+For the sake of discussion the two formats are still called "onedspec"
+and "multispec" though they are not equivalent to the earlier formats.
+.LP
+In addition, the one dimensional spectral tasks may also now operate on
+two dimensional images directly. This is done by using the DISPAXIS
+keyword in the image header, or a package dispaxis parameter if the
+keyword is missing, to define the dispersion axis. In addition there is
+a summing parameter in the packages to allow summing a number of lines
+or columns. If the spectra are wavelength calibrated long slit
+spectra, the product of the \fBlongslit.transform\fR task, the
+wavelength information will also be properly handled. Thus, one may
+use \fBsplot\fR or \fBspecplot\fR for plotting such data without having
+to extract them to another format. If one wants to extract one
+dimensional spectra by summing columns or lines, as opposed to using
+the more complex \fBapextract\fR package, then one can simply use
+\fBscopy\fR (this effectively replaces \fBproto.toonedspec\fR).
+.LP
+The tasks \fBsplot\fR and \fBspecplot\fR allow use of and changes
+between various dispersion units. Spectra may be plotted in units all
+the way from Hertz to Mev. The units may also be inverted to plot in
+wavenumbers, such as inverse centimeters, and the decimal log may be
+applied, to plot something like log wavelength or log frequency. One
+special "unit" which is available is a velocity computed about a
+specified wavelength/frequency. The multiple unit capability was one
+of the last major changes made before the V2.10 release so the complete
+generalization to arbitrary units has not been completed. Dispersion
+calibration and image world coordinate system generally must still be
+done in Angstroms, particularly if flux calibration is to be done. The
+generalization to other units throughout the package is planned for a
+later release.
+.LP
+The last of the changes categorized as a major highlight is the
+addition of a number of special packages for generic or specific
+types of instruments and data in the \fBimred\fR package. Most of these
+package include a highly streamlined reduction task that combines
+all of the reduction operations into a single task. For example,
+the \fBspectred.doslit\fR task can extract object, standard star, and
+arc spectra from long slit images, apply a consistent dispersion
+function based on only a single interactively performed dispersion
+solution, compute a sensitivity function and end up with flux
+calibrated spectra. Another example is \fBhydra.dohydra\fR for
+extracting, flatfielding, dispersion calibrating, and sky subtracting
+spectra from the NOAO Hydra multifiber spectrograph. There are user's
+guides for each of these new reduction tasks.
+.NH
+Minor Highlights
+.LP
+There are some further highlights which are also quite important
+but which are secondary to the previous highlights. These are listed
+and discussed below.
+
+.DS
+\(bu Greater use of package parameters
+\(bu An observatory database
+\(bu A more flexible \fBidentify/reidentify\fR
+\(bu Only one \fBdispcor\fR
+\(bu Spatial interpolation of dispersion solutions
+\(bu Deblending of an arbitrary number of gaussian components
+\(bu Manipulating spectral formats
+\(bu Improved fitting of the continuum and related features
+\(bu Various new tasks
+.DE
+
+There is an even greater use of package parameters than in the previous
+release. Package parameters are those which are common to many of
+the tasks in the package and which one usually wants to change in
+one place. The new package parameters are the default observatory for
+the data if the observatory is not identified in the image header
+(discussed further below), the interpolation type used
+when spectra need to be resampled either for dispersion calibration
+or when operating on pairs of spectra with different wavelength
+calibration, and the default dispersion axis and summing parameters
+for long slit and general 2D images (as discussed in the last section).
+You will find these parameters not only in the \fBonedspec\fR package but in
+all the spectroscopic packages in the \fBimred\fR package.
+.LP
+A number of spectroscopic tasks require information about the location
+of the observation. Typically this is the observatory latitude for
+computing air masses if not defined in the header. Radial velocity
+tasks, and possible future tasks, may require additional information
+such as longitude and altitude. The difficulty is that if such
+parameters are specified in parameter files the default may well be
+inappropriate and even if the users set them once, they may forget to
+update them in later reductions of data from a different observatory.
+In other words this approach is prone to error.
+.LP
+To address this concern observatory parameters are now obtained from an
+observatory database keyed by an observatory identifier. If the image data
+contains an observatory keyword, OBSERVAT, the tasks will look up the
+required parameters from the observatory database. Thus, if the images
+contain the observatory identifier, as does data from the NOAO
+observatories, they will always be correctly reduced regardless of the
+setting of any parameters. Of course one has to deal with data from
+observatories which may not include the observatory identifier and may not
+have an entry in the observatory database. There are provisions for sites
+and individual users to define local database files and to set the default
+observatory parameters. This is all discussed in the help for the
+\fBobservatory\fR task.
+.LP
+The dispersion function fitting tasks \fBidentify\fR and
+\fBreidentify\fR have been improved in a number of important ways.
+These tasks now treat the input images as units. So for long slit and
+multispectrum images one can move about the image with a few
+keystrokes, transfer solutions, and so on. When transfering solutions
+between a multispectrum reference image and another multispectrum image
+with the same apertures using \fBreidentify\fR, the features and
+dispersion solutions are transfered aperture by aperture. This avoids
+problems encountered by having to trace successively between apertures
+and having the apertures be in the same order.
+.LP
+On the subject of tracing in \fBreidentify\fR, in some cases it is
+desirable to use the same reference spectrum with all other sampled
+lines or columns in a long slit spectrum or apertures in a
+multispectrum image rather than propagating solutions across the
+image. The latter method is necessary if there is a continuous and
+progress shift in the features. But if this is not the situation then
+the loss of features when tracing can be a problem. In this case the
+alternative of reidentifying against the same starting reference is now
+possible and there will not be the problem of an increasing loss of
+features. On the other hand, the problem of lost features, whether
+tracing or not, can also be addressed using another new feature of
+\fBreidentify\fR, the ability to add features from a line list. For
+both tracing and nontracing reidentifications, another useful new
+feature is automatic iterative rejection of poorly fitting lines in
+determining a new dispersion function noninteractively.
+.LP
+The nontracing reidentifications, the automatic addition of new lines, and
+the iterative rejection of poorly fitting lines in determining a new
+dispersion function are all intended to make the reidentification process
+work better without intervention. However, as a last resort there is also
+a new interactive feature of \fBreidentify\fR. By monitoring the log output of
+the reidentification process one can have a query be made after the
+automatic reidentification and function fitting to allow selectively
+entering the interactive feature identification and dispersion function
+fitting based on the logged output. Thus if a fit has a particularly large
+RMS or a large number of features are not found one can choose to intervene
+in the reidentification process.
+.LP
+Dispersion calibration is now done exclusively by the task
+\fBdispcor\fR regardless of the spectrum format or dispersion solution
+type; i.e. solutions from \fBidentify\fR or \fBecidentify\fR. In addition to
+allowing assignment of non-linear dispersion functions, as described
+earlier, \fBdispcor\fR has other new features. One is that, in
+addition to interpolating dispersion solutions between two calibration
+images (usually weighted by time), it is now possible to interpolate
+zero point shifts spatially when multiple spectra taken simultaneously
+include arc spectra. This is mostly intended for the new generation of
+multifiber spectrographs which include some fibers assigned to an arc
+lamp source. However, it can be used for the classic photographic case
+of calibration spectra on the same plate.
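+.LP
+The spatial interpolation can be pictured with a short sketch. This is
+schematic Python, assuming the zero point shifts measured from the arc
+fibers and the fiber positions on the detector are already in hand; it is
+not the \fBdispcor\fR code and all names and numbers are invented.
+.DS
+import numpy as np
+
+# Zero point shifts measured in a few arc fibers at known positions...
+arc_positions = np.array([10.0, 50.0, 90.0])
+arc_shifts = np.array([0.12, 0.05, -0.03])      # shifts in Angstroms
+
+# ...interpolated to the positions of the object fibers.
+object_positions = np.array([20.0, 45.0, 70.0])
+object_shifts = np.interp(object_positions, arc_positions, arc_shifts)
+
+# Each shift is then added to the wavelengths given by the master
+# dispersion function for that fiber, e.g. w = f(pixel) + shift.
+print(object_shifts)
+.DE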
+.LP
+The limit of four on the number of gaussian components which
+can be deblended with the deblending option in \fBsplot\fR has been removed.
+A new feature is that line positions may be input from a line list as
+well as the original cursor marking or terminal input.
+In addition an option to simultaneously determine a linear background
+has been added. As a spinoff of the deblending option a new, noninteractive
+task, called \fBfitprofs\fR, has been added. This task takes a list of initial
+line positions and sigmas and simultaneously fits gaussians with a
+linear background. One can constrain various combinations of parameters
+and output various parameters of the fitting. While it can be used to
+fit an entire spectrum it becomes prohibitively slow beyond roughly
+30 components; a banded matrix approach would be required in that case.
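+.LP
+The model being fit can be illustrated with a short sketch. The code below
+uses \fBscipy\fR rather than the actual IRAF fitting routines, and the line
+positions and data are invented, but the model, a sum of gaussians plus a
+linear background, is the one described above.
+.DS
+import numpy as np
+from scipy.optimize import curve_fit
+
+def blend(x, *p):
+    """Sum of N gaussians on a linear background.
+    p = (b0, b1, amp1, cen1, sig1, amp2, cen2, sig2, ...)"""
+    y = p[0] + p[1] * x
+    for amp, cen, sig in zip(p[2::3], p[3::3], p[4::3]):
+        y = y + amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)
+    return y
+
+# Invented test spectrum: two blended lines on a sloping background.
+x = np.linspace(6540.0, 6580.0, 200)
+true = (5.0, 0.01, 40.0, 6555.0, 2.0, 25.0, 6562.8, 2.5)
+y = blend(x, *true) + np.random.normal(0.0, 1.0, x.size)
+
+# Initial guesses (from cursor marks or a line list) and the fit.
+guess = (5.0, 0.0, 30.0, 6554.0, 2.0, 20.0, 6563.0, 2.0)
+popt, pcov = curve_fit(blend, x, y, p0=guess)
+print(popt)
+.DE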
+.LP
+As mentioned earlier there is a new task called \fBscopy\fR for manipulating
+spectra. It allows changing between various formats such as producing
+separate one dimensional images with a simple keyword structure from multispec
+format images, combining multiple one dimensional spectra into the
+more compact multispec format, and extracting line or column averaged one
+dimensional spectra from two dimensional images. It can also be
+used to select any subset of apertures from a multispec format,
+merge multiple multispec format spectra, and extract regions of spectra
+by wavelength.
+.LP
+The \fBcontinuum\fR task has been revised to allow independent
+continuum fits for each aperture, order, line, or column in images
+containing multiple spectra. Instead of being based on the
+\fBimages.fit1d\fR task it is based on the new task \fBsfit\fR.
+\fBSfit\fR allows fitting the \fBicfit\fR functions to spectra and
+outputting the results in several ways such as the ratio (continuum
+normalization), difference (continuum subtraction), and the actual
+function fit. In addition it allows outputting the input data with
+points found to be deviant by the iterative rejection algorithm of
+\fBicfit\fR replaced by the fitted value. This is similar to
+\fBimages.lineclean\fR. In all cases this may be done
+independently and interactively or noninteractively when there are
+multiple spectra in an image.
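+.LP
+A rough picture of the fit-and-reject cycle and the possible outputs is
+given below. This is a simplified numpy sketch using a plain polynomial
+and a fixed sigma clip in place of the full \fBicfit\fR machinery; every
+name and number in it is illustrative.
+.DS
+import numpy as np
+
+def continuum_fit(wave, flux, order=3, low=2.0, high=3.0, niter=5):
+    """Iteratively fit a polynomial continuum with sigma clipping."""
+    good = np.ones(flux.size, dtype=bool)
+    for _ in range(niter):
+        coef = np.polyfit(wave[good], flux[good], order)
+        fit = np.polyval(coef, wave)
+        resid = flux - fit
+        sigma = resid[good].std()
+        good = (resid > -low * sigma) & (resid < high * sigma)
+    return fit, good
+
+wave = np.linspace(4000.0, 5000.0, 500)
+flux = 100.0 + 0.01 * wave + np.random.normal(0.0, 2.0, wave.size)
+
+fit, good = continuum_fit(wave, flux)
+normalized = flux / fit              # "ratio" output: normalization
+subtracted = flux - fit              # "difference" output: subtraction
+cleaned = np.where(good, flux, fit)  # deviant points replaced by the fit
+.DE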
+.LP
+A number of useful new tasks have already been mentioned:
+\fBfitprofs\fR, \fBsarith\fR, \fBscombine\fR, \fBscopy\fR, and
+\fBsfit\fR. There are two more new tasks of interest. The task \fBdopcor\fR
+applies doppler shifts to spectra. It applies the shift purely to the
+dispersion coordinates by adding a redshift factor which is applied by
+the coordinate system interface. This eliminates reinterpolation and
+preserves both the shift applied and the original observed dispersion
+function (either linear or nonlinear). The task can also modify the
+pixel values for various relativistic and geometric factors. This task
+is primarily useful for shifting spectra at high redshifts to the local
+rest frame. The second new task is called \fBderedden\fR. It applies
+corrections for interstellar reddening given some measure of the
+extinction along the line of sight.
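+.LP
+The essential operation of \fBdopcor\fR can be written in a line or two.
+The fragment below is schematic Python, not the task itself; it only shows
+a redshift factor being applied to the wavelength coordinates while the
+pixel values are left untouched, with invented numbers.
+.DS
+import numpy as np
+
+z = 0.158                                  # illustrative redshift
+wave_obs = np.linspace(4000.0, 8000.0, 1000)
+flux = np.ones_like(wave_obs)              # placeholder pixel values
+
+# Only the coordinate array changes; the flux array is untouched, so
+# there is no reinterpolation and the observed sampling is preserved.
+wave_rest = wave_obs / (1.0 + z)
+.DE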
+.NH
+ONEDSPEC Package Task Reorganization
+.LP
+The \fBonedspec\fR package dates back to the earliest versions of IRAF. Some of
+its heritage is tied to the reduction of IRS and IIDS spectra. One of
+the revisions made for this release has been to reorganize the various
+tasks and packages. A few tasks have been obsoleted by new tasks or
+the functionality of the new dispersion coordinate system, a number
+of new tasks have been added, and a number of IRS and IIDS specific
+tasks have been moved to the \fBimred\fR packages for those instruments.
+While these packages are organized for those particular instruments they may
+also be used with data having similar characteristics of beam switching,
+coincidence corrections, and the requirement of sequential numeric
+extensions.
+.LP
+The table below provides the road map to the reorganization showing
+tasks which have disappeared, been moved, been replaced, or are new.
+
+.DS
+.TS
+center;
+r l l l r l l.
+V2.9 V2.10 ALTERNATIVE V2.9 V2.10 ALTERNATIVE
+
+addsets irs/iids process irs/iids
+batchred irs/iids rebin scopy/dispcor
+bplot bplot refspectra refspectra
+bswitch irs/iids reidentify reidentify
+calibrate calibrate sapertures
+coincor iids sarith
+combine scombine scombine
+continuum continuum scopy
+ deredden sensfunc sensfunc
+dispcor dispcor setdisp hedit
+ dopcor sextract scopy
+ fitprofs sfit
+flatdiv irs/iids sflip scopy/imcopy [-*,*]
+flatfit irs/iids shedit hedit
+identify identify sinterp sinterp
+lcalib lcalib slist slist
+mkspec mkspec specplot specplot
+names names splot splot
+ ndprep standard standard
+observatory noao subsets irs/iids
+powercor iids sums irs/iids
+.TE
+.DE
+.NH
+IMRED Packages
+.LP
+Many of the \fBonedspec\fR tasks from the previous release have been
+moved to the \fBiids\fR and \fBirs\fR packages, as indicated above,
+since they were applicable only to these and similar instruments.
+.LP
+A number of new specialized spectroscopic instrument reduction packages
+have been added to the \fBimred\fR package. Many of these have been in
+use in somewhat earlier forms in the IRAF external package called
+\fBnewimred\fR. In addition the other spectroscopic package have been
+updated based on the revisions to the \fBonedspec\fR and
+\fBapextract\fR packages. Below is a table showing the changes between
+the two version and describing the purpose of the spectroscopic
+packages. Note that while many of these package are named for and
+specialized for various NOAO instruments these packages may be applied
+fairly straightforwardly to similar instruments from other
+observatories. In addition the same tools for multifiber and slit
+spectra are collected in a generic package called \fBspecred\fR.
+
+.DS
+.TS
+center;
+r l l s
+r l l l.
+V2.9 V2.10 SPECTROSCOPY PACKAGE
+ argus Fiber: CTIO Argus Reductions
+specphot ctioslit Slit: CTIO Slit Instruments
+echelle echelle Fiber Slit: Generic Echelle
+ hydra Fiber: KPNO Hydra (and Nessie) Reductions
+iids iids Scanner: KPNO IIDS Reductions
+irs irs Scanner: KPNO IRS Reductions
+coude kpnocoude Fiber/Slit: KPNO Coude (High Res.) Reductions
+ kpnoslit Slit: KPNO Slit Instruments
+msred specred Fiber/Slit: Generic fiber and slit reductions
+observatory -> noao
+setairmass
+.TE
+.DE
+.LP
+An important feature of most of the spectroscopic packages is a set of specialized
+routines for combining and streamlining the different reduction operations
+for a particular instrument or type of instrument. These tasks are:
+
+.DS
+.TS
+center;
+r r r.
+argus.doargus ctioslit.doslit echelle.doecslit
+echelle.dofoe hydra.dohydra iids.batchred
+irs.batchred kpnocoude.do3fiber kpnocoude.doslit
+kpnoslit.doslit specred.dofibers specred.doslit
+.TE
+.DE
+.NH
+ONEDSPEC Task Revisions in V2.10
+.LS ADDSETS 2
+Moved to the \fBiids/irs\fR packages.
+.LS BATCHRED
+Moved to the \fBiids/irs\fR packages.
+.LS BPLOT
+The APERTURES and BAND parameters have been added to select
+apertures from multiple spectra and long slit images, and bands
+from 3D images. Since the task is a script calling \fBsplot\fR, the
+many revisions to that task also apply. The version in the
+\fBiids/irs\fR packages selects spectra using the record number
+extension syntax.
+.LS BSWITCH
+Moved to the \fBiids/irs\fR packages.
+.LS CALIBRATE
+This task was revised to operate on nonlinear dispersion
+corrected spectra and 3D images (the \fBapextract\fR "extras"). The
+aperture selection parameter was eliminated (since the header
+structure does not allow mixing calibrated and uncalibrated
+spectra) and the latitude parameter was replaced by the
+observatory parameter. The observatory mechanism ensures that
+if the observatory latitude is needed for computing an airmass
+and the observatory is specified in the image header the
+correct calibration will be applied. The record format syntax
+is available in the \fBiids/irs\fR packages. The output spectra are
+coerced to have real pixel datatype.
+.LS COINCOR
+Moved to the \fBiids\fR package.
+.LS COMBINE
+Replaced by \fBscombine\fR.
+.LS CONTINUUM
+This task was changed from a script based on \fBimages.fit1d\fR to a
+script based on \fBsfit\fR. This provides for individual independent
+continuum fitting in multiple spectra images and for additional
+flexibility and record keeping. The parameters have been
+largely changed.
+.LS DEREDDEN
+This task is new.
+.LS DISPCOR
+This is a new version with many differences. It replaces the
+previous three tasks \fBdispcor\fR, \fBecdispcor\fR and \fBmsdispcor\fR. It
+applies both one dimensional and echelle dispersion functions.
+The new parameter LINEARIZE selects whether to interpolate the
+spectra to a uniform linear dispersion (the only option
+available previously) or to assign a nonlinear dispersion
+function to the image without any interpolation. The
+interpolation function parameter has been eliminated and the
+package parameter INTERP is used to select the interpolation
+function. The new interpolation type "sinc" may be used but
+care should be exercised. The new task supports applying a
+secondary zero point shift spectrum to a master dispersion
+function and a spatial interpolation of the shifts when
+calibration spectra are taken at the same time on a different
+region of the same 2D image. The optional wavelength table may
+now also be an image to match dispersion parameters. The
+APERTURES and REBIN parameters have been eliminated. If an
+input spectrum has been previously dispersion corrected it will
+be resampled as desired. Verbose and log file parameters have
+been added to log the dispersion operations as desired. The
+record format syntax is available in the \fBiids/irs\fR packages.
+.LS DOPCOR
+This task is new.
+.LS FITPROFS
+This task is new.
+.LS FLATDIV
+Moved to the \fBiids/irs\fR packages.
+.LS FLATFIT
+Moved to the \fBiids/irs\fR packages.
+.LS IDENTIFY
+The principal revision is to allow multiple aperture images and
+long slit spectra to be treated as a unit. New keystrokes
+allow jumping or scrolling within multiple spectra in a single
+image. For aperture spectra the database entries are
+referenced by image name and aperture number and not with image
+sections. Thus, \fBidentify\fR solutions are not tied to specific
+image lines in this case. There is a new autowrite parameter
+which may be set to eliminate the save to database query upon
+exiting. The new colon command "add" may be used to add
+features based on some other spectrum or arc type and then
+apply the fit to the combined set of features.
+.LS LCALIB
+This task has a more compact listing for the "stars" option and
+allows paging a list of stars when the star name query is not
+recognized.
+.LS MKSPEC
+This task is unchanged.
+.LS NAMES
+This task is unchanged.
+.LS NDPREP
+This task was moved from the \fBproto\fR package. It was originally
+written at CTIO for CTIO data. Its functionality is largely
+unchanged though it has been updated for changes in the
+\fBonedspec\fR package.
+.LS OBSERVATORY
+New version of this task moved to \fBnoao\fR root package.
+.LS POWERCOR
+Moved to the \fBiids\fR package.
+.LS PROCESS
+Moved to the \fBiids/irs\fR package.
+.LS REBIN
+This task has been eliminated. Use \fBscopy\fR or \fBdispcor\fR.
+.LS REFSPECTRA
+A group parameter was added to allow restricting assignments by
+observing period; for example by night. The record format
+option was removed and the record format syntax is available in
+the \fBiids/irs\fR packages.
+.LS REIDENTIFY
+This task is a new version with many new features. The new
+features include an interactive option for reviewing
+identifications, iterative rejection of features during
+fitting, automatic addition of new features from a line list,
+and the choice of tracing or using a single master reference
+when reidentifying features in other vectors of a reference
+spectrum. Reidentifications from a reference image to another
+image is done by matching apertures rather than tracing. New
+apertures not present in the reference image may be added.
+.LS SAPERTURES
+This task is new.
+.LS SARITH
+This task is new.
+.LS SCOMBINE
+This task is new.
+.LS SCOPY
+This task is new.
+.LS SENSFUNC
+The latitude parameter has been replaced by the observatory
+parameter. The 'i' flux calibrated graph type now shows flux
+in linear scaling while the new graph type 'l' shows flux in
+log scaling. A new colon command allows fixing the flux limits
+for the flux calibrated graphs.
+.LS SETDISP
+This task has been eliminated. Use \fBhedit\fR or the package
+DISPAXIS parameter.
+.LS SEXTRACT
+Replaced by \fBscopy\fR.
+.LS SFIT
+This task is new.
+.LS SFLIP
+This task has been eliminated. Use image sections.
+.LS SHEDIT
+This task has been eliminated. Use \fBhedit\fR if needed.
+.LS SINTERP
+This task is unchanged.
+.LS SLIST
+This task was revised to be relevant for the current spectral
+image formats. The old version is still available in the
+\fBiids/irs\fR package.
+.LS SPECPLOT
+New parameters were added to select apertures and bands, plot
+additional dimensions (for example the additional output from
+the extras option in \fBapextract\fR), suppress the system ID banner,
+suppress the Y axis scale, output a logfile, and specify the
+plotting units. The PTYPE parameter now allows negative
+numbers to select histogram style lines. Interactively, the
+plotting units may be changed and the 'v' key allows setting a
+velocity scale zero point with the cursor. The new version
+supports the new spectral WCS features including nonlinear
+dispersion functions.
+.LS SPLOT
+This is a new version with a significant number of changes. In
+addition to the task changes the other general changes to the
+spectroscopy packages also apply. In particular, long slit
+spectra and spectra with nonlinear dispersion functions may be
+used with this task. The image header or package dispaxis and
+nsum parameters allow automatically extracting spectra from 2D
+images. The task parameters have been modified primarily to
+obtain the desired initial graph without needing to do it
+interactively. In particular, the new band parameter selects
+the band in 3D images, the units parameter selects the
+dispersion units, and the new histogram, nosysid, and xydraw
+options select histogram line type, whether to include a system
+ID banner, and allow editing a spectrum using different
+endpoint criteria.
+.LS
+Because nearly every key is used there has been some shuffling,
+consolidating, or elimination of keys. One needs to check the
+run time '?' help or the task help page to determine the key changes.
+.LS
+Deblending may now use any number of components and
+simultaneous fitting of a linear background. A new simplified
+version of gaussian fitting for a single line has been added in
+the 'k' key. The old 'k', 'h', and 'v' equivalent width
+commands are all part of the single 'h' command using a second
+key to select a specific option. The gaussian line model from
+these modes may now be subtracted from the spectrum in the same
+way as the gaussian fitting. The one-sided options, in
+particular, are interesting in this regard as a new capability.
+.LS
+The arithmetic functions between two spectra are now done in
+wavelength with resampling to a common dispersion done
+automatically. The 't' key now provides for the full power of
+the ICFIT package to be used on a spectrum for continuum
+normalization, subtraction, or line and cosmic ray removal.
+The 'x' editing key may now use the nearest pixel values rather
+than only the y cursor position to replace regions by straight
+line segments. The mode is selected by the task option
+parameter "xydraw".
+.LS
+Control over the graph window (plotting limits) is better
+integrated so that redrawing, zooming, shifting, and the \fBgtools\fR
+window commands all work well together. The new 'c' key resets
+the window to the full spectrum allowing the 'r' redraw key to
+redraw the current window to clean up overplots from the
+gaussian fits or spectrum editing.
+.LS
+The dispersion units may now be selected and changed, ranging from
+hertz to MeV, and the log or inverse (for wave numbers) of the units
+may be taken. As part of the units package the 'v' key or colon
+commands may be used to plot in velocity relative to some
+origin. The '$' key now easily toggles between the dispersion
+units (whatever they may be) and pixel coordinates.
+.LS
+Selection of spectra has become more complex with multiaperture
+and long slit spectra. New keys allow selecting apertures,
+lines, columns, and bands as well as quickly scrolling through
+the lines in multiaperture spectra. Overplotting is also more
+general and consistent with other tasks by using the 'o' key to
+toggle the next plot to be overplotted. Overplots, including
+those of the gaussian line models, are now done in a different
+line type.
+.LS
+There are new colon commands to change the dispersion axis and
+summing parameters for 2D images, to toggle logging, and also to
+put comments into the log file.
+.LS STANDARD
+Giving an unrecognized standard star name will page a list of
+standard stars available in the calibration directory and then
+repeat the query.
+.LS SUBSETS
+Moved to the \fBiids/irs\fR packages.
+.LS SUMS
+Moved to the \fBiids/irs\fR packages.
diff --git a/noao/onedspec/doc/sys/revisions.v3.ms b/noao/onedspec/doc/sys/revisions.v3.ms
new file mode 100644
index 00000000..1c3da8be
--- /dev/null
+++ b/noao/onedspec/doc/sys/revisions.v3.ms
@@ -0,0 +1,382 @@
+.nr PS 9
+.nr VS 11
+.RP
+.ND
+.TL
+ONEDSPEC Package Revisions Summary: IRAF Version 2.10
+.AU
+Francisco Valdes
+.AI
+IRAF Group - Central Computer Services
+.K2
+P.O. Box 26732, Tucson, Arizona 85726
+July 1990
+.AB
+This paper summarizes the changes in Version 3 of the IRAF \fBonedspec\fR
+package which is part of IRAF Version 2.10. The major new features and
+changes are:
+
+.IP \(bu
+\fBIdentify\fR and \fBreidentify\fR now treat multispec format spectra
+and two dimensional images as a unit.
+.IP \(bu
+\fBReidentify\fR supports both tracing (the old method) and always starting
+with the primary reference vector when reidentifying other vectors in a
+two dimensional reference image.
+.IP \(bu
+\fBReidentify\fR matches reference lines or apertures when reidentifying
+those vectors in different images rather than tracing.
+.IP \(bu
+\fBReidentify\fR has an interactive capability to review
+suspect reidentifications.
+.IP \(bu
+\fBReidentify\fR provides the capability to add new features.
+.IP \(bu
+The task \fBmsdispcor\fR provides for spatial interpolation of wavelength
+zero point shifts from simultaneous arc spectra.
+.IP \(bu
+The new task \fBscopy\fR copies subsets of apertures and does format
+conversions between the different spectrum formats.
+.IP \(bu
+The new task \fBsapertures\fR adds or modifies beam numbers and
+aperture titles for selected apertures based on an aperture
+identification file.
+.IP \(bu
+The new task \fBsfit\fR fits spectra and outputs the fits in various ways.
+Apertures in multispec and echelle format are fit independently.
+.IP \(bu
+The task \fBcontinuum\fR now does independent fits for multispec and
+echelle format spectra.
+.IP \(bu
+\fBSplot\fR now allows deblending of any number of components and
+allows simultaneous fitting of a linear background.
+.IP \(bu
+The new task \fBfitprofs\fR fits 1D gaussian profiles in images.
+.AE
+.NH
+Introduction
+.PP
+Though most of the ONEDSPEC package is unchanged there have been
+significant changes to a number of the commonly used tasks in IRAF
+Version 2.10. The changes will be made available as part of an
+external package prior to the release of V2.10. This paper summarizes
+the changes and new features. The changes primarily apply to multispec
+or echelle format spectra.
+.PP
+Tables 1 and 2 summarize most of the major and minor changes in the package.
+
+.ce
+TABLE 1: Summary of Major New Features and Changes
+
+.IP \(bu
+\fBIdentify\fR and \fBreidentify\fR now treat multispec format spectra
+and two dimensional images as a unit allowing easy movement between
+different image lines or columns. The database is only updated upon
+exiting the image.
+.IP \(bu
+\fBReidentify\fR supports both tracing (the old method) and always starting
+with the primary reference vector when reidentifying other vectors in a
+two dimensional reference image.
+.IP \(bu
+\fBReidentify\fR matches reference lines or apertures when reidentifying
+those vectors in different images rather than tracing.
+.IP \(bu
+\fBReidentify\fR has an interactive capability to review
+suspect reidentifications.
+.IP \(bu
+\fBReidentify\fR provides the capability to add new features.
+.IP \(bu
+The task \fBmsdispcor\fR allows using
+auxiliary reference spectra to provide a shift in the wavelength
+zero point to the primary dispersion functions. This includes
+spatial interpolation of simultaneous arc spectra in multifiber
+spectrographs.
+.IP \(bu
+The new task \fBscopy\fR copies subsets of apertures and does format
+conversions between the different spectrum formats.
+.IP \(bu
+The new task \fBsapertures\fR adds or modifies beam numbers and
+aperture titles for selected apertures based on an aperture
+identification file.
+.IP \(bu
+The new task \fBsfit\fR fits spectra and outputs the fits in various ways.
+This includes a new feature to replace deviant points by the fit.
+Apertures in multispec and echelle format are fit independently.
+.IP \(bu
+The task \fBcontinuum\fR now does independent fits for multispec and
+echelle format spectra.
+.IP \(bu
+\fBSplot\fR now allows deblending of any number of components and
+allows simultaneous fitting of a linear background.
+.IP \(bu
+The new task \fBfitprofs\fR fits 1D gaussian profiles to spectral lines or
+features in an image line or column. This is done noninteractively and
+driven by an input list of feature positions.
+.bp
+.LP
+.ce
+TABLE 2: Summary of Other New Features and Changes
+
+.IP \(bu
+The \fBidentify\fR database format uses aperture numbers rather than
+image sections for multispec format spectra.
+.IP \(bu
+The apertures in multispec format images need not be in the same order
+or even the same number of apertures as the reference image in
+\fBreidentify\fR or \fBmsdispcor\fR.
+.IP \(bu
+An automatic write parameter has been added to \fBidentify\fR.
+.IP \(bu
+The tasks \fBmsdispcor\fR and \fBspecplot\fR support the extra information
+in the third dimension of multispec format spectra which is optionally
+output by the \fBapextract\fR package.
+.IP \(bu
+\fBMsdispcor\fR and \fBspecplot\fR now include a logfile.
+.IP \(bu
+\fBSplot\fR selects spectra from multispec or echelle format by their
+aperture number. Also a new keystroke was added to select a new
+line/aperture without having to enter the image name again.
+.IP \(bu
+The task \fBspecplot\fR may select apertures from a multispec or
+echelle format spectrum.
+.IP \(bu
+The aperture identification in multispec format is used, if present,
+for labeling in \fBsplot\fR, \fBspecplot\fR, and \fBstandard\fR.
+.NH
+IDENTIFY and REIDENTIFY
+.PP
+These tasks have been modified for greater flexibility when dealing with
+two dimensional images and multispec format spectra in particular. These
+tasks were initially designed primarily to work on one dimensional images
+with provisions for two dimensional images through image sections and
+tracing to other parts of the image. Now these tasks treat such images
+as a unit.
+.PP
+The task \fBidentify\fR has three added keystrokes, 'j', 'k', and 'o'.
+These provide for moving between lines and columns of a two dimensional
+image and different apertures in a multispec format spectrum. When
+changing vectors the previously defined set of features and fit for that
+vector are recalled if available; otherwise the last set of features and
+fit are inherited. For efficiency and to minimize queries, the feature
+information from all the lines or apertures is not written to the
+database until you quit the image (or explicitly write it) rather than
+one at a time. A new parameter was also added, \fIautowrite\fR, which
+may be set to automatically write the results to the database rather
+than querying as is currently done.
+.PP
+The format of the database entries has also been slightly modified in
+the case of multispec format images. Instead of using image sections
+as part of the image name to define different vectors in the image
+(this is still the case for regular two dimensional images) the aperture
+number is recorded. This decouples the solutions for an aperture from
+the specific image line allowing reference images to have a different
+aperture order and additional or missing apertures.
+.PP
+While the changes to \fBidentify\fR are minor as far as usage, the task
+\fBreidentify\fR is quite different and is essentially a new program.
+Much of the complexity in this task relates to two dimensional images.
+Two additions that apply to both one and two dimensional images are the
+capability to add features from a coordinate list and to interactively
+review the reidentifications using \fBidentify\fR. The addition of new
+features may be useful in cases where the signal-to-noise varies or to
+compensate for lost features when tracing across an image. The review
+capability first prints the statistical results and then asks the user
+whether to examine the results interactively. This allows basing the
+decision to interactively examine the features and fit
+on this information. Ideally, only a few of the worst cases need be
+examined interactively.
+.PP
+There are two phases of reidentifications which apply to two
+dimensional and multispec format images. In the first phase, one needs
+to expand the identifications in the reference image from an initial,
+interactively defined line, column, or aperture to other parts of the
+reference image. A very important change is that there are now two
+ways to transfer the features list; by successive steps (tracing) using
+the previous results as a starting point (the only method provided in
+the previous version) or always starting from the original reference
+list. The first method is suitable for long slit spectra which have
+significant positional trends across the image. If a feature is lost,
+however, the feature remains missing (barring automatic addition as
+mentioned above) for all following lines or columns. The latter method
+is best if there are only small variations relative to the initial
+reference or in multispec format spectra where there is no inherent
+relation between apertures.
+.PP
+The second phase of reidentifications is between the reference image
+and other images. In the previous version the primary reference vector
+was transferred to the new image and then tracing would be applied
+again. This compounds the problem of losing features during tracing
+and prevents any possible reidentifications from multispec images in
+which the wavelength range may vary greatly. In the new version there
+is a direct reidentification from the same line, column, or aperture in
+the reference to that of the next image. In the case where different
+apertures may have significantly different wavelength coverage, as
+occurs with aperture masks, it will at least be possible to
+interactively identify features and coordinate functions for each
+aperture, using the scrolling capability in the new \fBidentify\fR, in
+just a single image and then correctly transfer the features to
+additional images.
+.PP
+For multispec format spectra the database information is organized by
+aperture number independent of image line number. Thus, it is possible
+to reidentify features in multispec format spectra even if the aperture
+order is different. If there is only a partial overlap in the aperture
+set only those apertures having an entry in the reference image will be
+done.
+.NH
+MSDISPCOR
+.PP
+The task \fBmsdispcor\fR dispersion corrects (rebins to a linear
+dispersion function) multispec format spectra. It was introduced in
+V2.8 of IRAF in the prototype \fBimred.msred\fR package. A number of
+changes have been made in this task as summarized here.
+.PP
+The most fundamental change is support for spatial interpolation of
+reference dispersion functions from a subset of apertures to other
+apertures originating at different positions in a two dimensional
+image. This is primarily intended for the case of comparison arc
+spectra which are interspersed with object spectra in multifiber
+spectrographs. It would also be useful in digitized photographic
+spectra having calibration spectra exposed next to the object
+spectrum. While usable directly, this feature is intended for the
+processing scripts in the new \fBimred\fR fiber instrument packages.
+.PP
+The interpolation is only for a wavelength zero point shift, as determined
+by \fBreidentify\fR with \fIrefit\fR=no. The full dispersion function
+is still provided by a calibration image covering all apertures. Thus,
+the simultaneous arc apertures are used to monitor shifts in the
+detector relative to the full calibration which includes the relative
+differences between each aperture and the arc monitoring apertures.
+.PP
+The multispec spectra containing the apertures used for the spatial
+wavelength zero point corrections are specified in the image header
+using the keywords REFSHFT1 and REFSHFT2. These are analogous to
+the REFSPEC keywords used to define the reference dispersion functions
+for the apertures.
+.PP
+As part of the general theme of multispec format support the
+multispec dispersion reference spectra may have additional spectra and
+need not be in the same order. However, all apertures in the
+images being dispersion corrected must have dispersion relations
+in the database. Multispec format spectra may include additional
+data in the 3rd image dimension produced by the new
+\fBapextract\fR package. \fBMsdispcor\fR rebins this information
+in the same way as the spectra, thus preserving the information
+but now with linear wavelength sampling.
+.PP
+A new parameter, \fIlogfile\fR, has been added to capture information
+about the dispersion correction process.
+.NH
+SCOPY and SAPERTURES
+.PP
+The task \fBscopy\fR is intended to bridge the gap between the various
+spectrum formats and provide a tool to flexibly manipulate multispec
+format spectra. It replaces the more primitive tasks
+\fBmsred.msselect\fR and \fBechelle.ecselect\fR. Basically, this task
+copies all or selected spectra from one format to a new image or images
+of the same or different format. The typical uses are:
+
+.IP \(bu
+Extract selected spectra from a multispec format image.
+.IP \(bu
+Convert the voluminous onedspec format, from reductions done before
+the multispec format was introduced, into the more compact
+multispec format.
+.IP \(bu
+Splice selected apertures from different multispec images into a new
+multispec image.
+.IP \(bu
+Provide a quick way to convert lines or columns from two dimensional
+long slit images into one dimensional spectra. This replaces
+the task \fBproto.toonedspec\fR.
+.PP
+Because \fBscopy\fR can easily change the number and order of apertures
+in the multispec image format it is important that the other tasks which
+use the multispec format have been modified to be insensitive to which
+line a spectrum is in and generally key off the aperture number.
+.PP
+The task \fBsapertures\fR is a simple way to set the aperture identifications,
+APID keyword, and beam number, second field of APNUM keyword, based on
+the aperture number and a simple text file. The text file contains lines
+with aperture number, beam number, and (optional) title. This file is
+used by the \fBapextract\fR package as well. Its most likely use is
+to correct image titles that are wrong because they were inherited
+from an aperture reference image during extraction.
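+.PP
+A hypothetical aperture identification file of this form might look like
+the following; the aperture numbers, beam numbers, and titles are
+invented and only illustrate the layout described above.
+.DS
+1  0  sky
+2  1  NGC 4535 nucleus
+3  1  NGC 4535 offset 10E
+.DE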
+.NH
+SFIT, CONTINUUM, and ECCONTINUUM
+.PP
+The original version of \fBcontinuum\fR was a simple script based on
+the task \fBfit1d\fR. The problem is that \fBfit1d\fR is intended to
+process all the lines or columns in a two dimensional image
+noninteractively. To do this it applies the same fitting parameters to
+every line or column. The interactive step in this task is simply to
+adjust fitting parameters. For spectra, particularly multispec and
+echelle format spectra, one often needs to fit each spectrum
+interactively and independently. When this problem was encountered for
+the \fBechelle\fR package Rob Seaman wrote a nice program,
+\fBeccontinuum\fR, which allows fitting a set of orders and keeps track
+of which orders have been fit.
+.PP
+The general feature of the continuum fitting tasks is that they fit
+spectra using the \fBicfit\fR interactive function fitting interface.
+The results of the fit may be output as the fit itself, the difference
+or residuals, the ratio, or the input data with rejected points replaced
+by the fitted values. The last feature is new and provides a useful
+spectrum cleaning option. The general equivalent to \fBfit1d\fR is
+the new task \fBsfit\fR which provides the same independent fitting and
+image line selection capabilities as \fBeccontinuum\fR. Note this task
+is line oriented and does not select by aperture or order number. The
+revised version of \fBcontinuum\fR is now based on \fBsfit\fR and
+provides the independent continuum fitting capability for onedspec and
+multispec format spectra that \fBeccontinuum\fR provides for echelle
+format spectra. Technically what has been done is that \fBsfit\fR,
+\fBcontinuum\fR, and \fBeccontinuum\fR are the same task; essentially
+the task written by Seaman for echelle data. They differ only in
+their default parameters, with the continuum fitting tasks defaulting
+to continuum normalization (ratio) output and
+iterative rejection values that exclude spectral lines.
+.NH
+SPLOT, FITPROFS, and SPECPLOT
+.PP
+\fBSplot\fR has been modified to better support multispec and echelle
+format images. The line selection for multispec and echelle format
+spectra is now in terms of the aperture number rather than the image
+line. The aperture title is used in place of the image title
+if present.
+.PP
+The restriction to a maximum of four lines in the gaussian fitting and
+deblending option of \fBsplot\fR has been lifted. Any number of
+lines may be fit simultaneously, though execution time will become
+long for a large number. In addition the fitting allows determining
+a simultaneous linear background as well as using the cursor defined
+points. The positions of the lines to be fit may be marked with
+the cursor, typed in, or read from a file. The last choice is a new
+feature.
+.PP
+In the past many people have used \fBsplot\fR for bulk, noninteractive
+gaussian fitting by going through the trouble of redirecting the cursor
+input, ukey input, text output, and graphics output. The main reason
+this has been done is the lack of a one dimensional gaussian fitting
+task. The task \fBfitprofs\fR has been added to provide simultaneous
+gaussian fitting. This task takes a list of positions and optional
+sigmas and fits gaussians to a list of images or spectra. The lines,
+columns, or apertures may be selected. In addition a linear
+background may be specified or included in the fitting. The output
+consists of any combination of text similar to the \fBsplot\fR
+logfile, plots showing the data and fit, and image output of the fit or
+the difference. This task is noninteractive; the interactive version
+is the deblend command of \fBsplot\fR. The multiparameter, nonlinear
+fitting software is the same as used in \fBsplot\fR.
+.PP
+\fBFitprofs\fR complements the task \fBstsdas.fitting.ngaussfit\fR from
+the \fBstsdas\fR package (available from the Space Telescope Science
+Institute). This task is similar in character to \fBfit1d\fR and has
+an interactive one dimensional nonlinear function fitting interface
+similar to \fBicfit\fR.
+.PP
+The task \fBspecplot\fR has a new parameter to select apertures to
+plot. Previously there was no way to limit the apertures plotted other
+than with image sections. All associated lines of a multispec
+spectrum (those in the third dimension) are also plotted for the
+selected apertures. This extra information is a new option of the
+\fBapextract\fR package.
diff --git a/noao/onedspec/doc/sys/revisions.v31.ms b/noao/onedspec/doc/sys/revisions.v31.ms
new file mode 100644
index 00000000..f9d6c24f
--- /dev/null
+++ b/noao/onedspec/doc/sys/revisions.v31.ms
@@ -0,0 +1,329 @@
+.nr PS 10
+.nr VS 12
+.RP
+.ND
+.TL
+NOAO Spectroscopy Packages Revisions: IRAF Version 2.10.3
+.AU
+Francisco Valdes
+.AI
+IRAF Group - Central Computer Services
+.K2
+P.O. Box 26732, Tucson, Arizona 85726
+March 1993
+.AB
+This paper summarizes the changes in Version 3.1 of the IRAF/NOAO
+spectroscopy packages, \fBonedspec\fR, \fBlongslit\fR, \fBapextract\fR, and
+those in \fBimred\fR. These changes are part of IRAF Version 2.10.3. A
+list of the revisions is:
+
+.in +2
+.nf
+\(bu A simplified \fIequispec\fR image header format
+\(bu \fIEquispec\fR format allows a larger number of apertures in an image
+\(bu Extensions to allow tasks to work on 3D images
+\(bu New task \fBspecshift\fR for applying a zeropoint dispersion shift
+\(bu Revised \fBsapertures\fR to edit spectrum coordinate parameters
+\(bu Revised \fBdispcor\fR to easily apply multiple dispersion corrections
+\(bu Revised \fBscombine\fR weighting and scaling options
+\(bu Revised \fBscopy\fR to better handle bands in 3D images
+\(bu Revised \fBcalibrate, deredden, dopcor\fR, and \fBspecshift\fR to work on 2D/3D images
+\(bu Extended \fBidentify\fR and \fBreidentify\fR to work on 3D images
+\(bu New color graphics capabilities in \fBsplot, specplot, sensfunc\fR, and \fBidentify\fR
+\(bu All spectral tasks use a common package dispersion axis parameter
+\(bu A more complete suite of tasks in the \fBlongslit\fR package
+\(bu The \fBimred\fR reductions scripts can now be used with any image format
+\(bu A \fIdatamax\fR parameter in the \fBimred\fR reduction scripts for better cleaning
+\(bu Revised the \fBimred\fR reduction scripts to abort on non-CCD processed data
+\(bu Revised fiber reduction tasks to include a scattered light subtraction option
+\(bu Revised fiber reduction tasks to allow as many sky apertures as desired
+\(bu Revised \fBdoslit\fR to take the reference arc aperture from the first object
+\(bu Bug fixes
+.fi
+.in -2
+.AE
+.NH
+Spectral Image Formats and Dispersion World Coordinate Systems
+.LP
+As with the original release of V2.10 IRAF, the primary changes in the
+NOAO spectroscopy
+software in V2.10.3 are in the area of spectral image formats and dispersion
+world coordinate systems (WCS). A great deal was learned from experience
+with the first release and the changes in this release attempt to
+address problems encountered by users. The main revisions are:
+
+.in +2
+.nf
+\(bu A new WCS format called \fIequispec\fR.
+\(bu Extensions to allow use of 3D images with arbitrary dispersion axis.
+\(bu Elimination of limits on the number of apertures in an image under certain conditions.
+\(bu Improved tools for manipulating the spectral coordinate systems.
+\(bu Bug fixes and solutions to problems found in the previous release.
+.fi
+.in -2
+
+In the previous version all images with multiple spectra used a coordinate
+system called \fImultispec\fR. This type of WCS is complex and difficult
+to manipulate by image header editing tools. Only the case of a single
+linearized spectrum per image, sometimes called \fIonedspec\fR format,
+provided a simple header format. However, the \fBapextract\fR package
+used the \fImultispec\fR format even in the case of extracting a single
+spectrum, so getting to the simple format required use of \fBscopy\fR.
+.LP
+In many cases all the spectra in a multispectrum image have the same linear
+dispersion function. The new \fIequispec\fR format uses a simple linear
+coordinate system for the entire image. This format is produced by the
+spectral software whenever possible. In addition to being simple and
+compatible with the standard FITS coordinate representation, the
+\fIequispec\fR format also avoids a limitation of the \fImultispec\fR WCS
+on the number of spectra in a single image. This has specific application
+to multifiber spectrographs with more than 250 fibers.
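+.LP
+For a spectrum in \fIequispec\fR (or \fIonedspec\fR) format the wavelength
+of a pixel follows from the standard linear FITS coordinate keywords, and
+the same relation applies to every spectrum in the image. A minimal
+sketch in Python, with invented keyword values:
+.DS
+# wavelength = CRVAL1 + (pixel - CRPIX1) * CDELT1, pixels counted from 1.
+header = {"CRVAL1": 4000.0, "CRPIX1": 1.0, "CDELT1": 1.25}
+
+def wavelength(pixel, hdr=header):
+    return hdr["CRVAL1"] + (pixel - hdr["CRPIX1"]) * hdr["CDELT1"]
+
+print(wavelength(1), wavelength(512))    # 4000.0 and 4638.75
+.DE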
+.LP
+For multiple spectrum data in which the spectra have differing
+dispersion functions (such as echelle orders) or when the spectra are
+not linearized but use nonlinear dispersion functions, the \fImultispec\fR
+format is still used. It is the most general WCS representation.
+The difficulties with modifying this coordinate system (\fBhedit\fR
+cannot be used) are addressed by enhancing the \fBsapertures\fR task
+and by the new task \fBspecshift\fR, which covers the common case of
+modifying the dispersion zeropoint.
+.LP
+A feature of the spectral tasks which operate on one dimensional spectra
+is that they can operate on two dimensional long slit spectra by
+specifying a dispersion axis and a summing factor. This feature has
+been extended to three dimensional spectra such as occur with
+Fabry-Perot and multichannel radio synthesis instruments. The
+dispersion axis may be along any axis as specified by the DISPAXIS
+image header keyword or by the \fIdispaxis\fR package parameter. The
+summing factor parameter \fInsum\fR is now a string which may have
+one or two values to allow separate summing factors along two spatial
+axes. Also, this feature is now supported by some additional tasks which
+previously lacked it: \fBcalibrate\fR, \fBderedden\fR, \fBdopcor\fR, and \fBspecshift\fR.
+.LP
+The gory details of the spectral image formats and world coordinate
+systems are laid out in the new help topic \fIspecwcs\fR (also
+available in a postscript version in the IRAF network documentation
+archive as iraf/docs/specwcs.ps.Z).
+.LP
+Some of the bug fixes and solutions to problems found in the previous
+release concerning the image formats and WCS include a problem with the WCS
+dimensionality (WCSDIM keyword) in 3D images and problems reading various
+imported nonstandard formats. It is hoped that all such formats, including
+previous IRAF spectral formats, will now be handled by the software in the
+latest release.
+.NH
+DISPCOR
+.LP
+The previous version of \fBdispcor\fR, the dispersion correction task, was
+designed to prevent accidental repeated application; it is incorrect to
+apply the dispersion function from the original data to a linearized
+spectrum. However, it is valid to determine a new dispersion solution, say
+from a dispersion calibrated arc, and apply that as a second correction.
+\fBDispcor\fR would not use a new dispersion function, as specified by the
+REFSPEC keywords, if the dispersion calibration flag was set. In order to
+override this the user needed to manually change this flag to indicate the
+spectrum was uncorrected. The problem was that it was difficult to do this
+with \fImultispec\fR format spectra because the flag is part of the complex
+WCS attribute strings.
+.LP
+\fBDispcor\fR was revised to use a different logic to prevent accidental
+recalibration using an unintended dispersion function. The logic is as
+follows. Previously \fBdispcor\fR would simply change the dispersion
+calibration flag after correcting a spectrum while leaving the dispersion
+function reference spectrum keywords alone as a record. The revised
+\fBdispcor\fR keeps this useful record but moves it to a new keyword
+DCLOGn (where n is a sequential integer). Because the REFSPEC keyword is
+removed after each application of \fBdispcor\fR it now takes an explicit
+act by the user to assign another dispersion function to a spectrum and so
+it is not possible to accidentally reapply the same dispersion function
+twice. With this version additional dispersion functions are applied
+simply by adding new REFSPEC keywords. If they are absent the task resamples
+the spectra based on the current dispersion relation as was the case
+before.
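+.LP
+The revised bookkeeping can be summarized in pseudocode. The Python-style
+sketch below only paraphrases the behavior described above; the helper
+functions and keyword handling are invented stand-ins, not the task source.
+.DS
+def apply_reference_dispersion(header):
+    """Stand-in for applying the assigned dispersion function."""
+    pass
+
+def resample_current_dispersion(header):
+    """Stand-in for resampling on the dispersion already in the WCS."""
+    pass
+
+def dispcor_logic(header):
+    if "REFSPEC1" in header:
+        # A dispersion function has been explicitly assigned: apply it,
+        # then move the record to the next free DCLOGn keyword so the
+        # same function cannot be applied again by accident.
+        apply_reference_dispersion(header)
+        n = 1
+        while "DCLOG%d" % n in header:
+            n += 1
+        header["DCLOG%d" % n] = header.pop("REFSPEC1")
+    else:
+        # No reference assigned: resample on the current dispersion.
+        resample_current_dispersion(header)
+
+hdr = {"REFSPEC1": "arc015"}       # invented header
+dispcor_logic(hdr)
+print(hdr)                         # {'DCLOG1': 'arc015'}
+.DE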
+.LP
+The new version can also tell whether the data was calibrated by the
+previous version. In this case the check on the dispersion calibration
+flag is still used so that during the transition users are still protected
+against accidentally applying the same reference dispersion function
+twice. The new task \fBsapertures\fR can now be used to change the
+dispersion calibration flag to override this checking more easily than was
+the case previously.
+.NH
+New Tasks
+.LP
+In this release there is only one completely new task and one task which
+was significantly redesigned. The new task is \fBspecshift\fR. It is
+relatively simple: it adds a zero point shift to the dispersion coordinates
+of spectra. This was the most common request for manipulating the spectral
+world coordinate system. In this regard there was a common confusion about
+the distinction between shifting the coordinate system and shifting the
+pixel data. Generally what people want is to apply a shift such that
+features in the spectrum move to the desired wavelength. One thought is to
+apply the tasks \fBimshift\fR or \fBshiftlines\fR. The surprise is that
+this does not work. The pixels are actually shifted in the image array,
+but these tasks also apply the same shift to the coordinate system so that
+features in the spectrum remain at the same wavelength. What is really
+required is to leave the pixel data alone and shift only the coordinate
+system. That is what \fBspecshift\fR does.
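+.LP
+The distinction can be made concrete with a small numpy sketch; the arrays
+and the shift are invented and this only illustrates the two behaviors,
+not the tasks themselves.
+.DS
+import numpy as np
+
+# Illustrative linear dispersion: wavelength = crval + (pixel - 1) * cdelt
+flux = np.array([1.0, 1.0, 1.0, 5.0, 1.0, 1.0, 1.0])  # feature at pixel 4
+crval, cdelt = 5000.0, 2.0
+shift = 4.0                        # desired wavelength shift (2 pixels)
+
+# specshift-like: only the coordinate zero point changes, so the feature,
+# still at pixel 4, now falls at a wavelength larger by the shift.
+crval_specshift = crval + shift
+
+# imshift-like: the pixels move (wrap-around ignored for brevity) and the
+# coordinate system moves with them, so the feature stays at the same
+# wavelength, which is usually not what was wanted.
+flux_imshift = np.roll(flux, int(shift / cdelt))
+crval_imshift = crval + shift
+.DE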
+.LP
+While one hopefully does not need to directly manipulate the image header
+keywords describing the coordinate system or other aspects of the spectra,
+instead using such tasks as \fBspecshift\fR, there always seem to be cases
+where this is needed or desired. In the V2.10 release of the spectral
+software this was difficult because the general \fImultispec\fR format was
+the norm and it has information encoded in the complex WCS attribute
+strings. As mentioned previously several changes have been made to reduce the
+complexity. Now \fIequispec\fR format will generally be the rule and this
+format has keywords which are more easily manipulated with \fBhedit\fR and
+\fBwcsedit\fR. However, the task \fBsapertures\fR was revised to provide
+an editing capability specifically for spectral images, in either
+\fImultispec\fR or \fIequispec\fR format, with options to change various
+parameters globally or aperture-by-aperture.
+.NH
+New Features
+.LP
+There were a number of miscellaneous minor revisions and bug fixes. One of
+the major new capabilities available with V2.10.3 is support for color
+graphics if the graphics device supports it. \fBXgterm\fR supports color
+on X-window systems with color monitors. Several of the spectral tasks
+were modified to use different colors for marks and overplots. These tasks
+include \fBsplot\fR, \fBspecplot\fR, \fBidentify\fR, and \fBsensfunc\fR.
+In the case of \fBsensfunc\fR the user controls the various color
+assignments with a task parameter or \fBgtools\fR colon command while in
+other cases the next available color is used.
+.LP
+There were several changes to \fBscombine\fR equivalent to those in
+\fBimcombine\fR. The weighting, when selected, was changed from the square
+root of the exposure time or spectrum statistics to the value with no
+square root. This corresponds to the more commonly used variance
+weighting. Other options were added to specify the scaling and weighting
+factors. These allow specifying an image header keyword or a file
+containing the scale or weighting factors. A new parameter, "nkeep", has
+been added to allow controlling the maximum number of pixels rejected by the
+clipping algorithms. Previously it was possible to reject all pixels even
+when some of the data was good though with a higher scatter than estimated;
+i.e. all pixels might be greater than 3 sigma from the mean without being
+cosmic rays or other bad values. Finally a parameter \fIsnoise\fR was
+added to include a sensitivity or scale noise component to a Poisson noise
+model.
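+.LP
+A schematic of variance weighting with a scale noise term is given below.
+The exact noise model and weighting used by the tasks are documented in
+their help pages; the form here is a common CCD noise model shown only to
+illustrate what the \fIsnoise\fR term adds, and all numbers are invented.
+.DS
+import numpy as np
+
+def pixel_sigma(counts, rdnoise=5.0, gain=2.0, snoise=0.02):
+    """Illustrative CCD noise model: read noise, Poisson, and scale
+    (sensitivity) noise; counts in ADU, rdnoise in electrons."""
+    var = ((rdnoise / gain) ** 2 + np.maximum(counts, 0.0) / gain
+           + (snoise * counts) ** 2)
+    return np.sqrt(var)
+
+# Inverse-variance weighted combination of three spectra that have
+# already been resampled to a common dispersion.
+spectra = np.array([[100.0, 102.0, 98.0],
+                    [ 95.0, 101.0, 99.0],
+                    [105.0, 100.0, 97.0]])
+weights = 1.0 / pixel_sigma(spectra) ** 2
+combined = (weights * spectra).sum(axis=0) / weights.sum(axis=0)
+print(combined)
+.DE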
+.LP
+In \fBsplot\fR the 'p' and 'u' keys which assign and modify the dispersion
+coordinates now include options for applying a zero point shift or a
+doppler shift in addition to defining an absolute wavelength for a feature
+or starting and ending wavelengths. There are also bug fixes to the
+equivalent width calculations (which did not handle flux calibrated data)
+and to the scroll keys '(' and ')'.
+.LP
+There were several changes to make it easier to deal with three
+dimensional \fImultispec\fR and \fIequispec\fR data; that is the additional
+data from the "extras" option in the \fBapextract\fR tasks. One was to fix
+problems associated with an incorrect WCSDIM keyword. This allows use of
+image sections or \fBimcopy\fR for extracting specific bands and
+apertures. Another was to add a "bands" parameter in \fBscopy/sarith\fR to
+allow selection of bands. Also the "onedspec" output format in \fBscopy\fR
+copies any selected bands to separate one dimensional images.
+.LP
+As mentioned earlier, many of the \fBonedspec\fR tasks have been extended
+to work on 2D and 3D spatial spectra. Some tasks which now have this
+capability in this version and not the previous one are \fBcalibrate\fR and
+\fBdopcor\fR. \fBIdentify\fR and \fBreidentify\fR were extended to operate
+on 3D images. This involved extending the syntax for the section parameter
+selecting the image vector and the parameter specifying any summing
+across the vector direction.
+.NH
+LONGSLIT
+.LP
+With the applicability of more \fBonedspec\fR tasks to long slit data
+the \fBlongslit\fR package was modified to add many new tasks.
+This required adding additional package parameters. One new task
+to point out is \fBcalibrate\fR. This task is now the preferred one
+to use for extinction and flux calibration of long slit spectra
+rather than the obsolete \fBextinction\fR and \fBfluxcalib\fR.
+The obsolete tasks are still present in this release.
+.NH
+APEXTRACT
+.LP
+The \fBapextract\fR package had a few, mostly transparent, changes. In
+the previous version the output image header format was always \fImultispec\fR
+even when there was a single spectrum, either because only one aperture
+was defined or because the output format parameter was "onedspec".
+In this release the default WCS format is the simpler \fIequispec\fR.
+.LP
+In the \fBonedspec\fR and \fBimred\fR spectral reduction packages there is
+a dispersion axis package parameter which is used to define the dispersion
+axis for images without a DISPAXIS keyword. This applies to all tasks.
+However, the \fBapextract\fR tasks had the dispersion axis defined by their
+own task parameters resulting in some confusion. To make things consistent
+the dispersion axis parameter in \fBapextract\fR has been moved from the
+tasks to a package parameter. Now in the \fBimred\fR spectral reduction
+packages, there is just one dispaxis parameter in the package parameters
+which applies to all tasks in those packages, both those from
+\fBonedspec\fR and those from \fBapextract\fR.
+.LP
+Some hidden algorithm parameters were adjusted so that the cleaning and
+variance weighting options perform better in some problem cases without
+requiring detailed knowledge of which parameters to tweak.
+.NH
+IMRED Spectroscopic Reduction Tasks
+.LP
+The various spectroscopic reduction tasks, those beginning with "do", have
+had some minor revisions and enhancements in addition to those which apply
+to the individual tasks which make up these scripts. In the latter class
+is the output WCS format is \fBequispec\fR except for the echelle tasks and
+when dispersion linearization is not done. Related to this is that the
+multifiber tasks can operate on data with more than 250 fibers which was a
+limitation of the \fBmultispec\fR format.
+.LP
+In the previous version only the OIF format images were allowed (the ".imh"
+extensions). This has been generalized to allow selecting the image format
+by setting the environment parameter \fIimtype\fR. Only images with the
+specified extension will be processed and created.
+.LP
+The dispersion axis parameter in the reduction tasks and in the other tasks
+in the \fBimred\fR spectroscopy packages, such as the \fBapextract\fR
+tasks, is now solely a package parameter.
+.LP
+All the scripts now check the input spectra for the presence of the CCDPROC
+keyword and abort if it is not found. This keyword indicates that the data
+have been processed for basic CCD calibrations, though it does not check
+the operations themselves. For data reduced using \fBccdproc\fR this
+keyword will be present. If these tasks are used on data not processed by
+\fBccdproc\fR then it is a simple matter to add this keyword with
+\fBhedit\fR. Obviously, the purpose of this change is to avoid
+inadvertently operating on raw data.
+.LP
+All the "do" tasks now have a parameter "datamax". This minimizes the
+effects of very strong cosmic rays during the extraction of object spectra;
+it does not apply to flat field or arc spectra. When there is a very large
+difference between data pixel values and cosmic ray pixel values,
+especially true for very weak spectra, the cosmic ray cleaning operation
+does not always work well. If it is possible to specify a threshold value
+between the maximum real data value and cosmic rays then the cosmic ray
+cleaning can be significantly improved by immediately rejecting those
+pixels above the threshold. Of course the user must be careful that real
+data does not exceed this value since such data will be excluded.
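+.LP
+The effect of the threshold is simple to picture. The following toy numpy
+sketch, with invented numbers, flags pixels above a data maximum so an
+obvious cosmic ray cannot bias the extraction or the cleaning.
+.DS
+import numpy as np
+
+datamax = 30000.0
+column = np.array([1200.0, 1350.0, 64000.0, 1280.0, 1190.0])  # one CR hit
+
+good = column <= datamax             # pixels at or below the threshold
+clean_sum = column[good].sum()       # simple summed extraction over them
+print(good, clean_sum)
+.DE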
+.LP
+The fiber reduction tasks, \fBdoargus, dohydra, dofibers, dofoe\fR, and
+\fBdo3fiber\fR have a new processing option for subtracting scattered
+light. When there is significant scattered light this is particularly
+useful for producing uniform sky spectra for sky subtraction, since the
+fiber throughput calibration does not generally correct for it.
+.LP
+The fiber reduction tasks also had a limit on the number of sky fibers
+which could be used with the interactive sky editing. This limit has
+been eliminated so that it is possible, for example, to have one object
+fiber and 99 sky fibers.
+.LP
+The slit reduction task \fBdoslit\fR previously required that the spectrum
+for the reference arc cover the middle of the input data images. There
+were cases of instrument configurations where this was not true requiring
+additional manipulation to use this task. This requirement has been
+eliminated. Instead when the reference arc needs to be extracted it uses
+the aperture definition from one of the input object spectra since
+definition of the object apertures occurs prior to setting up the
+dispersion calibration.
+.LP
+In addition the tasks \fBdoslit\fR and \fBdoecslit\fR had a bug in which
+the order of the arcs specified by the user was ignored and alphabetical
+order was used instead. This has been fixed so that the first arc
+specified by the user is the reference arc.
diff --git a/noao/onedspec/doc/sys/revisions.v31.ms.bak b/noao/onedspec/doc/sys/revisions.v31.ms.bak
new file mode 100644
index 00000000..1c7c3b31
--- /dev/null
+++ b/noao/onedspec/doc/sys/revisions.v31.ms.bak
@@ -0,0 +1,307 @@
+.nr PS 9
+.nr VS 11
+.RP
+.ND
+.TL
+NOAO Spectroscopy Packages Revision Summary: IRAF Version 2.10.3
+.AU
+Francisco Valdes
+.AI
+IRAF Group - Central Computer Services
+.K2
+P.O. Box 26732, Tucson, Arizona 85726
+March 1993
+.AB
+This paper summarizes the changes in Version 3.1 of the IRAF/NOAO
+spectroscopy packages, \fBonedspec\fR, \fBlongslit\fR, \fBapextract\fR,
+and those in \fBimred\fR. These changes are part of
+IRAF Version 2.10.3. A list of the revisions is:
+
+.nf
+\(bu A simplified \fIequispec\fR image header format
+\(bu \fIEquispec\fR format allows a larger number of apertures in an image
+\(bu Extensions to allow tasks to work on 3D images
+\(bu New task \fBspecshift\fR for applying a zeropoint dispersion shift
+\(bu Revised \fBsapertures\fR to edit spectrum coordinate parameters
+\(bu Revised \fBdispcor\fR to easily apply multiple dispersion corrections
+\(bu Revised \fBscombine\fR weighting and scaling options
+\(bu Revised \fBscopy\fR to better handle bands in 3D images
+\(bu Revised \fBcalibrate, dopcor\fR, and \fBspecshift\fR to work on 2D/3D images
+\(bu New color graphics capabilities in \fBsplot, specplot, sensfunc\fR, and \fBidentify\fR
+\(bu All spectral tasks use a common package dispersion axis parameter
+\(bu A more complete suite of tasks in the \fBlongslit\fR package
+\(bu A \fIdatamax\fR parameter in the \fBimred\fR reduction scripts for better cleaning
+\(bu Revised the \fBimred\fR reduction scripts to abort on non-CCD processed data
+\(bu Revised fiber reduction tasks to include a scattered light subtraction option
+\(bu Revised \fBdoslit\fR to take the reference arc aperture from the first object
+\(bu Bug fixes
+.fi
+.AE
+.NH
+Spectral Image Formats and Dispersion World Coordinate Systems
+.LP
+As with the original release of V2.10 IRAF, the primary changes in the
+NOAO spectroscopy
+software in V2.10.3 are in the area of spectral image formats and dispersion
+world coordinate systems (WCS). A great deal was learned from experience
+with the first release and the changes in this release attempt to
+address problems encountered by users. The main revisions are:
+
+.in +4
+.nf
+\(bu A new WCS format called \fIequispec\fR.
+\(bu Extensions to allow use of 3D images with arbitrary dispersion axis.
+\(bu Elimination of limits on the number of apertures in an image under certain conditions.
+\(bu Improved tools for manipulating the spectral coordinate systems.
+\(bu Bug fixes and solutions to problems found in the previous release.
+.fi
+.in - 4
+
+In the previous version all images with multiple spectra used a coordinate
+system called \fImultispec\fR. This type of WCS is complex and difficult
+to manipulate with image header editing tools. Only the case of a single
+linearized spectrum per image, sometimes called \fIonedspec\fR format,
+provided a simple header format. However, the \fBapextract\fR package
+used the \fImultispec\fR format even in the case of extracting a single
+spectrum, so obtaining the simple format required use of \fBscopy\fR.
+.LP
+In many cases all the spectra in a multispectrum image have the same linear
+dispersion function. The new \fIequispec\fR format uses a simple linear
+coordinate system for the entire image. This format is produced by the
+spectral software whenever possible. In addition to being simple and
+compatible with the standard FITS coordinate representation, the
+\fIequispec\fR format also avoids a limitation of the \fImultispec\fR WCS
+on the number of spectra in a single image. This has specific application
+to multifiber spectrographs with more than 250 fibers.
+.LP
+For multiple spectrum data in which the spectra have differing
+dispersion functions (such as echelle orders) or when the spectra are
+not linearized but use nonlinear dispersion functions, the \fImultispec\fR
+format is still used. It is the most general WCS representation.
+The difficulties with modifying this coordinate system, \fBhedit\fR
+cannot be used, are addressed by enhancing the \fBsapertures\fR task
+and by the new task \fBspecshift\fR which covers the common case of
+modifying the dispersion zeropoint.
+.LP
+A feature of the spectral tasks which operate on one dimensional spectra
+is that they can operate on two dimensional long slit spectra by
+specifying a dispersion axis and a summing factor. This feature has
+been extended to three dimensional spectra such as occur with
+Fabry-Perot and multichannel radio synthesis instruments. The
+dispersion axis may be along any axis as specified by the DISPAXIS
+image header keyword or by the \fIdispaxis\fR package parameter. The
+summing factor parameter \fInsum\fR is now a string which may have
+one or two values to allow separate summing factors along two spatial
+axes. Also, the tasks \fBcalibrate\fR, \fBdopcor\fR, and \fBspecshift\fR,
+which previously did not support this feature, now do.
+.LP
+The gory details of the spectral image formats and world coordinate
+systems are laid out in the new help topic \fIspecwcs\fR (also
+available in a postscript version in the IRAF network documentation
+archive as iraf/docs/specwcs.ps.Z).
+.LP
+Bug fixes and solutions to problems found in the previous release
+concerning the image formats and WCS include a problem with the WCS
+dimensionality (WCSDIM keyword) in 3D images and problems reading various
+imported nonstandard formats. It is hoped that all such formats, including
+previous IRAF spectral formats, are now accepted by the software in the
+latest release.
+.NH
+DISPCOR
+.LP
+The previous version of \fBdispcor\fR, the dispersion correction task, was
+designed to prevent accidental repeated application; it is incorrect to
+apply the dispersion function from the original data to a linearized
+spectrum. However, it is valid to determine a new dispersion solution, say
+from a dispersion calibrated arc, and apply that as a second correction.
+\fBDispcor\fR would not use a new dispersion function, as specified by the
+REFSPEC keywords, if the dispersion calibration flag was set. In order to
+override this the user needed to manually change this flag to indicate the
+spectrum was uncorrected. The problem was that it was difficult to do this
+with \fImultispec\fR format spectra because the flag is part of the complex
+WCS attribute strings.
+.LP
+\fBDispcor\fR was revised to use a different logic to prevent accidental
+recalibration using an unintended dispersion function. The logic is as
+follows. Previously \fBdispcor\fR would simply change the dispersion
+calibration flag after correcting a spectrum while leaving the dispersion
+function reference spectrum keywords alone as a record. The revised
+\fBdispcor\fR keeps this useful record but moves it to a new keyword
+DCLOGn (where n is a sequential integer). Because the REFSPEC keyword is
+removed after each application of \fBdispcor\fR it now takes an explicit
+act by the user to assign another dispersion function to a spectrum and so
+it is not possible to accidentally reapply the same dispersion function
+twice. Thus this version will apply additional dispersion functions by
+simply adding new REFSPEC keywords. If they are absent the task resamples
+the spectra based on the current dispersion relation as was the case
+before.
+.LP
+The new version can also tell whether the data was calibrated by the
+previous version. In this case the check on the dispersion calibration
+flag is still used so that during the transition users are still protected
+against accidentally applying the same reference dispersion function
+twice. The revised task \fBsapertures\fR can now be used to change the
+dispersion calibration flag to override this checking more easily than was
+the case previously.
+.NH
+New Tasks
+.LP
+In this release there is only one completely new task and one task which
+was significantly redesigned. The new task is \fBspecshift\fR. It is
+relatively simple: it adds a zero point shift to the dispersion coordinates
+of spectra. This was the most common request for manipulating the spectral
+world coordinate system. In this regard there was a common confusion about
+the distinction between shifting the coordinate system and shifting the
+pixel data. Generally what people want is to apply a shift such that
+features in the spectrum move to the desired wavelength. One thought is to
+apply the tasks \fBimshift\fR or \fBshiftlines\fR. The surprise is that
+this does not work. The pixels are actually shifted in the image array,
+but these tasks also apply the same shift to the coordinate system so that
+features in the spectrum remain at the same wavelength. What is really
+required is to leave the pixel data alone and shift only the coordinate
+system. That is what \fBspecshift\fR does.
+.LP
+While one hopefully does not need to directly manipulate the image header
+keywords describing the coordinate system or other aspects of the spectra,
+instead using such tasks as \fBspecshift\fR, there always seem to be cases
+where this is needed or desired. In the V2.10 release of the spectral
+software this was difficult because the general \fImultispec\fR format was
+the norm and it has information encoded in the complex WCS attribute
+strings. As mentioned previously several changes have been made to reduce the
+complexity. Now \fIequispec\fR format will generally be the rule and this
+format has keywords which are more easily manipulated with \fBhedit\fR and
+\fBwcsedit\fR. However, the task \fBsapertures\fR was revised to provide
+an editing capability specifically for spectral images, in either
+\fImultispec\fR or \fIequispec\fR format, with options to change various
+parameters globally or aperture-by-aperture.
+.NH
+New Features
+.LP
+There were a number of miscellaneous minor revisions and bug fixes. One of
+the major new capabilities available with V2.10.3 is support for color
+graphics if the graphics device supports it. \fBXgterm\fR supports color
+on X-window systems with color monitors. Several of the spectral tasks
+were modified to use different colors for marks and overplots. These tasks
+include \fBsplot\fR, \fBspecplot\fR, \fBidentify\fR, and \fBsensfunc\fR.
+In the case of \fBsensfunc\fR the user controls the various color
+assignments with a task parameter or \fBgtools\fR colon command while in
+other cases the next available color is used.
+.LP
+There were several changes to \fBscombine\fR equivalent to those in
+\fBimcombine\fR. The weighting, when selected, was changed from the square
+root of the exposure time or spectrum statistics to the value with no
+square root. This corresponds to the more commonly used variance
+weighting. Other options were added to specify the scaling and weighting
+factors. These allow specifying an image header keyword or a file
+containing the scale or weighting factors. A new parameter, "nkeep", has
+been added to allow controlling the maximum number of pixels rejected by the
+clipping algorithms. Previously it was possible to reject all pixels even
+when some of the data was good though with a higher scatter than estimated;
+i.e. all pixels might be greater than 3 sigma from the mean without being
+cosmic rays or other bad values. Finally a parameter \fIsnoise\fR was
+added to include a sensitivity or scale noise component to a Poisson noise
+model.
+.LP
+In \fBsplot\fR the 'p' and 'u' keys which assign and modify the dispersion
+coordinates now include options for applying a zero point shift or a
+doppler shift in addition to defining an absolute wavelength for a feature
+or starting and ending wavelengths. There are also bug fixes to the
+equivalent width calculation, which did not handle flux calibrated data,
+and to the scroll keys '(' and ')'.
+.LP
+There were several changes to make it easier to deal with three
+dimensional \fImultispec\fR and \fIequispec\fR data; that is the additional
+data from the "extras" option in the \fBapextract\fR tasks. One was to fix
+problems associated with an incorrect WCSDIM keyword. This allows use of
+image sections or \fBimcopy\fR for extracting specific bands and
+apertures. Another was to add a "bands" parameter in \fBscopy/sarith\fR to
+allow selection of bands. Also the "onedspec" output format in \fBscopy\fR
+copies any selected bands to separate one dimensional images.
+.LP
+As mentioned earlier, many of the \fBonedspec\fR tasks have been extended
+to work on 2D and 3D spatial spectra. Some tasks which now have this
+capability in this version and not the previous one are \fBcalibrate\fR and
+\fBdopcor\fR. \fBIdentify\fR and \fBreidentify\fR were extended to operate
+on 3D images.
+.NH
+LONGSLIT
+.LP
+With the applicability of more \fBonedspec\fR tasks to long slit data
+the \fBlongslit\fR package was modified to add many new tasks.
+This required adding additional package parameters. One new task
+to point out is \fBcalibrate\fR. This task is now the preferred one
+to use for extinction and flux calibration of long slit spectra
+rather than the obsolete \fBextinction\fR and \fBfluxcalib\fR.
+The obsolete tasks are still present in this release.
+.NH
+APEXTRACT
+.LP
+The \fBapextract\fR package had a few, mostly transparent, changes. In
+the previous version the output image header format was always \fImultispec\fR
+even when there was a single spectrum, either because only one aperture
+was defined or because the output format parameter was "onedspec".
+In this release the default WCS format is the simpler \fIequispec\fR.
+.LP
+In the \fBonedspec\fR and \fBimred\fR spectral reduction packages there is
+a dispersion axis package parameter which is used to define the dispersion
+axis for images without a DISPAXIS keyword. This applies to all tasks.
+However, the \fBapextract\fR tasks had the dispersion axis defined by their
+own task parameters resulting in some confusion. To make things consistent
+the dispersion axis parameter in \fBapextract\fR has been moved from the
+tasks to a package parameter. Now in the \fBimred\fR spectral reduction
+packages, there is just one dispaxis parameter in the package parameters
+which applies to all tasks in those packages, both those from
+\fBonedspec\fR and those from \fBapextract\fR.
+.LP
+Some hidden algorithm parameters were adjusted so that the cleaning and
+variance weighting options perform better in some problem cases without
+requiring a great deal of knowledge about things to tweak.
+.NH
+IMRED Spectroscopic Reduction Tasks
+.LP
+The various spectroscopic reduction tasks, those beginning with "do", have
+had some minor revisions and enhancements in addition to those which apply
+to the individual tasks which make up these scripts. In the latter class,
+the output WCS format is now \fIequispec\fR except for the echelle tasks and
+when dispersion linearization is not done. Related to this is that the
+multifiber tasks can operate on data with more than 250 fibers, which was a
+limitation of the \fImultispec\fR format.
+.LP
+The dispersion axis parameter in the reduction tasks and in the other tasks
+in the \fBimred\fR spectroscopy packages, such as the \fBapextract\fR
+tasks, is now solely a package parameter.
+.LP
+All the scripts now check the input spectra for the presence of the CCDPROC
+keyword and abort if it is not found. This keyword indicates that the data
+have been processed for basic CCD calibrations, though it does not check
+the operations themselves. For data reduced using \fBccdproc\fR this
+keyword will be present. If these tasks are used on data not processed by
+\fBccdproc\fR then it is a simple matter to add this keyword with
+\fBhedit\fR. Obviously, the purpose of this change is to avoid
+inadvertently operating on raw data.
+.LP
+All the "do" tasks now have a parameter "datamax". This minimizes the
+effects of very strong cosmic rays during the extraction of object spectra;
+it does not apply to flat field or arc spectra. When there is a very large
+difference between data pixel values and cosmic ray pixel values,
+especially true for very weak spectra, the cosmic ray cleaning operation
+does not always work well. If it is possible to specify a threshold value
+between the maximum real data value and cosmic rays then the cosmic ray
+cleaning can be significantly improved by immediately rejecting those
+pixels above the threshold. Of course the user must be careful that real
+data does not exceed this value since such data will be excluded.
+.LP
+The fiber reduction tasks, \fBdoargus, dohydra, dofibers, dofoe\fR, and
+\fBdo3fiber\fR have a new processing option for subtracting scattered
+light. When there is significant scattered light this is particularly
+useful for producing uniform sky spectra for sky subtraction, since the
+fiber throughput calibration does not generally correct for scattered light.
+.LP
+The slit reduction task \fBdoslit\fR previously required that the spectrum
+for the reference arc cover the middle of the input data images. There
+were cases of instrument configurations where this was not true requiring
+additional manipulation to use this task. This requirement has been
+eliminated. Instead when the reference arc needs to be extracted it uses
+the aperture definition from one of the input object spectra since
+definition of the object apertures occurs prior to setting up the
+dispersion calibration.
diff --git a/noao/onedspec/doc/sys/rvidentify.ms b/noao/onedspec/doc/sys/rvidentify.ms
new file mode 100644
index 00000000..dadab882
--- /dev/null
+++ b/noao/onedspec/doc/sys/rvidentify.ms
@@ -0,0 +1,304 @@
+.RP
+.TL
+Radial Velocity Measurements with IDENTIFY
+.AU
+Francisco Valdes
+.AI
+IRAF Group - Central Computer Services
+.K2
+P.O. Box 26732, Tucson, Arizona 85726
+.AB
+The IRAF task \fBidentify\fR may be used to measure radial velocities.
+This is done using the classical method of determining the doppler shifted
+wavelengths of emission and absorption lines. This paper covers many of
+the features and techniques available through this powerful and versatile
+task which are not immediately evident to a new user.
+.AE
+.NH
+Introduction
+.PP
+The task \fBidentify\fR is very powerful and versatile. It can be used
+to measure wavelengths and wavelength shifts for doing radial velocity
+measurements from emission and absorption lines. When combined with
+the CL's ability to redirect input and output both from the standard
+text streams and the cursor and graphics streams virtually anything may
+be accomplished either interactively or automatically. This, of
+course, requires quite a bit of expertise and experience with
+\fBidentify\fR and with the CL which a new user is not expected to be
+aware of initially. This paper attempts to convey some of the
+possibilities. There are many variations on these methods which the
+user will learn through experience.
+.PP
+I want to make a caveat about the suggestions made in this paper. I wrote
+the \fBidentify\fR task and so I am an expert in its use. However, I am not
+a spectroscopist, I have not been directly involved in the science of
+measuring astronomical radial velocities, and I am not very familiar with
+the literature. Thus, the suggestions contained in this paper are based
+on my understanding of the basic principles and the abilities of the
+\fBidentify\fR task.
+.PP
+The task \fBidentify\fR is used to measure radial velocities by
+determining the wavelengths of individual emission and absorption
+lines. The user must compute the radial velocities separately by
+relating the observed wavelengths to the known rest wavelengths via the
+Doppler formula. This is a good method when the lines are strong, when
+there are only one or two features, and when there are many, possibly
+weaker, lines. The accuracy of this method is determined by the
+accuracy of the line centering algorithm.
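+.PP
+As a concrete illustration, outside of IRAF, the sketch below shows the
+nonrelativistic Doppler computation one would apply by hand to the measured
+wavelengths written out by \fBidentify\fR. The rest wavelength and the
+measured line center are hypothetical values, and the script is not part
+of the \fBonedspec\fR package.
+.nf
+
+    # doppler.py: radial velocity from a measured line center.
+    C = 2.99792458e5                  # speed of light (km/s)
+
+    def radial_velocity(obs, rest):
+        # Nonrelativistic Doppler velocity in km/s.
+        return C * (obs - rest) / rest
+
+    # Hypothetical measurement: H-alpha (rest 6562.8 A) centered at 6575.3 A.
+    print("%.1f km/s" % radial_velocity(6575.3, 6562.8))   # about 571 km/s
+
+.fi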
+.PP
+The alternative method is to compare an observed
+spectrum to a template spectrum of known radial velocity. This is done
+by correlation or Fourier ratio methods. These methods have the
+advantage of using all of the spectrum and are good when there are many
+very weak and possibly broad features. Their disadvantages are
+confusion with telluric lines, poor performance when there are only a few
+real features, and the need for a fair amount of preliminary
+manipulation of the spectrum to remove the continuum and interpolate the
+spectrum in logarithmic wavelength intervals. IRAF tasks for
+correlation and Fourier ratio methods are under development at this
+time. Many people assume that these more abstract methods are inherently
+better than the classical method. This is not true; it depends on the
+quality and type of data.
+.PP
+Wavelength measurements are best done on the original data rather than
+after linearizing the wavelength intervals. This is because 1) it is
+not necessary, as will be shown below, and 2) the interpolation used to
+linearize the wavelength scale can change the shape of the lines,
+particularly strong, narrow emission lines which are the best ones for
+determining radial velocities. A further reason is that
+\fBidentify\fR currently does not recognize the linear wavelength parameters
+produced during linearization. This will be fixed soon but
+in the meantime the lines must be measured in pixels and converted
+later by the user. Alternatively one can determine a linear dispersion
+solution with \fBidentify\fR but this is more work than needed.
+.PP
+This paper is specifically about \fBidentify\fR but one should be aware of the
+task \fBsplot\fR which also may be used to measure radial velocities. It
+differs in several respects from \fBidentify\fR. \fBSplot\fR works only on linearized
+data; the wavelength and pixel coordinates are related by a zero point and
+wavelength interval. The line centering algorithms are different;
+the line centering is generally less robust (tolerant
+of error) and often less accurate. It has many nice features but is
+not designed for the specific purpose of measuring positions of lines
+and, thus, is not as easy to use for this purpose.
+.PP
+There are a number of sources of additional information relating to the
+use of the task \fBidentify\fR. The primary source is the manual pages for
+the task. As with all manual pages it is available online with the
+\fBhelp\fR command and in the \fIIRAF User Handbook\fR. The NOAO
+reduction guides or cookbooks for the echelle and IIDS/IRS include
+additional examples and discussion. The line centering algorithm
+is the most critical factor in determining dispersion solutions and
+radial velocities. It is described in more detail under the help
+topic \fBcenter1d\fR online or in the handbook.
+.NH
+Method 1
+.PP
+In this method arc calibration images are used to determine a wavelength
+scale. The dispersion solution is then transferred to the object spectrum
+and the wavelengths of emission and absorption lines are measured and
+recorded. This is relatively straightforward but some tricks will make
+this easier and more accurate.
+.NH 2
+Transferring Dispersion Solutions
+.PP
+There are several ways to transfer the dispersion solution from an arc
+spectrum to an object spectrum differing in the order in which things are
+done.
+.IP (1)
+One way is to determine the dispersion solution for all the arc images
+first. To do this interactively specify all the arc images as the
+input to \fBidentify\fR. After determining the dispersion solution for
+the first arc and quitting (\fIq\fR key) the next arc will be displayed
+with the previous dispersion solution and lines retained. Then use the
+cursor commands \fIa\fR and \fIc\fR (all center) to recenter and
+recompute the dispersion solution, \fIs\fR to shift to the cursor
+position, recenter, and recompute the dispersion solution, or \fIx\fR
+to correlate features, shift, recenter, and recompute the dispersion
+solution. These commands are relatively fast and simple.
+.IP
+An important reason for doing all the arc images first is that this same
+procedure can be done mostly noninteractively with the task
+\fBreidentify\fR. After determining a dispersion solution for one arc
+image \fBreidentify\fR does the recenter (\fIa\fR and \fIc\fR), shift
+and recenter (\fIs\fR), or correlate features, shift, and recenter
+(\fIx\fR) to transfer the dispersion solutions between arcs. This is
+usually done as a background task.
+.IP
+To transfer the solution to the object spectra specify the list of
+object spectra as input to \fBidentify\fR. For each image begin by
+entering the colon command \fI:read arc\fR where arc is the name of the
+arc image whose dispersion solution is to be applied; normally the one
+taken at the same time and telescope position as the object. This will
+read the dispersion solution and arc line positions. Delete the arc
+line positions with the \fIa\fR and \fId\fR (all delete) cursor keys.
+You can now measure the wavelengths of lines in the spectrum.
+.IP (2)
+An alternative method is to interactively alternate between arc and
+object spectra either in the input image list or with the \fI:image
+name\fR colon command.
+.NH 2
+Measuring Wavelengths
+.PP
+.IP (1)
+To record the feature positions at any time use the \fI:features file\fR
+colon command where file is where the feature information will be written.
+Repeating this with the same file appends to the file. Writing to
+the database with the \fI:write\fR colon command also records this information.
+Without an argument the results are put in a file with the same name as the
+image and a prefix of "id". You can use any name you like, however,
+with \fI:write name\fR. The \fI:features\fR command is probably preferable
+because it only records the line information while the database format
+includes the dispersion solution and other information not needed for
+computing radial velocities.
+.IP (2)
+Remember that when shifting between emission and absorption lines the
+parameter \fIftype\fR must be changed. This may be done interactively with
+the \fI:ftype emission\fR and \fI:ftype absorption\fR commands. This parameter
+does not need to be set except when changing between types of lines.
+.IP (3)
+Since the centering of the emission or absorption line is the most
+critical factor one should experiment with the parameter \fIfwidth\fR.
+To change this parameter type \fI:fwidth value\fR. The positions of the
+marked features are not changed until a center command (\fIc\fR) command
+is given. \fIWarning: The all center ('a' and 'c') command automatically
+refits the dispersion solution to the lines which will lose your
+arc dispersion solution.\fR
+.IP
+A narrow \fIfwidth\fR is less influenced by blends and wings but has a larger
+uncertainty. A broad \fIfwidth\fR uses all of the line profile and is thus
+stable but may be systematically influenced by blending and wings. One
+possible approach is to measure the positions at several values of
+\fIfwidth\fR and decide which value to use or use some weighting of the
+various measurements. You can record each set of measurements with
+the \fI:fe file\fR command.
+.IP (4)
+For calibration of systematic effects from the centering one should obtain
+the spectrum of a similar object with a known radial velocity. The systematic
+effect is due to the fact that the centering algorithm is measuring a
+weighted function of the line profile which may not be the true center of
+the line as tabulated in the laboratory or in a velocity standard.
+By using the same centering method on an object with the same line profiles
+and known velocity this effect can be eliminated.
+.IP (5)
+Since the arcs are not obtained at precisely the same time as the object
+exposures there may be a wavelength shift relative to the arc dispersion
+solution. This may be calibrated from night sky lines in the object
+itself (the night sky lines are "good" in this case and should not be
+subtracted away). There are generally not enough night sky lines to act
+as the primary dispersion calibrator but just one can determine a possible
+wavelength zero point shift. Measure the night sky line positions at the same
+time the object lines are measured. Determine a zero point shift from
+the night sky to be taken out of the object lines.
+.NH
+Method 2
+.PP
+This method is similar to the correlation method in that a template
+spectrum is used and the average shift relative to the template measures the
+radial velocity. This has the advantage of not requiring the user to
+do a lot of calculations (the averaging of the line shifts is done by
+\fBidentify\fR) but is otherwise no better than method 1.
+The template spectrum must have the same features as the object spectrum.
+.IP (1)
+Determine a dispersion solution for the template spectrum either from
+the lines in the spectrum or from an arc calibration.
+.IP (2)
+Mark the features to be correlated in the template spectrum.
+.IP (3)
+Transfer the template dispersion solution and line positions to an object
+spectrum using one of the methods described earlier. Then for the
+current feature, point the cursor near the same feature in the object
+spectrum and type \fIs\fR. The mean shift in pixels, wavelength, and
+fractional wavelength (like a radial velocity without the factor of
+the speed of light) for the object is determined and printed. A new
+dispersion solution is determined but you may ignore this.
+.IP (4)
+When doing additional object spectra remember to start over again with
+the template spectrum (using \fI:read template\fR) and not the solution
+from the last object spectrum.
+.IP (5)
+This procedure assumes that the dispersion solutions of the template
+and object are the same.
+lines, as discussed earlier, should be made if possible. The systematic
+centering bias, however, is accounted for by using the same lines from
+the template radial velocity standard.
+.IP (6)
+One possible source of error is attempting to use very weak lines. The
+recentering may find the wrong lines and affect the results. The protections
+against this are the \fIthreshold\fR parameter (in Version 2.4 IRAF) and
+setting the centering error radius to be relatively small.
+.NH
+Method 3
+.PP
+This method uses only strong emission lines and works with linearized
+data without an \fBidentify\fR dispersion solution. \fBIdentify\fR has
+a failing when used with linearized data; it does not know about the
+wavelength parameters in the image header. This will eventually be
+fixed. However, if you have already linearized your spectra and wish
+to use them instead of the nonlinear spectra the following method will
+work. The recipe involves measuring the positions of emission lines in
+pixels which must then be converted to wavelength using the header
+information. The strongest emission lines are found automatically
+using the \fIy\fR cursor key. The number of emission lines to be
+identified is set by the \fImaxfeatures\fR parameter. The emission
+line positions are then written to a data file using the \fI:features
+file\fR colon command. This may be done interactively and takes only a
+few moments per spectrum. If done interactively the images may be
+chained by specifying an image template. The only trick required is
+that when proceeding to the next spectrum the previous features are
+deleted using the cursor key combination \fIa\fR and \fId\fR (all
+delete).
+.PP
+For a large number of images, on the order of hundreds, this may be automated
+as follows. A file containing the cursor commands is prepared.
+The cursor command format consists of the x and y positions, the window
+(usually window 1), and the key stroke or colon command. Because each new
+image from an image template does not restart the cursor command file the
+commands would have to be repeated for each image in the list. Thus, a CL
+loop calling the
+task each time with only one image is preferable. Besides redirecting
+the cursor input from a command file we must also redirect the standard
+input for the response to the database save query, the standard output
+to discard the status line information, and, possibly, the graphics
+to a metacode file which can then be reviewed later. The following
+steps indicate what is to be done.
+.IP (1)
+Prepare a file containing the images to be measured (one per line).
+This can usually be done using the \fBsections\fR task to expand a template
+and directing the output into a file.
+.IP (2)
+Prepare a cursor command file (let's call it cmdfile) containing the
+following two lines.
+.nf
+ 1 1 1 y
+ 1 1 1 :fe positions.dat
+.fi
+.IP (3)
+Enter the following commands.
+.nf
+    list = "file"
+    while (fscan (list, s1) != EOF) {
+        print ("no") | identify (s1, maxfeatures=2, cursor="cmdfile",
+            >"dev$null", >G "plotfile")
+    }
+.fi
+.LP
+Note that these commands could be put in a CL script and executed using the
+command
+
+ on> cl <script.cl
+
+.PP
+The commands do the following. The first command initializes the image list
+for the loop. The second command is the loop to be run until the end of
+the image file is reached. The command in the loop directs the string
+"no" to the standard input of identify which will be the response to the
+database save query. The identify command uses the image name obtained
+from the list by the fscan procedure, sets the maximum number of features
+to be found to be 2 (this can be set using \fBeparam\fR instead), the cursor
+input is taken from the cursor command file, the standard output is
+discarded to the null device, and the STDGRAPH output is redirected to
+a plot file. If the plot file redirection is not used then the graphs
+will appear on the specified graphics device (usually the graphics terminal).
+The plot file can then be disposed of using the \fBgkimosaic\fR task to either
+the graphics terminal or a hardcopy device.
diff --git a/noao/onedspec/doc/sys/sensfunc.ms b/noao/onedspec/doc/sys/sensfunc.ms
new file mode 100644
index 00000000..67b6532d
--- /dev/null
+++ b/noao/onedspec/doc/sys/sensfunc.ms
@@ -0,0 +1,83 @@
+.EQ
+delim $$
+.EN
+.OM
+.TO
+IRAF ONEDSPEC Users
+.FR
+Frank Valdes
+.SU
+SENSFUNC Corrections
+.LP
+This memorandum describes the meaning of the corrections
+computed by the \fBonedspec\fR task \fBsensfunc\fR.
+The basic equation is
+
+.EQ (1)
+I( lambda )~=~I sub obs ( lambda )~10 sup {0.4~(s( lambda )~+
+~A~e( lambda )~+~roman {fudge~terms})}
+.EN
+
+where $I sub obs$ is the observed spectrum corrected to counts per second,
+$I$ is the flux calibrated spectrum, $s( lambda )$ is the sensitivity
+correction needed to produce
+flux calibrated intensities, $A$ is the air mass at the time of the
+observation, $e( lambda )$ is a standard extinction function, and,
+finally, additional terms appropriately called \fIfudge\fR terms. Expressed
+as a magnitude correction this equation is
+
+.EQ (2)
+DELTA m( lambda )~=~s( lambda )~+~A~e( lambda )~+~roman {fudge~terms}
+.EN
+
+In \fBsensfunc\fR the standard extinction function is applied so that ideally
+the $DELTA m$ curves (defining the sensitivity function) obtained from
+observations of different stars and at different air masses are identical.
+However, at times this is not the case because the observations were taken
+through non-constant or nonstandard extinction.
+
+There are two types of fudge terms used in \fBsensfunc\fR, called \fIfudge\fR
+and \fIgrey\fR. The \fIfudge\fR correction is a separate constant,
+independent of wavelength or air mass, applied to each observation to shift
+the sensitivity curves to the same level on average. This is done to
+determine the shape of the sensitivity curve only.
+The fudge correction for each observation is obtained by determining
+the average magnitude shift over all wavelengths relative to the observation
+with the smallest sensitivity correction. A composite sensitivity curve
+is then determined from the average of all the fudged observations.
+The fudge terms are not incorporated in the sensitivity or extinction
+corrections applied to calibrate the spectra. Thus, after applying the
+sensitivity and extinction corrections to the standard star spectra there
+will be absolute flux scale errors due to the observing conditions.
+
+If the observer believes that there is an actual calibratible error in
+the standard extinction then \fBsensfunc\fR can be used to determine a
+correction which is a linear function of the air mass. This is done by
+relating the fudge values (the magnitude shifts needed to bring observations
+to the same sensitivity level) to the air mass of the observations.
+The \fIgrey\fR term is obtained by a least squares fit to
+
+.EQ (3)
+f sub i~=~G~DELTA A sub i~=~G~A sub i~+~C
+.EN
+
+where the $f sub i$ are the fudge values relative to the observation with
+the smallest sensitivity correction and the $DELTA A sub i$ are the
+air mass differences relative to this same observation. The slope constant
+$G$ is what is referred to as the \fIgrey\fR term. The constant term,
+related to the air mass of the reference observation to which the other
+spectra are shifted, is absorbed in the sensitivity function.
+The modified equation (2) is
+
+.EQ (4)
+DELTA m( lambda )~=~s ( lambda ) + A~(e( lambda )~+~G)
+.EN
+
+It is important to realize that equation (3) can lead to erroneous results
+if there is no real relation to the air mass or the air mass range is
+too small. In other words applying the grey term correction will produce
+some number for $G$ but it may be worse than no correction. A plot of
+the individual fudge constants, $f sub i$, and the air mass or
+air mass differences would be useful to evaluate the validity of the
+grey correction. The actual magnitude of the correction is not $G$
+but $DELTA A~G$ where $DELTA A$ is the range of observed air mass.
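+.LP
+The evaluation suggested above is easy to do outside of \fBsensfunc\fR.
+The following Python sketch fits equation (3) to a set of fudge constants
+and air mass differences and reports the implied grey term and its full
+effect over the observed air mass range. The numbers are purely
+hypothetical and the fit is an ordinary least squares solution, not the
+\fBsensfunc\fR code itself.
+.nf
+
+    # grey.py: least squares fit of fudge constants vs. air mass difference.
+    import numpy as np
+
+    dA = np.array([0.00, 0.21, 0.48, 0.75, 1.10])      # air mass differences
+    f  = np.array([0.00, 0.012, 0.031, 0.044, 0.068])  # fudge values (mag)
+
+    G, C = np.polyfit(dA, f, 1)   # slope G is the grey term, C the constant
+    print("grey term G = %.4f mag per unit air mass" % G)
+    print("correction over observed range = %.4f mag" % (G * dA.ptp()))
+
+.fi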
diff --git a/noao/onedspec/doc/sys/specwcs.ms b/noao/onedspec/doc/sys/specwcs.ms
new file mode 100644
index 00000000..a9d90a41
--- /dev/null
+++ b/noao/onedspec/doc/sys/specwcs.ms
@@ -0,0 +1,612 @@
+.EQ
+delim $$
+gsize 10
+.EN
+.nr PS 11
+.nr VS 13
+.de V1
+.ft CW
+.ps -2
+.nf
+..
+.de V2
+.fi
+.ft R
+.ps +2
+..
+.ND March 1993
+.TL
+The IRAF/NOAO Spectral World Coordinate Systems
+.AU
+Francisco Valdes
+.AI
+IRAF Group - Central Computer Services
+.K2
+.DY
+
+.AB
+The image formats and world coordinate systems for dispersion calibrated
+spectra used in the IRAF/NOAO spectroscopy packages are described; in
+particular, the image header keywords defining the coordinates are given.
+These keywords appear both as part of the IRAF image structure and map
+directly to FITS format. The types of spectra include multidimensional
+images with one or more spatial axes and a linear or log-linear dispersion
+axis and special \fIequispec\fR and \fImultispec\fR formats having multiple
+independent one dimensional spectra in a single multidimensional image.
+The \fImultispec\fR format also includes general nonlinear dispersion
+coordinate systems using polynomial, spline, sampled table, and look-up
+table functions.
+.AE
+
+.NH
+Types of Spectral Data
+.LP
+Spectra are stored as one, two, or three dimensional images with one axis
+being the dispersion axis. A pixel value is the flux over
+some interval of wavelength and position. The simplest example of a
+spectrum is a one dimensional image which has pixel values as a
+function of wavelength.
+.LP
+There are two types of higher dimensional spectral image formats. One type
+has spatial axes for the other dimensions and the dispersion axis may be
+along any of the image axes. Typically this type of format is used for
+long slit (two dimensional) and Fabry-Perot (three dimensional) spectra.
+This type of spectra is referred to as \fIspatial\fR spectra and the
+world coordinate system (WCS) format is called \fIndspec\fR.
+The details of the world coordinate systems are discussed later.
+.LP
+The second type of higher dimensional spectral image consists of multiple,
+independent, one dimensional spectra stored in the higher dimensions with
+the first image axis being the dispersion axis; i.e. each line is a
+spectrum. This format allows associating many spectra and related
+parameters in a single data object. This type of spectra is referred to
+as \fImultispec\fR and there are two coordinate system formats,
+\fIequispec\fR and \fImultispec\fR. The \fIequispec\fR format applies
+to the common case where all spectra have the same linear dispersion
+relation. The \fImultispec\fR format applies to the general case of spectra
+with differing dispersion relations or non-linear dispersion functions.
+These multi-spectrum formats are important since maintaining large numbers
+of spectra as individual one dimensional images is very unwieldy for the
+user and inefficient for the software.
+.LP
+Examples of multispec spectral images are spectra extracted from a
+multi-fiber or multi-aperture spectrograph or orders from an echelle
+spectrum. The second axis is some arbitrary indexing of the spectra,
+called \fIapertures\fR, and the third dimension is used for
+associated quantities. The IRAF \fBapextract\fR package may produce
+multiple spectra from a CCD image in successive image lines with an
+optimally weighted spectrum, a simple aperture sum spectrum, a background
+spectrum, and sigma spectrum as the associated quantities along the third
+dimension of the image.
+.LP
+Many \fBonedspec\fR package tasks which are designed to operate on
+individual one dimensional spectra may operate on spatial spectra by
+summing a number of neighboring spectra across the dispersion axis. This
+eliminates the need to "extract" one dimensional spectra from the natural
+format of this type of data in order to use tasks oriented towards the
+display and analysis of one dimensional spectra. The dispersion axis is
+either given in the image header by the keyword DISPAXIS or the package
+\fIdispaxis\fR parameter. The summing factors across the
+dispersion are specified by the \fInsum\fR package parameter.
+.LP
+One dimensional spectra, whether from multispec images or summed from spatial spectra, have
+several associated quantities which may appear in the image header as part
+of the coordinate system description. The primary identification of a
+spectrum is an integer aperture number. This number must be unique within
+a single image. There is also an integer beam number used for various
+purposes such as discriminating object, sky, and arc spectra in
+multi-fiber/multi-aperture data or identifying the order number in
+echelle data. For spectra summed from spatial spectra the aperture number
+is the central line, column, or band. In 3D images the aperture index
+wraps around the lowest non-dispersion axis. Since most one dimensional
+spectra are derived from an integration over one or more spatial axes, two
+additional aperture parameters record the aperture limits. These limits
+refer to the original pixel limits along the spatial axis. This
+information is primarily for record keeping but in some cases it is used
+for spatial interpolation during dispersion calibration. These values are
+set either by the \fBapextract\fR tasks or when summing neighboring vectors
+in spatial spectra.
+.LP
+An important task to be aware of for manipulating spectra between image
+formats is \fBscopy\fR. This task allows selecting spectra from multispec
+images and grouping them in various ways and also "extracts" apertures from
+long slit and 3D spectra simply and without resort to the more general
+\fBapextract\fR package.
+.NH
+World Coordinate Systems
+.LP
+IRAF images have three types of coordinate systems. The pixel array
+coordinates of an image or image section, i.e. the lines and
+columns, are called the \fIlogical\fR coordinates. The logical coordinates of
+individual pixels change as sections of the image are used or extracted.
+Pixel coordinates which are tied to the data, i.e. are fixed to features
+in the image, are called \fIphysical\fR coordinates. Initially the logical
+and physical coordinates are equivalent but they differ when image sections
+or other tasks which modify the sampling of the pixels are applied.
+.LP
+The last type of coordinate system is called the \fIworld\fR coordinate
+system. Like the physical coordinates, the world coordinates are tied to
+the features in the image and remain unchanged when sections of the image
+are used or extracted. If a world coordinate system is not defined for an
+image, the physical coordinate system is considered to be the world
+coordinate system. In spectral images the world coordinate system includes
+dispersion coordinates such as wavelengths. In many tasks outside the
+spectroscopy packages, for example the \fBplot\fR, \fBtv\fR and
+\fBimages\fR packages, one may select the type of coordinate system to be
+used. To make plots and get coordinates in dispersion units for spectra
+with these tasks one selects the "world" system. The spectral tasks always
+use world coordinates.
+.LP
+The coordinate systems are defined in the image headers using a set of
+reserved keywords which are set, changed, and updated by various tasks.
+Some of the keywords consist of simple single values following the FITS
+convention. Others, the WAT keywords, encode long strings of information,
+one for each coordinate axis and one applying to all axes, into a set of
+sequential keywords. The values of these keywords must then be pasted
+together to recover the string. The long strings contain multiple pieces
+called WCS \fIattributes\fR. In general the WCS keywords should be left to
+IRAF tasks to modify. However, if one wants to modify them directly some
+tasks which may be used are \fBhedit\fR, \fBhfix\fR, \fBwcsedit\fR,
+\fBwcsreset\fR, \fBspecshift\fR, \fBdopcor\fR, and \fBsapertures\fR. The
+first two are useful for the simple keywords, the two "wcs" tasks are
+useful for the linear ndspec and equispec formats, the next two are for the
+common cases of shifting the coordinate zero point or applying a doppler
+correction, and the last one is the one to use for the more complex
+multispec format attributes.
+.NH
+Physical Coordinate System
+.LP
+The physical coordinate system is used by the spectral tasks when there is
+no dispersion coordinate information (such as before dispersion
+calibration), to map the physical dispersion axis to the logical dispersion
+axis, and in the multispec world coordinate system dispersion functions
+which are defined in terms of physical coordinates.
+.LP
+The transformation between logical and physical coordinates is defined by
+the header keywords LTVi, LTMi_j (where i and j are axis numbers) through
+the vector equation
+
+.EQ I
+ l vec~=~|m| cdot p vec + v vec
+.EN
+
+where $l vec$ is a logical coordinate vector, $p vec$ is a physical
+coordinate vector, $v vec$ is the origin translation vector specified by
+the LTV keywords and $|m|$ is the scale/rotation matrix
+specified by the LTM keywords. For spectra, rotation terms (nondiagonal
+matrix elements) generally do not make sense (in fact many tasks will not
+work if there is a rotation) so the transformations along each axis are
+given by the linear equation
+
+.EQ I
+ l sub i~=~LTMi_i cdot p sub i + LTVi.
+.EN
+
+If all the LTM/LTV keywords are missing they are assumed to have zero
+values except that the diagonal matrix terms, LTMi_i, are assumed to be 1.
+Note that if some of the keywords are present then a missing LTMi_i will
+take the value zero which generally causes an arithmetic or matrix
+inversion error in the IRAF tasks.
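+.LP
+The sketch below, in Python rather than IRAF code, illustrates the one
+dimensional form of this transformation and its inverse using the default
+values just described; it is only meant to make the keyword conventions
+concrete and the keyword values used are hypothetical.
+.V1
+
+# ltm.py: logical <-> physical transformation along one axis.
+def logical(p, ltm=1.0, ltv=0.0):
+    # l = LTMi_i * p + LTVi  (defaults when the keywords are absent)
+    return ltm * p + ltv
+
+def physical(l, ltm=1.0, ltv=0.0):
+    # inverse transformation
+    return (l - ltv) / ltm
+
+# Hypothetical keyword values LTV1 = -5. and LTM1_1 = 0.5 (e.g. a subsection
+# with a factor of 2 block averaging): physical pixel 11 maps to logical 0.5.
+print(logical(11.0, ltm=0.5, ltv=-5.0))
+print(physical(0.5, ltm=0.5, ltv=-5.0))
+
+.V2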
+.LP
+The dimensional mapping between logical and physical axes is given by the
+keywords WCSDIM and WAXMAP01. The WCSDIM keyword gives the dimensionality
+of the physical and world coordinate system. There must be coordinate
+information for that many axes in the header (though some may be missing
+and take their default values). If the WCSDIM keyword is missing it is
+assumed to be the same as the logical image dimensionality.
+.LP
+The value of the WAXMAP01 keyword consists of pairs of integer values,
+one for each physical axis. The first number of each pair indicates which
+current \fIlogical\fR axis corresponds to the original \fIphysical\fR axis
+(in order) or zero if that axis is missing. When the first number is zero
+the second number gives the offset to the element of the original axis
+which is missing. As an example consider a three dimensional image in
+which the second plane is extracted (an IRAF image section of [*,2,*]).
+The keyword would then appear as WAXMAP01 = '1 0 0 1 2 0'. If this keyword
+is missing the mapping is 1:1; i.e. the dimensionality and order of the
+axes are the same.
+.LP
+The dimensional mapping is important because the dispersion axis for
+the ndspec spatial spectra, as specified by the DISPAXIS keyword or task
+parameter, and the axis definitions for the equispec and multispec
+formats are always in terms of the original physical axes.
+.NH
+Linear Spectral World Coordinate Systems
+.LP
+When there is a linear or logarithmic relation between pixels and
+dispersion coordinates which is the same for all spectra the WCS header
+format is simple and uses the FITS convention (with the CD matrix keywords
+proposed by Hanisch and Wells 1992) for the logical pixel to world
+coordinate transformation. This format applies to one, two, and three
+dimensional data. The higher dimensional data may have either linear
+spatial axes or the equispec format where each one dimensional spectrum
+stored along the lines of the image has the same dispersion.
+.LP
+The FITS image header keywords describing the spectral world coordinates
+are CTYPEi, CRPIXi, CRVALi, and CDi_j where i and j are axis numbers. As
+with the physical coordinate transformation the nondiagonal or rotation
+terms are not expected in the spectral WCS and may cause problems if they
+are not zero. The CTYPEi keywords will have the value LINEAR to identify
+the type of coordinate system. The transformation between dispersion
+coordinate, $w sub i$, and logical pixel coordinate, $l sub i$, along axis i is given by
+
+.EQ I
+ w sub i~=~CRVALi + CDi_i cdot (l sub i - CRPIXi)
+.EN
+
+If the keywords are missing then the values are assumed to be zero except
+for the diagonal elements of the scale/rotation matrix, the CDi_i, which
+are assumed to be 1. If only some of the keywords are present then any
+missing CDi_i keywords will take the value 0 which will cause IRAF tasks to
+fail with arithmetic or matrix inversion errors. If the CTYPEi keyword is
+missing it is assumed to be "LINEAR".
+.LP
+If the pixel sampling is logarithmic in the dispersion coordinate, as
+required for radial velocity cross-correlations, the WCS coordinate values
+are logarithmic and $w sub i$ (above) is the logarithm of the dispersion
+coordinate. The spectral tasks (though not other tasks) will recognize
+this case and automatically apply the anti-log. The two types of pixel
+sampling are identified by the value of the keyword DC-FLAG. A value of 0
+defines a linear sampling of the dispersion and a value of 1 defines a
+logarithmic sampling of the dispersion. Thus, in all cases the spectral
+tasks will display and analyze the spectra in the same dispersion units
+regardless of the pixel sampling.
+.LP
+Other keywords which may be present are DISPAXIS for 2 and 3 dimensional
+spatial spectra, and the WCS attributes "system", "wtype", "label", and
+"units". The system attribute will usually have the value "world" for
+spatial spectra and "equispec" for equispec spectra. The wtype attribute
+will have the value "linear". Currently the label will be either "Pixel"
+or "Wavelength" and the units will be "Angstroms" for dispersion corrected
+spectra. In the future there will be more generality in the units
+for dispersion calibrated spectra.
+.LP
+Figure 1 shows the WCS keywords for a two dimensional long slit spectrum.
+The coordinate system is defined to be a generic "world" system and the
+wtype attributes and CTYPE keywords define the axes to be linear. The
+other attributes define a label and unit for the second axis, which is the
+dispersion axis as indicated by the DISPAXIS keyword. The LTM/LTV keywords
+in this example show that a subsection of the original image has been
+extracted with a factor of 2 block averaging along the dispersion axis.
+The dispersion coordinates are given in terms of the \fIlogical\fR pixel
+coordinates by the FITS keywords as defined previously.
+
+.DS
+.ce
+Figure 1: Long Slit Spectrum
+
+.V1
+WAT0_001= 'system=world'
+WAT1_001= 'wtype=linear'
+WAT2_001= 'wtype=linear label=Wavelength units=Angstroms'
+WCSDIM = 2
+DISPAXIS= 2
+DC-FLAG = 0
+
+CTYPE1 = 'LINEAR '
+LTV1 = -10.
+LTM1_1 = 1.
+CRPIX1 = -9.
+CRVAL1 = 19.5743865966797
+CD1_1 = 1.01503419876099
+
+CTYPE2 = 'LINEAR '
+LTV2 = -49.5
+LTM2_2 = 0.5
+CRPIX2 = -49.
+CRVAL2 = 4204.462890625
+CD2_2 = 12.3337936401367
+.V2
+.DE
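+.LP
+The following Python fragment, a sketch rather than IRAF code, evaluates
+the dispersion coordinate of a logical pixel along the dispersion axis of
+Figure 1 using the transformation defined above. The same three keywords
+drive the computation for any linear axis, and a DC-FLAG value of 1 would
+simply require taking the anti-log of the result as described earlier.
+.V1
+
+# linwcs.py: dispersion coordinate from the linear WCS keywords of Figure 1.
+def dispersion(l, crval, cd, crpix, dc_flag=0):
+    w = crval + cd * (l - crpix)     # w_i = CRVALi + CDi_i * (l_i - CRPIXi)
+    return 10.0 ** w if dc_flag == 1 else w
+
+# Axis 2 keywords from Figure 1 (DC-FLAG = 0, i.e. linear sampling).
+crval2, cd2, crpix2 = 4204.462890625, 12.3337936401367, -49.0
+
+# Wavelength of logical line 1: about 4821 Angstroms.
+print(dispersion(1.0, crval2, cd2, crpix2))
+
+.V2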
+
+Figure 2 shows the WCS keywords for a three dimensional image where each
+line is an independent spectrum or associated data but where all spectra
+have the same linear dispersion. This type of coordinate system has the
+system name "equispec". The ancillary information about each aperture is
+found in the APNUM keywords. These give the aperture number, beam number,
+and extraction limits. In this example the LTM/LTV keywords have their
+default values; i.e. the logical and physical coordinates are the same.
+
+.DS
+.ce
+Figure 2: Equispec Spectrum
+
+.V1
+WAT0_001= 'system=equispec'
+WAT1_001= 'wtype=linear label=Wavelength units=Angstroms'
+WAT2_001= 'wtype=linear'
+WAT3_001= 'wtype=linear'
+WCSDIM = 3
+DC-FLAG = 0
+APNUM1 = '41 3 7.37 13.48'
+APNUM2 = '15 1 28.04 34.15'
+APNUM3 = '33 2 43.20 49.32'
+
+CTYPE1 = 'LINEAR '
+LTM1_1 = 1.
+CRPIX1 = 1.
+CRVAL1 = 4204.463
+CD1_1 = 6.16689700000001
+
+CTYPE2 = 'LINEAR '
+LTM2_2 = 1.
+CD2_2 = 1.
+
+CTYPE3 = 'LINEAR '
+LTM3_3 = 1.
+CD3_3 = 1.
+.V2
+.DE
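+.LP
+The APNUM keywords are simple blank separated strings and are easily
+picked apart outside of IRAF. A minimal Python sketch, assuming only the
+field order stated above (aperture, beam, and the two extraction limits),
+is shown below using the APNUM1 value from Figure 2.
+.V1
+
+# apnum.py: unpack an APNUMn keyword value into its four fields.
+def parse_apnum(value):
+    ap, beam, aplow, aphigh = value.split()[:4]
+    return int(ap), int(beam), float(aplow), float(aphigh)
+
+print(parse_apnum('41 3 7.37 13.48'))   # -> (41, 3, 7.37, 13.48)
+
+.V2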
+.NH
+Multispec Spectral World Coordinate System
+.LP
+The \fImultispec\fR spectral world coordinate system applies only to one
+dimensional spectra; i.e. there is no analog for the spatial type spectra.
+It is used either when there are multiple 1D spectra with differing
+dispersion functions in a single image or when the dispersion functions are
+nonlinear.
+.LP
+The multispec coordinate system is always two dimensional though there may
+be an independent third axis. The two axes are coupled and they both have
+axis type "multispec". When the image is one dimensional the physical line
+is given by the dimensional reduction keyword WAXMAP01. The second, or line,
+axis has world coordinates of aperture number. The aperture numbers are
+integer values and need not be in any particular order but do need to be
+unique. This aspect of the WCS is not of particular user interest but
+applications use the inverse world to physical transformation to select a
+spectrum line given a specified aperture.
+.LP
+The dispersion functions are specified by attribute strings with the
+identifier \fIspecN\fR where N is the \fIphysical\fR image line. The
+attribute strings contain a series of numeric fields. The fields are
+indicated symbolically as follows.
+
+.EQ I
+ specN~=~ap~beam~dtype~w1~dw~nw~z~aplow~aphigh~[functions sub i ]
+.EN
+
+where there are zero or more functions having the following fields,
+
+.EQ I
+ function sub i~=~ wt sub i~w0 sub i~ftype sub i~[parameters]~[coefficients]
+.EN
+
+The first nine fields in the attribute are common to all the dispersion
+functions. The first field of the WCS attribute is the aperture number,
+the second field is the beam number, and the third field is the dispersion
+type with the same function as DC-FLAG in the \fIndspec\fR and
+\fIequispec\fR formats. A value of -1 indicates the coordinates are not
+dispersion coordinates (the spectrum is not dispersion calibrated), a value
+of 0 indicates linear dispersion sampling, a value of 1 indicates
+log-linear dispersion sampling, and a value of 2 indicates a nonlinear
+dispersion.
+.LP
+The next two fields are the dispersion coordinate of the first
+\fIphysical\fR pixel and the average dispersion interval per \fIphysical\fR
+pixel. For linear and log-linear dispersion types the dispersion
+parameters are exact while for the nonlinear dispersion functions they are
+approximate. The next field is the number of valid pixels; hence it is
+possible to have spectra of varying lengths in the same image. In that
+case the image is as long as the longest spectrum and the number of valid
+pixels selects the actual data in each image line. The next (seventh)
+field is a doppler factor, which is applied to all dispersion coordinates
+by multiplying by $1/(1+z)$ (assuming wavelength dispersion units); thus a
+value of 0 means no doppler correction. The last two fields are the
+extraction aperture limits as discussed previously.
+.LP
+Following these fields are zero or more function descriptions. For linear
+or log-linear dispersion coordinate systems there are no function fields.
+For the nonlinear dispersion systems the function fields specify a weight,
+a zero point offset, the type of dispersion function, and the parameters
+and coefficients describing it. The function type codes, $ftype sub i$,
+are 1 for a chebyshev polynomial, 2 for a legendre polynomial, 3 for a
+cubic spline, 4 for a linear spline, 5 for a pixel coordinate array, and 6
+for a sampled coordinate array. The number of fields belonging to each
+function is determined from its own parameters, and functions are read in
+sequence until the end of the attribute string is reached.
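+.LP
+As a non-IRAF illustration of this field layout, the following Python sketch
+splits a specN attribute into its nine common fields and the remaining
+function-dependent fields. It assumes the attribute string has already been
+reassembled from the WAT2_nnn cards and stripped of its quotes; the values in
+the usage line are hypothetical.
+.V1
+# Minimal sketch: split a multispec specN attribute string.
+def parse_spec_attribute(attr):
+    f = attr.split()
+    common = {
+        "ap":     int(f[0]),     # aperture number
+        "beam":   int(f[1]),     # beam number
+        "dtype":  int(f[2]),     # -1 none, 0 linear, 1 log-linear, 2 nonlinear
+        "w1":     float(f[3]),   # dispersion of the first physical pixel
+        "dw":     float(f[4]),   # average dispersion per physical pixel
+        "nw":     int(f[5]),     # number of valid pixels
+        "z":      float(f[6]),   # doppler factor
+        "aplow":  float(f[7]),   # extraction aperture limits
+        "aphigh": float(f[8]),
+    }
+    return common, [float(x) for x in f[9:]]   # function fields, if any
+
+common, funcs = parse_spec_attribute(
+    "1 113 0 4955.443 0.0558 256 0. 23.22 31.27")   # hypothetical values
+.V2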
+.LP
+The equation below shows how the final wavelength is computed based on
+the $nfunc$ individual dispersion functions $W sub i (p)$. Note that this
+is completely general in that different function types may be combined.
+However, in practice when multiple functions are used they are generally of
+the same type and represent a calibration before and after the actual
+object observation with the weights based on the relative time difference
+between the calibration dispersion functions and the object observation.
+
+.EQ I
+w~=~sum from i=1 to nfunc {wt sub i cdot (w0 sub i + W sub i (p)) / (1 + z)}
+.EN
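+.LP
+A minimal sketch of this weighted combination, assuming the individual
+dispersion functions described below are available as Python callables, is:
+.V1
+# "funcs" is assumed to be a list of (wt, w0, W) tuples, where W(p)
+# evaluates one of the dispersion function types described below.
+def multispec_wavelength(p, z, funcs):
+    return sum(wt * (w0 + W(p)) for wt, w0, W in funcs) / (1.0 + z)
+.V2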
+
+The multispec coordinate systems define a transformation between physical
+pixel, $p$, and world coordinates, $w$. Generally an intermediate
+coordinate system is used. The following equations define these coordinates.
+The first one shows the transformation between logical, $l$, and physical,
+$p$, coordinates based on the LTM/LTV keywords. The polynomial functions
+are defined in terms of a normalized coordinate, $n$, as shown in the
+second equation. The normalized coordinate runs between -1 and 1 over the
+range of physical coordinates, $p sub min$ to $p sub max$, which are
+parameters of the function and over which the coefficients were defined. The
+spline functions map the physical range into an index over the number of
+evenly divided spline pieces, $npieces$, which is a parameter of the
+function. This mapping is shown in the third and fourth equations where
+$s$ is the continuous spline coordinate and $j$ is the nearest integer less
+than or equal to $s$.
+
+.EQ I
+ p mark~=~(l - LTV1) / LTM1_1
+.EN
+.EQ I
+ n lineup~=~(p - p sub middle ) / (p sub range / 2)
+.EN
+.EQ I
+ lineup~=~(p - (p sub max + p sub min )/2) / ((p sub max - p sub min ) / 2)
+.EN
+.EQ I
+ s lineup~=~(p - p sub min ) / (p sub max - p sub min ) cdot npieces
+.EN
+.EQ I
+ j lineup~=~roman "int" (s)
+.EN
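+.LP
+For illustration only, these transformations might be written as the
+following Python sketch; the defaults assume LTV1=0 and LTM1_1=1 when the
+keywords are absent.
+.V1
+import math
+
+def logical_to_physical(l, ltv1=0.0, ltm1_1=1.0):
+    return (l - ltv1) / ltm1_1
+
+def normalized(p, pmin, pmax):
+    # map [pmin, pmax] onto [-1, 1] for the polynomial functions
+    return (p - (pmax + pmin) / 2.0) / ((pmax - pmin) / 2.0)
+
+def spline_coord(p, pmin, pmax, npieces):
+    # continuous spline coordinate s and knot index j = int(s)
+    s = (p - pmin) / (pmax - pmin) * npieces
+    return s, int(math.floor(s))
+.V2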
+.NH 2
+Linear and Log Linear Dispersion Function
+.LP
+The linear and log-linear dispersion functions are described by a
+wavelength at the first \fIphysical\fR pixel and a wavelength increment per
+\fIphysical\fR pixel. A doppler correction may also be applied. The
+equations below show the two forms. Note that the coordinates returned are
+always wavelength even though the pixel sampling and the dispersion
+parameters may be log-linear.
+
+.EQ I
+ w mark~=~(w1 + dw cdot (p - 1)) / (1 + z)
+.EN
+.EQ I
+ w lineup~=~10 sup {(w1 + dw cdot (p - 1)) / (1 + z)}
+.EN
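+.LP
+A direct transcription of these two equations into Python (a sketch only;
+w1, dw, and z are the corresponding fields of the specN attribute) is:
+.V1
+# Minimal sketch: linear (dtype=0) and log-linear (dtype=1) dispersion.
+def linear_wavelength(p, w1, dw, z=0.0):
+    return (w1 + dw * (p - 1)) / (1.0 + z)
+
+def loglinear_wavelength(p, w1, dw, z=0.0):
+    return 10.0 ** ((w1 + dw * (p - 1)) / (1.0 + z))
+.V2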
+
+Figure 3 shows an example from a multispec image with
+independent linear dispersion coordinates. This is a linearized echelle
+spectrum where each order (identified by the beam number) is stored as a
+separate image line.
+
+.DS
+.ce
+Figure 3: Echelle Spectrum with Linear Dispersion Function
+
+.V1
+WAT0_001= 'system=multispec'
+WAT1_001= 'wtype=multispec label=Wavelength units=Angstroms'
+WAT2_001= 'wtype=multispec spec1 = "1 113 0 4955.44287109375 0.05...
+WAT2_002= '5 256 0. 23.22 31.27" spec2 = "2 112 0 4999.0810546875...
+WAT2_003= '58854293 256 0. 46.09 58.44" spec3 = "3 111 0 5043.505...
+WAT2_004= '928358078002 256 0. 69.28 77.89"
+WCSDIM = 2
+
+CTYPE1 = 'MULTISPE'
+LTM1_1 = 1.
+CD1_1 = 1.
+
+CTYPE2 = 'MULTISPE'
+LTM2_2 = 1.
+CD2_2 = 1.
+.V2
+.DE
+.NH 2
+Chebyshev Polynomial Dispersion Function
+.LP
+The parameters for the chebyshev polynomial dispersion function are the
+$order$ (number of coefficients) and the normalizing range of physical
+coordinates, $p sub min$ and $p sub max$, over which the function is
+defined and which are used to compute $n$. Following the parameters are
+the $order$ coefficients, $c sub i$. The equation below shows how to
+evaluate the function using an iterative definition where $x sub 1 = 1$,
+$x sub 2 = n$, and $x sub i = 2 cdot n cdot x sub {i-1} - x sub {i-2}$.
+
+.EQ I
+ W~=~sum from i=1 to order {c sub i cdot x sub i}
+.EN
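+.LP
+A sketch of this recurrence in Python, with the coefficients passed as a
+list and n the normalized coordinate defined earlier, is:
+.V1
+# Minimal sketch: chebyshev polynomial dispersion function.
+# x_1 = 1, x_2 = n, x_i = 2*n*x_{i-1} - x_{i-2}
+def chebyshev(n, coeffs):
+    x_im2, x_im1 = 1.0, n                 # x_1 and x_2
+    w = coeffs[0] * x_im2
+    if len(coeffs) > 1:
+        w += coeffs[1] * x_im1
+    for c in coeffs[2:]:
+        x_im2, x_im1 = x_im1, 2.0 * n * x_im1 - x_im2
+        w += c * x_im1
+    return w
+.V2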
+.NH 2
+Legendre Polynomial Dispersion Function
+.LP
+The parameters for the legendre polynomial dispersion function are the
+$order$ (number of coefficients) and the normalizing range of physical
+coordinates, $p sub min$ and $p sub max$, over which the function is defined
+and which are used to compute $n$. Following the parameters are the
+$order$ coefficients, $c sub i$. The equation below shows how to evaluate the
+function using an iterative definition where $x sub 1 = 1$, $x sub 2 = n$, and
+$x sub i = ((2i-3) cdot n cdot x sub {i-1} - (i-2) cdot x sub {i-2}) / (i-1)$.
+
+.EQ I
+ W~=~sum from i=1 to order {c sub i cdot x sub i}
+.EN
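+.LP
+The corresponding Python sketch differs from the chebyshev case only in the
+recurrence:
+.V1
+# Minimal sketch: legendre polynomial dispersion function.
+# x_1 = 1, x_2 = n, x_i = ((2i-3)*n*x_{i-1} - (i-2)*x_{i-2}) / (i-1)
+def legendre(n, coeffs):
+    x_im2, x_im1 = 1.0, n                 # x_1 and x_2
+    w = coeffs[0] * x_im2
+    if len(coeffs) > 1:
+        w += coeffs[1] * x_im1
+    for i in range(3, len(coeffs) + 1):
+        x_i = ((2 * i - 3) * n * x_im1 - (i - 2) * x_im2) / (i - 1)
+        x_im2, x_im1 = x_im1, x_i
+        w += coeffs[i - 1] * x_i
+    return w
+.V2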
+.LP
+Figure 4 shows an example from a multispec image with independent nonlinear
+dispersion coordinates. This is again from an echelle spectrum. Note that
+the IRAF \fBechelle\fR package determines a two dimensional dispersion
+function, in this case a two dimensional legendre polynomial, with the
+independent variables being the order number and the extracted pixel
+coordinate. To assign and store this function in the image is simply a
+matter of collapsing the two dimensional dispersion function by fixing the
+order number and combining all the terms with the same order.
+
+.DS
+.ce
+Figure 4: Echelle Spectrum with Legendre Polynomial Function
+
+.V1
+WAT0_001= 'system=multispec'
+WAT1_001= 'wtype=multispec label=Wavelength units=Angstroms'
+WAT2_001= 'wtype=multispec spec1 = "1 113 2 4955.442888635351 0.05...
+WAT2_002= '83 256 0. 23.22 31.27 1. 0. 2 4 1. 256. 4963.0163112090...
+WAT2_003= '976664 -0.3191636898579552 -0.8169352858733255" spec2 =...
+WAT2_004= '9.081188912082 0.06387049476832223 256 0. 46.09 58.44 1...
+WAT2_005= '56. 5007.401409453303 8.555959076467951 -0.176732458267...
+WAT2_006= '09935064388" spec3 = "3 111 2 5043.505764869474 0.07097...
+WAT2_007= '256 0. 69.28 77.89 1. 0. 2 4 1. 256. 5052.586239197408 ...
+WAT2_008= '271 -0.03173489817897474 -7.190562320405975E-4"
+WCSDIM = 2
+
+CTYPE1 = 'MULTISPE'
+LTM1_1 = 1.
+CD1_1 = 1.
+
+CTYPE2 = 'MULTISPE'
+LTM2_2 = 1.
+CD2_2 = 1.
+.V2
+.DE
+.NH 2
+Linear Spline Dispersion Function
+.LP
+The parameters for the linear spline dispersion function are the number of
+spline pieces, $npieces$, and the range of physical coordinates, $p sub min$
+and $p sub max$, over which the function is defined and which are used to
+compute the spline coordinate $s$. Following the parameters are the
+$npieces+1$ coefficients, $c sub i$. The two coefficients used in a linear
+combination are selected based on the spline coordinate, where $a$ and $b$
+are the fractions of the interval in the spline piece between the spline
+knots, $a=(j+1)-s$ and $b=s-j$, and $x sub 0 =a$ and $x sub 1 =b$.
+
+.EQ I
+ W~=~sum from i=0 to 1 {c sub (i+j) cdot x sub i}
+.EN
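+.LP
+A minimal Python sketch, using the spline coordinate s and knot index j
+defined earlier (and ignoring the endpoint s = npieces), is:
+.V1
+# Minimal sketch: linear spline dispersion function.
+# coeffs holds the npieces+1 coefficients c_0 .. c_npieces.
+def linear_spline(s, j, coeffs):
+    a = (j + 1) - s
+    b = s - j
+    return coeffs[j] * a + coeffs[j + 1] * b
+.V2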
+.NH 2
+Cubic Spline Dispersion Function
+.LP
+The parameters for the cubic spline dispersion function are the number of
+spline pieces, $npieces$, and the range of physical coordinates, $p sub min$
+and $p sub max$, over which the function is defined and which are used
+to compute the spline coordinate $s$. Following the parameters are the
+$npieces+3$ coefficients, $c sub i$. The four coefficients used are
+selected based on the spline coordinate. The fractions of the interval
+between the integer spline knots are given by $a$ and $b$, $a=(j+1)-s$,
+$b=s-j$, and $x sub 0 =a sup 3$, $x sub 1 =(1+3 cdot a cdot (1+a cdot b))$,
+$x sub 2 =(1+3 cdot b cdot (1+a cdot b))$, and $x sub 3 =b sup 3$.
+
+.EQ I
+ W~=~sum from i=0 to 3 {c sub (i+j) cdot x sub i}
+.EN
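+.LP
+The analogous Python sketch for the cubic spline is:
+.V1
+# Minimal sketch: cubic spline dispersion function.
+# coeffs holds the npieces+3 coefficients; s and j as defined earlier.
+def cubic_spline(s, j, coeffs):
+    a = (j + 1) - s
+    b = s - j
+    x = (a ** 3,
+         1.0 + 3.0 * a * (1.0 + a * b),
+         1.0 + 3.0 * b * (1.0 + a * b),
+         b ** 3)
+    return sum(coeffs[j + i] * x[i] for i in range(4))
+.V2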
+.NH 2
+Pixel Array Dispersion Function
+.LP
+The parameters for the pixel array dispersion function consist of just the
+number of coordinates, $ncoords$. Following this are the wavelengths at
+integer physical pixel coordinates starting with 1. To evaluate a
+wavelength at some physical coordinate, not necessarily an integer, a
+linear interpolation is used between the nearest integer physical coordinates
+and the desired physical coordinate where $a$ and $b$ are the usual
+fractional intervals $k= roman "int" (p)$, $a=(k+1)-p$, $b=p-k$,
+and $x sub 0 =a$, and $x sub 1 =b$.
+
+.EQ I
+ W~=~sum from i=0 to 1 {c sub (i+k) cdot x sub i}
+.EN
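+.LP
+A minimal Python sketch of this interpolation, with the ncoords wavelengths
+passed as a list, is:
+.V1
+# Minimal sketch: pixel array dispersion function.
+# coords[0] is the wavelength at physical pixel 1, coords[1] at pixel 2, ...
+def pixel_array(p, coords):
+    k = min(int(p), len(coords) - 1)   # clamp so the last pixel works
+    a = (k + 1) - p
+    b = p - k
+    return coords[k - 1] * a + coords[k] * b
+.V2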
+.NH 2
+Sampled Array Dispersion Function
+.LP
+The parameters for the sampled array dispersion function consist of
+the number of coordinate pairs, $ncoords$, and a dummy field.
+Following these are the physical coordinate and wavelength pairs
+which are in increasing coordinate order. The two sample coordinates
+bracketing the desired physical coordinate are located and a linear
+interpolation is computed between the two sample points.
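+.LP
+A minimal Python sketch of the sampled array interpolation, with the pairs
+passed as a list of (physical coordinate, wavelength) tuples, is:
+.V1
+# Minimal sketch: sampled array dispersion function.
+# "samples" is a list of (physical coordinate, wavelength) pairs in
+# increasing coordinate order; no extrapolation is attempted.
+def sampled_array(p, samples):
+    for (p0, w0), (p1, w1) in zip(samples, samples[1:]):
+        if p0 <= p <= p1:
+            return w0 + (p - p0) / (p1 - p0) * (w1 - w0)
+    raise ValueError("physical coordinate outside the sampled range")
+.V2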