author     Joe Hunkeler <jhunkeler@gmail.com>  2015-08-11 16:51:37 -0400
committer  Joe Hunkeler <jhunkeler@gmail.com>  2015-08-11 16:51:37 -0400
commit     40e5a5811c6ffce9b0974e93cdd927cbcf60c157 (patch)
tree       4464880c571602d54f6ae114729bf62a89518057 /pkg/proto/vol/src/doc
Repatch (from linux) of OSX IRAF
Diffstat (limited to 'pkg/proto/vol/src/doc')
-rw-r--r--  pkg/proto/vol/src/doc/concept.hlp    177
-rw-r--r--  pkg/proto/vol/src/doc/i2sun.hlp      152
-rw-r--r--  pkg/proto/vol/src/doc/im3dtran.hlp    85
-rw-r--r--  pkg/proto/vol/src/doc/imjoin.hlp      76
-rw-r--r--  pkg/proto/vol/src/doc/proj.hlp       139
-rw-r--r--  pkg/proto/vol/src/doc/pvol.hlp       398
-rw-r--r--  pkg/proto/vol/src/doc/volumes.hlp     56
7 files changed, 1083 insertions, 0 deletions
diff --git a/pkg/proto/vol/src/doc/concept.hlp b/pkg/proto/vol/src/doc/concept.hlp
new file mode 100644
index 00000000..26a814b9
--- /dev/null
+++ b/pkg/proto/vol/src/doc/concept.hlp
@@ -0,0 +1,177 @@
+.help volumes
+
+[OUT OF DATE (Jan 89); this is an original pre-design document]
+
+.ce
+Conceptual Model for 2D Projections of Rotating 3D Images
+
+Consider the problem of visualizing a volume containing emissive and
+absorptive material. If we had genuine 3D display tools, we could imagine
+something like a translucent 3D image display that we could hold in our
+hand, peer into, and rotate around at will to study the spatial distribution
+of materials inside the volume.
+
+Lacking a 3D display device, we can resort to 2D projections of the interior
+of the volume. In order to render absorptive material, we need a light
+source behind the volume; this light gets attenuated by absorption as it
+passes through the volume toward the projection plane. In the general case
+light emitted within the volume contributes positively to the projected
+light intensity, but it also gets attenuated by absorbing material between
+it and the projection plane. At this point the projection plane has a range
+of intensities representing the combined absorption and emission of material
+along columns through the volume. But looking at a single 2D projection,
+there would be no way to determine how deep in the original volume a particular
+emitting or absorbing region lay. One way around this is to cause the volume
+to rotate, making a series of 2D projections. Playing the projections back
+as a movie gives the appearance of seeing inside a translucent rotating
+volume.
+
+Modelling the full physics of light transmission, absorption, refraction,
+etc. with arbitrary projective geometries would be quite computationally
+intensive and could rival many supercomputer simulations. However, it is
+possible to constrain the model such that an effective display can be
+generated allowing the viewer to grasp the essential nature of the spatial
+relationships among the volume data in reasonable computational time. This
+is called volume visualization, which can include a range of display
+techniques approximating the actual physics to varying extents. There is
+some debate whether visualization problems can best be attacked by simplified
+direct physical models or by different models, such as ones that might better
+enhance the \fBperception\fR of depth. We will stick with direct physical
+models here, though simplified for computer performance reasons.
+
+For computational purposes we will constrain the projection to be orthogonal,
+i.e. the light source is at infinity, so the projection rays are all parallel.
+With the light source at infinity behind the volume (a datacube), we need not
+model reflection at all. We will also ignore refraction (and certainly
+diffraction effects).
+
+We can now determine a pixel intensity on the output plane by starting
+at the rear of the column of voxels (volume elements) that project from
+the datacube onto that pixel. At each successive voxel along that column
+we will attenuate the light we started with by absorption, and add to it
+any light added by emission. If we consider emission (voxel intensity)
+alone, the projection would just be the sum of the contributing intensities.
+Absorption alone would simply decrease the remaining transmitted light
+proportionally to the opacity of each of the voxels along the column.
+Since we are combining the effects of absorption and emission, the ratio
+of the intensity of the original incident light to that of the interior
+voxels is important, so we will need a beginning intensity.
+
+The opacities have a physical meaning in the model. However, we are more
+interested here in visualizing the volume interior than in treating it as
+a pure physical model, so we add an opacity transformation function. This
+allows us to generate different views of the volume interior without having
+to modify all the raw opacity values in the datacube. For maximum flexibility
+we would like to be able to modify the opacity function interactively, e.g.
+with a mouse, but this would amount to computing the projections in real
+time and is not likely at present.
+
+.nf
+
+Let: i = projected intensity before considering the current
+ voxel
+ i' = intensity of light after passing through the current
+ voxel
+	I0 = initial light intensity (background illumination
+ before encountering the volume)
+ Vo = opacity at the current voxel, range 0:1 with
+ 0=transparent, 1=opaque
+ Vi = intensity at the current voxel
+ f(Vo) = function of the opacity at the current voxel,
+ normalized to the range 0:1
+ g(Vi) = function of the voxel's intensity, normalized
+ to the range 0:1
+
+Then: i' = i * (1 - f(Vo)) + g(Vi)
+	      [initial i = I0, then iterate over all voxels in path]
+
+.fi
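The recurrence above is simple to sketch in code. Here is a minimal Python version (hypothetical helper name `composite`; `f` and `g` stand for whatever opacity and intensity transformation functions are in force):

```python
def composite(column, f, g, i0):
    """Project one column of (opacity, intensity) voxel pairs.

    column is ordered back to front; i0 is the background
    illumination entering the rear of the volume.
    """
    i = i0
    for vo, vi in column:
        # attenuate the light accumulated so far, then add this
        # voxel's emission:  i' = i * (1 - f(Vo)) + g(Vi)
        i = i * (1.0 - f(vo)) + g(vi)
    return i

# Fully transparent, non-emitting voxels pass the background through:
print(composite([(0.0, 0.0)] * 5, lambda o: o, lambda s: s, 1.0))          # 1.0
# An opaque, non-emitting voxel at the front blocks everything behind it:
print(composite([(0.5, 0.2), (1.0, 0.0)], lambda o: o, lambda s: s, 1.0))  # 0.0
```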
+
+We want to choose the opacity and intensity transformation functions in such
+a way that we can easily control the appearance of the final projection.
+In particular, we want to be able to adjust both the opacity and intensity
+functions to best reveal the interior details of the volume during a
+rotation sequence. For example, we might want to eliminate absorption
+"noise" so that we can see through it to details of more interest, so we
+need a lower opacity cutoff. Likewise, we would want an upper opacity
+cutoff above which all voxels would appear opaque. We will need the same
+control over intensity.
+
+.nf
+Let: o1 = lower voxel opacity cutoff
+ o2 = upper voxel opacity cutoff
+ t1 = lower transmitted intensity cutoff
+ t2 = upper transmitted intensity cutoff
+ i1 = lower voxel intensity cutoff
+ i2 = upper voxel intensity cutoff
+     Imax = output intensity maximum for intensity transform function
+.fi
+
+Now all we need is the form of the opacity and intensity functions between
+their input cutoffs. A linear model would seem to be useful, perhaps with
+logarithmic and exponential options later to enhance the lower or upper
+end of the range. f(Vo) is constrained to run between 0 and 1, because
+after being subtracted from 1.0 it is the intensity attenuation factor
+for the current voxel.
+
+.nf
+
+ Opacity Transformation Function f(Vo):
+
+ { Vo < o1 : 0.0 }
+ { }
+ { o1 <= Vo < o2 : (t2 - (Vo - o1)(t2 - t1)) }
+ { ( --------- ) }
+ f(Vo) = { ( (o2 - o1) ) }
+ { ------------------------- }
+ { I0 }
+ { }
+ { o2 <= Vo : 1.0 }
+
+
+backg. int. I0-|
+ |
+ t2-|------ Transmitted Intensity
+ | ` as function of opacity
+ | ` (ignoring independent
+i * (1 - f(Vo))-|..............` voxel intensity contri-
+ | . ` bution)
+ | . `
+ | . `
+ t1-| . |
+ | . |
+ +____________________________________________
+ | | |
+ o1 Vo o2
+ Voxel Opacity
+
+ ------------------------------------------------------------
+
+ Intensity Transformation Function g(Vi):
+
+ { Vi < i1 : 0.0
+ {
+ { i1 <= Vi < i2 : (Vi - i1) * Imax
+ { ---------
+ g(Vi) = { (i2 - i1)
+ {
+ { i2 <= Vi : Imax
+ {
+ {
+ {
+
+
+ |
+ |
+ Imax-| ---------------------
+ | /
+ g(Vi)-|................./
+ | / .
+ | / .
+ | / .
+ 0.0 +___________________________________________
+ | | |
+ i1 Vi i2
+ Voxel Intensity
+.fi
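Both transformation functions are clipped linear ramps between their cutoffs. A minimal Python sketch follows (hypothetical names; the opacity function here is reduced to a plain 0:1 ramp between o1 and o2 rather than the transmitted-intensity form diagrammed above):

```python
def ramp(x, lo, hi, ymax=1.0):
    """Clipped linear ramp: 0 below lo, ymax at or above hi."""
    if x < lo:
        return 0.0
    if x >= hi:
        return ymax
    return (x - lo) / (hi - lo) * ymax

# Intensity transformation g(Vi), cutoffs i1=0.1, i2=0.9, Imax=255:
g = lambda vi: ramp(vi, 0.1, 0.9, ymax=255.0)
# Opacity transformation f(Vo), cutoffs o1=0.2, o2=0.8:
f = lambda vo: ramp(vo, 0.2, 0.8)
```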
+
diff --git a/pkg/proto/vol/src/doc/i2sun.hlp b/pkg/proto/vol/src/doc/i2sun.hlp
new file mode 100644
index 00000000..70448d64
--- /dev/null
+++ b/pkg/proto/vol/src/doc/i2sun.hlp
@@ -0,0 +1,152 @@
+.help i2sun Oct88 local
+.ih
+NAME
+i2sun -- convert IRAF images to Sun rasterfiles
+.ih
+USAGE
+i2sun input output z1 z2
+.ih
+PARAMETERS
+.ls input
+Input image template, @file, n-dimensional image, or combination.
+.le
+.ls output
+Root template for output images, e.g. "home$ras/frame.%d".
+.le
+.ls clutfile
+Previously saved Sun rasterfile (e.g. output from IMTOOL), containing the
+color/greyscale lookup table information to be passed along to each output
+frame. Standard ones can be saved and used with any number of images (e.g.
+"pseudo.ras").
+.le
+.ls z1 = INDEF, z2 = INDEF
+Minimum and maximum pixel/voxel intensities to scale to full output
+color/greyscale range. Both are required parameters, and will apply to all
+images in the sequence.
+.le
+.ls ztrans = "linear"
+Intensity transformation on input data (linear|log|none|user).
+If "user", you must also specify \fIulutfile\fR.
+.le
+.ls ulutfile
+Name of text file containing the look up table when \fIztrans\fR = user.
+The table should contain two columns per line; column 1 contains the
+intensity, column 2 the desired greyscale output.
+.le
+.ls xsize = INDEF, ysize = INDEF
+If specified, these will be the dimensions of all output Sun rasterfiles
+in pixels. The default will be the same size as the input images (which
+could vary, though this would create a jittery movie).
+.le
+.ls xmag = 1.0, ymag = 1.0
+Another way to specify output rasterfile dimensions. These are the
+magnification factors to apply to the input image dimensions.
+.le
+.ls order = 1
+Order of the interpolator to be used for spatially interpolating the image.
+The current choices are 0 for pixel replication, and 1 for bilinear
+interpolation.
+.le
+.ls sliceaxis = 3
+Image axis from which to cut multiple slices when input image dimension is
+greater than 2. Only x-y sections are allowed, so \fIsliceaxis\fR must
+be 3 or greater.
+.le
+.ls swap = no
+Swap rasterfile bytes on output? Used when rasterfiles are being written
+to a computer with opposite byte-swapping from that of the home computer
+(e.g. between VAX and Sun).
+.le
+
+
+.ih
+DESCRIPTION
+
+Given a series of IRAF images, an intensity transformation, and a file
+containing color/greyscale lookup table information, produces one 2d image
+in Sun rasterfile format for each 2D IRAF image. This is a temporary task
+usually used as a step in creating filmloops for playback by a Sun Movie
+program.
+
+The input images may be specified as an image template ("zoom*.imh"),
+an "@" file ("@movie.list"), or as an n-dimensional image from which to
+create multiple 2d rasterfiles. If any images in a list are nD images,
+all 2d sections from the specified \fIsliceaxis\fR will be written out
+(default = band or z axis). At present, only x-y sections may be made,
+i.e. the slice axis must be axis 3 or higher.
+
+The minimum and maximum pixel/voxel intensities, z1 and z2, must be
+specified: computing the full z-range of each image in a sequence would
+not only be inefficient, it would also produce very jumpy movies.
+Between input intensities z1 and z2, the pixel intensities may be transformed
+according to the \fIztrans\fR parameter: "linear", "log", "none",
+or "user".
+
+When \fIztrans\fR = "user", a look up table of intensity values and their
+corresponding greyscale levels is read from the file specified by the
+\fIulutfile\fR parameter. From this information, a piecewise linear
+look up table containing 4096 discrete values is composed. The text
+format table contains two columns per line; column 1 contains the
+intensity, column 2 the desired greyscale output. The greyscale values
+specified by the user must match those available on the output device.
+Task \fIshowcap\fR can be used to determine the range of acceptable
+greyscale levels.
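The expansion of the user table into 4096 discrete values can be sketched as follows (hypothetical function name, not the IRAF code; the two-column control points are those read from the \fIulutfile\fR):

```python
def build_lut(pairs, n=4096):
    """Expand (intensity, greylevel) control points into a
    piecewise-linear lookup table of n discrete values."""
    pairs = sorted(pairs)
    lo, hi = pairs[0][0], pairs[-1][0]
    lut = []
    for k in range(n):
        x = lo + (hi - lo) * k / (n - 1)
        # find the bracketing control points and interpolate
        for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0) if x1 > x0 else 0.0
                lut.append(y0 + t * (y1 - y0))
                break
    return lut

lut = build_lut([(0.0, 0), (50.0, 200), (100.0, 255)])
```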
+
+A color table file (\fIclutfile\fR) may be produced on a Sun workstation from
+IMTOOL (see IMTOOL manual page, R_RASTERFILE parameter and Imcopy function).
+This file may be specified to I2SUN as the \fIclutfile\fR parameter.
+Likewise, any rasterfiles previously created with
+I2SUN may be used as input clutfiles.
+
+The output rasterfile dimensions may be larger or smaller than the input
+images (see parameters \fIxsize\fR and \fIysize\fR, or \fIxmag\fR and
+\fIymag\fR). The parameter \fIorder\fR controls the mode of interpolation;
+0=pixel replication, 1=bilinear.
+
+If the output rasterfiles are being sent to a computer with opposite
+byte-swapping characteristics, set \fIswap\fR = yes (e.g., when running
+I2SUN on a VAX, with output to a Sun).
+
+
+.ih
+EXAMPLES
+
+.nf
+1. Produce a series of Sun rasterfiles in tmp$mydir/movie/,
+ using a pseudocolor color table file saved earlier, with
+ input greylevels scaled between 10 and 100.
+
+ cl> i2sun nzoom*.imh tmp$mydir/movie/frame.%d \
+ home$colors/pseudo.ras 10 100
+
+2. Make a movie through the z, or band, axis of a datacube.
+
+ cl> i2sun cube tmp$cubemovie/frame.%d 1 256
+
+3. Make a movie through the 4th, or hyper-axis of a datacube,
+ holding image band 10 constant.
+
+ cl> i2sun hypercube[*,*,10,*] tmp$movie/frame.%d 1 256 \
+ sliceaxis=4
+
+4. Run I2SUN on a VAX, with output to a Sun.
+
+ cl> i2sun @imlist sunnode!home$ras/frame.%d 1 256 swap+
+
+.fi
+
+.ih
+TIMINGS
+49 seconds (1 sec/frame) to produce 50 100*100 rasterfiles from a
+100*100*50 datacube with no magnification, on a diskless Sun-3/110
+using NFS to Eagle disks on a lightly loaded Sun-3/160 fileserver
+(load factor < 1.5).
+5 minutes for the same with a magnification factor of 2 in both x and y,
+bilinear interpolation.
+20 minutes for the same with a magnification factor of 5 in both x and y.
+.ih
+BUGS
+.ih
+SEE ALSO
+display, imtool, volumes.pvol
+.endhelp
diff --git a/pkg/proto/vol/src/doc/im3dtran.hlp b/pkg/proto/vol/src/doc/im3dtran.hlp
new file mode 100644
index 00000000..75fd85fe
--- /dev/null
+++ b/pkg/proto/vol/src/doc/im3dtran.hlp
@@ -0,0 +1,85 @@
+.help im3dtran Jan89 volumes
+.ih
+NAME
+im3dtran -- 3d image transpose, any axis to any other axis
+.ih
+USAGE
+im3dtran input output
+.ih
+PARAMETERS
+.ls input
+Input 3d image (datacube).
+.le
+.ls output
+Transposed datacube.
+.le
+.ls len_blk = 128
+Size in pixels of linear internal subraster. IM3DTRAN will try to transpose
+a subraster up to len_blk cubed at one time. Runtime is much faster with
+larger \fBlen_blk\fR, but the task may run out of memory.
+.le
+.ls new_x = 3
+New x axis = old axis (1=x, 2=y, 3=z). Default (3) replaces new x with old z.
+.le
+.ls new_y = 2
+New y axis = old axis. Default (2) is identity.
+.le
+.ls new_z = 1
+New z axis = old axis. Default (1) replaces new z with old x.
+.le
+
+.ih
+DESCRIPTION
+
+IM3DTRAN is very similar to IMAGES.IMTRANSPOSE, except that it can accomplish
+3d image transposes. In 3 dimensions, it is necessary to specify which old
+axes map to the new axes. In all cases, IM3DTRAN maps old axis element 1 to
+new axis element 1, i.e. it does not flip axes, just transposes them.
+
+If one wants to use IM3DTRAN to rotate a datacube 90 degrees in any direction,
+it is necessary to use a combination of flip and transpose (just like in the
+2d case). For example, let the original datacube be visualized with its
+origin at the lower left front when seen by the viewer, with the abscissa
+being the x axis (dim1), ordinate the y axis (dim2), and depth being the
+z axis (dim3), z increasing away from the viewer or into the datacube [this
+is a left-handed coordinate system]. One then wants to rotate the datacube
+by 90 degrees clockwise about the y axis when viewed from +y (the "top");
+this means the old z axis becomes the new x axis, and the old x axis becomes
+the new z axis, while new y remains old y. In this case the axis that must
+be flipped prior to transposition is the \fBx axis\fR; see Example 1.
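With the cube held as nested lists indexed [z][y][x], the flip-plus-transpose of Example 1 can be sketched in Python (hypothetical helper names, not the IRAF implementation):

```python
def flip_x(cube):
    """Reverse the x (column) axis: the [-*,*,*] image section."""
    return [[row[::-1] for row in plane] for plane in cube]

def transpose_xz(cube):
    """Swap x and z (new_x=3, new_y=2, new_z=1); element 1 of each
    old axis maps to element 1 of the new axis, with no flipping."""
    nz, ny, nx = len(cube), len(cube[0]), len(cube[0][0])
    return [[[cube[z][y][x] for z in range(nz)]
             for y in range(ny)]
            for x in range(nx)]

cube = [[[1, 2], [3, 4]],      # band z=1, rows are lines, cells are columns
        [[5, 6], [7, 8]]]      # band z=2
# 90-degree clockwise rotation about y, as seen from +y:
rotated = transpose_xz(flip_x(cube))
```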
+
+The parameter \fBlen_blk\fR controls how much memory is used during the
+transpose operation. \fBlen_blk\fR elements are used in each axis at a
+time, or a cube len_blk elements on a side. If \fBlen_blk\fR is too large,
+the task will abort with an "out of memory" error. If it is too small,
+the task can take a very long time to run. The maximum size of len_blk
+depends on how much memory is available at the time IM3DTRAN is run,
+and the size and datatype of the image to be transposed.
+
+.ih
+EXAMPLES
+
+.nf
+1. For an input datacube with columns = x = abscissa, lines = y = ordinate,
+ and bands = z = depth increasing away from viewer, and with the image
+ origin at the lower left front, rotate datacube 90 degrees clockwise
+ around the y axis when viewed from +y (top):
+
+ cl> im3dtran input[-*,*,*] output 3 2 1
+
+.fi
+
+.ih
+TIMINGS
+
+[Not available yet]
+
+.ih
+BUGS
+
+[Not available yet]
+
+.ih
+SEE ALSO
+pvol, i2sun
+.endhelp
diff --git a/pkg/proto/vol/src/doc/imjoin.hlp b/pkg/proto/vol/src/doc/imjoin.hlp
new file mode 100644
index 00000000..6d7a59a1
--- /dev/null
+++ b/pkg/proto/vol/src/doc/imjoin.hlp
@@ -0,0 +1,76 @@
+.help imjoin Jan89 images
+.ih
+NAME
+imjoin -- join input images into output image along specified axis
+.ih
+USAGE
+imjoin input output
+.ih
+PARAMETERS
+.ls input
+Input images or @file
+.le
+.ls output
+Output joined image
+.le
+.ls joindim = 1
+Image dimension along which the input images will be joined.
+.le
+.ls outtype = ""
+Output image datatype. If not specified, defaults to highest precedence
+input image datatype.
+.le
+
+.ih
+DESCRIPTION
+
+IMJOIN concatenates a set of input images into a single output image,
+in a specified dimension only. For example, it can join a set of one
+dimensional images into a single, long one dimensional image, or a
+set of one dimensional images into a single two dimensional image.
+IMJOIN may be used to piece together datacubes into larger
+datacubes, either in x, y, or z; likewise with higher dimensional images.
+
+For joining a set of 1 or 2 dimensional images in both x and y at the same
+time, see IMMOSAIC. For stacking images of any dimension into an image
+of the next higher dimension, see IMSTACK. Although IMJOIN can also
+stack a set of images into a single higher dimensional image, IMSTACK
+is more efficient for that operation. In most cases, IMJOIN must keep
+all input images open at the same time, while IMSTACK does not (there may
+be limitations on the number of files that can be kept open at one time).
+Use IMJOIN primarily when joining a set of images along any dimension that
+is not the next higher one from that of the input images.
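The join operation for the 2d case can be sketched with nested lists (hypothetical name, not the IRAF code; joindim=1 splices columns, joindim=2 stacks lines):

```python
def imjoin(images, joindim):
    """Concatenate 2-d images (lists of rows) along joindim."""
    if joindim == 2:                       # stack rows (lines)
        return [row for im in images for row in im]
    # joindim == 1: splice each row across the inputs (columns)
    return [sum((im[j] for im in images), [])
            for j in range(len(images[0]))]

a = [[1, 2], [3, 4]]
b = [[5], [6]]
wide = imjoin([a, b], 1)   # 2 lines x 3 columns
tall = imjoin([a, a], 2)   # 4 lines x 2 columns
```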
+
+.ih
+EXAMPLES
+
+.nf
+1. Join a list of one dimensional spectra into a single long image.
+
+ cl> imjoin @inlist output 1
+
+2. Join three datacubes along the z direction.
+
+ cl> imjoin c1,c2,c3 fullxcube 3
+
+.fi
+
+.ih
+TIMINGS
+
+Join ten 5000-column short spectra into one 50000-column image:
+6 seconds on a diskless Sun-3.
+
+Join two 512*512 images: 28 seconds on a diskless Sun-3. Join two
+50*50*50 datacubes in x, y, or z: 15 seconds.
+
+.ih
+BUGS
+
+There may be limitations on the number of input images that can be handled
+in one execution on some systems.
+
+.ih
+SEE ALSO
+immosaic, imstack, imslice
+.endhelp
diff --git a/pkg/proto/vol/src/doc/proj.hlp b/pkg/proto/vol/src/doc/proj.hlp
new file mode 100644
index 00000000..f0ed8a3e
--- /dev/null
+++ b/pkg/proto/vol/src/doc/proj.hlp
@@ -0,0 +1,139 @@
+.help volumes Jan89 "Volume Rotation-Projection Algorithm"
+
+.ce
+Volume Rotation-Projection Algorithm
+.ce
+January 1989
+
+.sh
+Introduction
+
+See help for VOLUMES and PVOL for general information. Here we describe
+the volume projection algorithm used in PVOL.
+
+.sh
+Algorithms for Collecting Object Voxels that Project onto Image Plane
+
+PVOL is a task for making successive projections through a 3d image onto
+2d images placed along a great circle around an input datacube, with varying
+degrees of translucency. The technique of viewing successive projections
+around the input datacube causes interior features to appear to "orbit"
+the axis of datacube rotation; the apparent orbital radii generate the
+illusion of seeing in three dimensions. We limit ourselves to parallel rather
+than perspective projections as the computations are simpler and the resulting
+images preserve distance ratios.
+
+When we are considering orthogonal projections only, the 3D problem becomes
+a 2D problem geometrically, collapsed into a plane at right angles to the
+datacube rotation axis. Otherwise a full 3D solution would be needed.
+To keep things straight, I will use "object voxel"
+to represent voxels from the input volume image and "image pixel" to represent
+output pixels in the projection plane.
+
+In addition to the projections being parallel, we also want them centered
+and the projection plane perpendicular to the projection rays (we always want
+to be looking toward the center of the volume regardless of the rotation angle).
+Thus we will always orient the center of the projection plane perpendicular
+to the ray passing through the center of the volume for the given rotation
+angle.
+
+Methods in the literature include back-to-front (BTF) and front-to-back (FTB)
+traversals, digital differential analyzer (DDA) techniques, and octree
+encoding. Because of the nature of our light-transmission algorithm, we
+must choose a BTF approach. For standard ray-tracing applications, involving
+discrete objects within the volume image space, octree techniques can be
+the most efficient, depending on the ratio of filled to un-filled space and
+number of objects. However, for arbitrary voxel images (no explicit geometric
+surfaces included, so every voxel must be examined) simpler techniques are
+considered more efficient. There are basically two approaches:
+[1] image-plane order: build up the output image one line at a time by
+computing all contributing voxels, and
+[2] volume-image order: traverse the voxels one line at a time, building
+up the output image in successive overlapping sheets.
+
+The image-plane order approach is similar to rasterizing a line segment, namely
+the projection ray through the lattice of voxels. Examples are the incremental
+algorithm discussed in Foley and Van Dam (p. 432), implemented with
+modifications in the IRAF SGI kernel, and Bresenham's algorithm, outlined in
+the same place. Both methods can be extended to include information from
+extra surrounding voxels, similar to anti-aliasing problems, and this may
+be necessary for effective volume projections, especially of small spatial
+resolution volumes. This approach may not necessarily be the most efficient
+if the volume image cannot be held in memory and must be accessed randomly
+from disk. Initially, we will code this algorithm only for the case where the
+rotation is around the X axis of the volume and the viewing direction is
+perpendicular to that axis.
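For the x-axis rotation case the problem is 2d, so the incremental algorithm amounts to stepping along a line in the YZ plane. A rough Python sketch (center ray only, hypothetical names, no fractional-coverage weighting of neighboring voxels):

```python
import math

def ray_voxels(theta, ny, nz):
    """(y, z) voxels pierced, in traversal order, by the parallel ray
    through the center of an ny x nz slice at rotation angle theta."""
    cy, cz = (ny - 1) / 2.0, (nz - 1) / 2.0
    dy, dz = math.sin(theta), math.cos(theta)   # unit step along the ray
    voxels = []
    for k in range(-max(ny, nz), max(ny, nz) + 1):
        y, z = round(cy + k * dy), round(cz + k * dz)
        # keep in-bounds voxels, skipping duplicates from rounding
        if 0 <= y < ny and 0 <= z < nz and (y, z) not in voxels:
            voxels.append((y, z))
    return voxels
```

As the text notes, such a list need only be computed once per rotation angle and can then be reused for every image line.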
+
+[Discussion of various algorithms for determining which set of voxels gets
+included along a given projection ray follows. After this was coded, it
+became apparent that runtime was largely dominated by the voxel memory
+accesses after the voxel lists have been prepared. Consequently, the
+incremental algorithm is all that is now used.]
+
+The straightforward incremental algorithm would be the simplest to implement,
+though not the most efficient. Bresenham's algorithm, extended to include
+information from fractionally pierced neighboring voxels, would be more
+efficient as it need not use any real variables, and therefore does not
+require rounding. Both these methods choose a single ray at a time hitting
+the projection plane, and proceed along that ray, determining which voxels
+contribute, and their weights, which are proportional to the path length
+of the ray through the object voxels. By proceeding from back to front, we are
+guaranteed that each contributing voxel from the volume succeeds any previous
+one arriving at the current output pixel. Thus, we can use the output
+pixel to store the results of any previous light transmission and absorption
+operation, and feed that value back in to combine with the properties of
+the next contributing volume voxel. This method fills up the image plane
+in line-sequential order. Of course, we determine the list of object voxels
+contributing to a given line of image pixels only once per rotation.
+
+In the volume-image order approach the input voxels are traversed line by line
+in any correct BTF order; they can always be accessed band by band if that is
+the disk storage order. This method fills up the image plane in successive
+sheets, continually updating the image pixels previously written as it goes.
+Determining which image pixel should be hit by the current object voxel
+requires a transformation matrix. However, the information in the matrix can
+be pre-multiplied with all possible values of voxel coordinates and stored in
+a lookup table, resulting in much more efficient code than a straightforward
+matrix multiplication for each object voxel (Frieder, Gordon, and Reynolds,
+IEEE CG&A, Jan 1985, p. 52-60). Due to the significantly increased
+computation time, this approach should only be used when datacube projections
+are desired along any arbitrary 3D orientation.
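The pre-multiplied lookup-table idea can be sketched as follows (hypothetical names; for rotation about x, each voxel's projected ordinate decomposes into a y term plus a z term):

```python
import math

def rotation_luts(theta, ny, nz):
    """Pre-multiply the rotation with every possible y and z index,
    so the inner voxel loop is two table lookups and an add rather
    than a matrix multiply per voxel."""
    c, s = math.cos(theta), math.sin(theta)
    cy, cz = (ny - 1) / 2.0, (nz - 1) / 2.0
    yproj = [(y - cy) * c for y in range(ny)]   # y contribution
    zproj = [(z - cz) * s for z in range(nz)]   # z contribution
    return yproj, zproj

yproj, zproj = rotation_luts(math.pi / 6, 64, 64)
# inner loop per voxel (y, z): ordinate = yproj[y] + zproj[z]
```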
+
+In the current implementation only rotations by PVOL around the image X
+axis are allowed. If rotation is desired about either Y or Z, it is easy
+to first rotate the input image, then run PVOL around the new X axis.
+See IM3DTRAN for help in rotating datacubes.
+
+.sh
+Memory Management
+
+Now we know how to construct a list of indices of input voxels in
+BTF order that impinge upon a given pixel in the projection plane.
+The original PVOL prototype used line-oriented image i/o to access
+the datacube. Profiles showed 90% of task execution time spent in
+OS-level reads. Various other approaches were investigated, which
+determined that actual voxel-value i/o was the most important factor
+in performance. Since "in-core" i/o is the fastest, the problem became
+one of getting as much of the input datacube into main memory as possible.
+
+A maximum working set size parameter was added to the task, along with
+code that attempts to grab that much memory, then cascades down to a
+reasonable amount if the requested amount was too much (i.e., had
+adverse effects on PVOL or other processes). Given a fixed amount of
+available memory smaller than that
+required to hold the entire datacube in memory, the fastest way is to
+volume-project through successive groups of YZ slices. A single YZ slice
+of the datacube is sufficient for projecting any and all great-circle
+orientations (360 degrees around the X axis). The more YZ slices that
+can be held in memory, the better. If there is room for N YZ slices at
+a time, and there are COLUMNS voxels in the X direction, then all volume
+rotations must be made in each of (COLUMNS/N) passes.
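The pass-count arithmetic can be sketched directly (hypothetical names and sizes, a back-of-envelope planner rather than PVOL's actual memory code):

```python
def plan_passes(columns, ny, nz, voxel_bytes, max_ws_bytes):
    """How many YZ slices fit in the working set, and how many
    passes over the full rotation sequence that implies."""
    slice_bytes = ny * nz * voxel_bytes
    n = max(1, min(columns, max_ws_bytes // slice_bytes))
    passes = -(-columns // n)          # ceil(columns / n)
    return n, passes

# e.g. a 128-voxel cube of 2-byte voxels with a ~2 Mb working set:
n, passes = plan_passes(128, 128, 128, 2, 2_000_000)
```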
+
+This approach sped things up by about a factor of 20 over random
+line-oriented i/o. For very large datacubes (on the order of 500 voxels
+on a side), on the order of 10 passes are required when the task working
+set is in the 10Mb range. Clearly, available memory and/or very fast
+disk i/o dominates volume rotations. A general purpose workstation with
+enough main memory can apparently approach the speed of the specialized
+processors usually used in volume rendering.
+
+
diff --git a/pkg/proto/vol/src/doc/pvol.hlp b/pkg/proto/vol/src/doc/pvol.hlp
new file mode 100644
index 00000000..30ae4f38
--- /dev/null
+++ b/pkg/proto/vol/src/doc/pvol.hlp
@@ -0,0 +1,398 @@
+.help pvol Jan89 volumes
+.ih
+NAME
+pvol -- project rotations of a volume datacube onto series of 2d images
+.ih
+USAGE
+pvol input output
+.ih
+PARAMETERS
+.ls input
+Input 3d or 4d image (datacube).
+.le
+.ls output
+Output datacube, one image band per rotation (type real only).
+.le
+.ls nframes = (360 / \fBdegrees\fR)
+Number of frames to generate, 1 per rotation.
+.le
+.ls degrees = 10
+Number of degrees to rotate datacube for each successive projection.
+.le
+.ls theta0 = 0.0
+Initial projection angle for rotation sequence by \fBdegrees\fR increments.
+Measured counterclockwise from +x axis when looking back toward the image
+origin.
+.le
+.ls ptype = 2
+Projection type;
+1 = opacity: attenuation along projection column by voxel opacity value.
+2 = average voxel intensity along projection column.
+3 = sum of voxel intensities.
+4 = proportional distance weighting: voxel intensity
+along projection column weighted by (curvoxel / voxels_in_column)
+**\fBdispower\fR.
+5 = mod(n): same as proportional distance weighting, but use only voxel values
+which match mod(normalized_voxel * 100) = \fBmodn\fR.
+6 = use last voxel value within cutoffs only.
+.le
+.ls imin, imax = INDEF
+Input voxel intensity ranges within which to apply intensity transformation.
+Defaults to input image min and max if not specified (see comments below).
+.le
+.ls omin, omax = INDEF
+Input voxel opacity ranges within which to apply opacity transformation.
+Defaults to input image min and max if not specified (see comments below).
+.le
+.ls amin, amax = 0.0, 1.0
+Attenuation factor minimum and maximum for ptype=1 (opacity). Voxel values
+<= omin map to attenuation factor amin, >= omax map to attenuation amax.
+.le
+.ls izero = 1.0
+Initial background illumination intensity when \fBptype\fR = 1 (opacity).
+This intensity will be attenuated consecutively by (transformed voxel_value *
+\fBoscale\fR)
+along the projection column toward the projection plane.
+.le
+.ls oscale = 1.0
+Voxel opacity scale factor. Multiplied by voxel value before attenuating
+remaining light along projection column for \fBptype\fR = 1.
+.le
+.ls opacelem = 1
+Opacity element in 4th dimension of input image. When input image is 4d,
+and there are two elements in the 4th dimension, the \fBopacelem\fR element
+will be treated as opacity and the other will be considered intensity.
+.le
+.ls dispower = 2.0
+Inverse distance weighting power for \fBptype\fR = 4,5. Voxel intensities will
+be multiplied by (voxel position in column / voxels in column) **
+\fBdispower\fR before being summed into the output projection pixel.
+.le
+.ls discutoff = no
+When distance weighting, measure the distance within that set of projecting
+voxels that lies between the intensity cutoffs rather than from
+the edges of the datacube. Usually results in faster run times and is
+appropriate when the interior of a well-defined object is of interest
+rather than its placement inside the datacube.
+.le
+.ls modn = 10
+For ptype=5, only voxel values satisfying mod (int (voxval * 100.0)) =
+\fBmodn\fR will be proportional distance-weighted and summed into
+projection pixel. Useful for viewing volume interiors with high contrast
+voxel values (like solid objects in an otherwise empty datacube).
+.le
+.ls vecx = 1.0
+Rotation axis X vector. Part of the specification of a three-dimensional
+orientation vector around which the datacube will appear to rotate when
+viewed from the front. PROTOTYPE only supports rotations around the x axis.
+.le
+.ls vecy, vecz = 0.0
+Rotation axis Y and Z vectors. In prototype, must be zero.
+.le
+.ls title = ""
+Output datacube title for rotation sequence.
+.le
+.ls maxws = 2000000
+Maximum working set size in chars (usually 2 bytes). Decrease if machine
+performance degrades noticeably during a run. Increase if the machine has
+lots of memory and PVOL does not affect other processes.
+.le
+.ls abs = no
+If yes, take absolute value of voxel before applying any transformation.
+.le
+.ls verbose = yes
+Report memory usage, progress around the rotation, and more detail on
+errors if yes.
+.le
+
+
+.ih
+DESCRIPTION
+
+PVOL is used for visualizing the interiors of three-dimensional images.
+Opacity and intensity information is used to construct projected 2d images
+approximating an "xray" view through the original "solid", with varying
+amounts of apparent translucency. Playing the resulting 2d images back
+rapidly as a filmloop generates the impression of a rotating translucent
+datacube inside of which you can view much of the original information with
+the illusion of seeing it in 3 dimensions.
+
+Given an input datacube plus rotation and projection parameters, PVOL
+produces a series of projected 2d images written out as another datacube.
+Rotation parameters control the number of frames to project, their
+angular separation, and the three components (\fBvecx\fR, \fBvecy\fR,
+\fBvecz\fR) of the rotation axis.
+In the prototype, only one rotation axis is allowed, counterclockwise
+about the X-axis when viewed facing the origin from +X (however, the user
+is viewing the datacube from -Z, and so sees the datacube rotating toward
+him/her). When off-axis rotations are added, the view angle will still be
+from the front of the datacube.
+Non-orthogonal rotations in the prototype will have to be accomplished by
+first rotating the input datacube appropriately with other tools.
+
+Projection parameters
+provide control over the appearance of the projected images. They may be
+tuned to visually enhance the apparent placement of interior regions in three
+dimensions during the rotation sequence. Frames from the output datacube
+may be viewed individually on standard image display devices, may be
+played back rapidly with filmloop tools, or may be recorded to video as
+smooth, rotating volumes. [At present the only filmloop tool available to us
+is MOVIE on Sun workstations, which requires preprocessing the datacube
+output from this task with another task called I2SUN].
+
+Sequences where the volume's rotation axis is the same as the viewing or
+projection axis are little more useful than a block average of the datacube,
+as hidden regions never rotate into view. Volume rotations about the cube's
+X-axis (viewed from the front, or -Z) are the fastest and the only type
+implemented in the prototype.
+
+The \fBptype\fR parameter provides control over the type of projection.
+There are three main types of projection: opacity, intensity, and both
+together. If the
+input datacube is 4-dimensional, with two elements in the 4th dimension,
+both opacity and intensity information will be used -- first the remaining
+light along the projection will be attenuated by the opacity function, then
+the new voxel's intensity contribution added, according to \fBptype\fR. Before
+the projection function is applied, the raw voxel intensity or opacity is
+clipped and scaled by transformation functions under control of task
+parameters.
+
+The image MIN and MAX must be present in the input image header, or they
+will default to 0.0 and 1.0 and a warning will be issued (run IMAGES.MINMAX
+with \fBupdate\fR=yes to set them if not already present).
+If intensity information is being used, \fBimin\fR and \fBimax\fR
+must be specified, or they will default to the image min and max.
+First we consider the intensity/opacity transformation functions, then we
+discuss how the transformed value contributes to the final projected image.
+
+.nf
+ Intensity transformation:
+
+ if (voxval < imin)
+ newval = imin
+ else if (imin <= voxval && voxval < imax)
+ newval = im_min + (im_max-im_min) * (voxval-imin)/(imax-imin)
+ else
+ newval = imax
+
+ Opacity transformation (0.0 <= attenuation <= 1.0):
+ if (voxval < omin) # let maximum amount of light through
+ attenuation = amax
+ else if (omin <= voxval && voxval < omax)
+        attenuation = amax - (amax-amin) * (voxval*oscale - omin) /
+                             (omax-omin)
+ else # let minimum amount of light through
+ attenuation = amin
+
+.fi
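The two transformations above can be sketched in ordinary Python for clarity. This is an illustration only, not PVOL's internals; the opacity branch is written in the continuous form in which attenuation falls from \fBamax\fR to \fBamin\fR as the scaled voxel value rises from \fBomin\fR to \fBomax\fR, matching the branch comments, and the scale factor is applied uniformly before comparison.

```python
def transform_intensity(voxval, imin, imax, im_min, im_max):
    """Clip and rescale a voxel intensity (ptype >= 2).
    im_min/im_max are the image MIN/MAX from the header."""
    if voxval < imin:
        return imin
    elif voxval < imax:
        return im_min + (im_max - im_min) * (voxval - imin) / (imax - imin)
    else:
        return imax

def transform_opacity(voxval, omin, omax, amin, amax, oscale=1.0):
    """Map a voxel opacity to an attenuation factor in [amin, amax].
    Attenuation falls continuously from amax (most light passed) at
    scaled values <= omin, to amin (least light passed) at >= omax."""
    v = voxval * oscale          # apply oscale before comparing to cutoffs
    if v < omin:
        return amax              # let maximum amount of light through
    elif v < omax:
        return amax - (amax - amin) * (v - omin) / (omax - omin)
    else:
        return amin              # let minimum amount of light through
```

For example, with cutoffs (0.2, 0.8) and attenuation range (0.1, 1.0), an empty voxel passes all light, a fully opaque one passes 10%, and intermediate opacities interpolate linearly.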
+
+The intensity class of projections includes \fBptype\fR = 2, 3, 4, 5, and 6.
+The default, \fBptype\fR 2, results in the AVERAGE transformed intensity along
+the projection column, while type 3 yields the SUM of transformed intensities.
+
+Type 4, PROPORTIONAL DISTANCE WEIGHTING, is used in conjunction with the
+\fBdispower\fR parameter to weight the transformed voxel intensities by
+their inverse proportional depth along the projection column.
+If \fBdiscutoff\fR is no, the default, the distance will be that portion of
+the datacube intersected by the projection ray, measured starting at the
+rear (far side from the projection plane). If \fBdiscutoff\fR is yes,
+the distance will be measured between the first and last voxels that fell
+between the cutoffs \fBimin\fR and \fBimax\fR.
+This projection generates a kind
+of depth cueing often useful in determining visually during filmloop playback
+which portions of the rotating image are in the foreground and which in the
+background (and how far). The distance weighting is accomplished as follows,
+where voxposition and totvoxels are determined according to \fBdiscutoff\fR:
+
+.nf
+ \fBptype\fR = 4 (distance weighting):
+ newval = newval * (voxposition / voxelsincolumn) ** \fBdispower\fR
+.fi
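In the same illustrative Python, the \fBptype\fR = 4 weighting is a single multiply per voxel, where the voxel position and column length are determined by \fBdiscutoff\fR as described above (names here are illustrative, not the task's internals):

```python
def distance_weight(newval, voxposition, voxelsincolumn, dispower=2.0):
    """ptype=4: weight a transformed intensity by its fractional depth
    along the projection column, raised to dispower.  voxposition runs
    from the rear of the column (or from the first voxel inside the
    cutoffs when discutoff=yes), so voxels nearer the projection plane
    contribute more."""
    return newval * (voxposition / voxelsincolumn) ** dispower
```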
+
+\fBptype\fR = 5, MODULAR PROPORTIONAL DISTANCE WEIGHTING, is useful for seeing
+farther into the interiors of high-contrast datacubes. Rather than using each
+voxel value along the projection column, only certain voxel values contribute,
+based on the \fBmodn\fR parameter (sometimes it is necessary to artificially
+"thin out" the data to see far enough into or through it).
+
+.nf
+ \fBptype\fR = 5 (modular distance weighting):
+ if (mod (int (newval/val_range * 100)) = \fBmodn\fR)
+ use newval as in normal distance weighting
+ else
+ ignore newval
+.fi
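The thinning test can be sketched as a small predicate. Note that the pseudocode above prints mod() with a single argument; the modulus base used here is an assumed stand-in, not taken from the source, so this is only one plausible reading:

```python
def passes_mod_filter(newval, val_range, modn=10, base=100):
    """ptype=5 thinning test (illustrative reading of the help text).
    The printed pseudocode is "mod (int (newval/val_range * 100)) = modn"
    with mod()'s second argument omitted; `base` here is an assumption.
    Only voxels whose scaled value matches modn contribute."""
    return int(newval / val_range * 100) % base == modn
```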
+
+\fBptype\fR = 6 results in only the LAST transformed voxel intensity that
+is between the \fBimin\fR and \fBimax\fR cutoffs being used. This corresponds
+to seeing only the outer surface of datacube interior regions between the
+cutoffs (though since not every projection ray will pass through voxels
+right on the cutoff boundary, this will not necessarily result in a three
+dimensional intensity contour of an interior object; i.e. the intensities
+of those outer voxels can vary).
+
+OPACITY information can be used in viewing the interiors of 3d images, unlike
+in 2d images. For \fBptype\fR=1, parallel rays of light may be pictured
+shining through the datacube toward the projection plane, along the normal
+to that plane. The voxel values in this
+case are considered to represent a degree of opacity, and a column of light
+will be attenuated by each voxel according to a function of its opacity value
+as the ray proceeds through the volume. The \fBizero\fR parameter provides
+the initial incident "light" intensity before any attenuation. The
+amount of remaining light after projection through the datacube is very
+sensitive to the voxel opacities and the number of voxels in each projection
+column. Consequently, the \fBoscale\fR parameter is supplied to enable
+adjusting the relative attenuation in a single step while scouting for
+the right opacity transformation function to generate the desired effect
+during playback rotation. Given the amount of attenuation
+as determined in the opacity transformation function above, for each
+contributing voxel along the projection column:
+
+.nf
+ projection pixel = projection pixel * attenuation
+.fi
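Accumulated along a projection column, the attenuation is a running product applied to the incident intensity \fBizero\fR; a minimal sketch:

```python
def project_opacity_column(attenuations, izero=1.0):
    """ptype=1: march a ray through one column of voxels, multiplying
    the remaining light by each voxel's attenuation factor
    (0.0 <= attenuation <= 1.0), starting from incident intensity izero."""
    light = izero
    for a in attenuations:
        light *= a
    return light
```

Because the factors multiply, even modest per-voxel attenuation compounds rapidly over long columns, which is why \fBoscale\fR is useful for tuning the overall effect in one step.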
+
+If the input image is 4-dimensional, with 2 elements in the 4th dimension,
+voxel intensities will be added after attenuation
+to contribute to the total projected pixel value (like a cloud
+with both absorption and emission). For
+purposes of visualization only, it is not necessary that the voxel value
+represent a physically real opacity; any data value may be treated as
+attenuating an imaginary xray passing through the solid in order to help
+image the volume in three apparent dimensions.
+
+For all of the projection types, once the modified intensity
+has been determined, it contributes to the output pixel onto which the
+current, arbitrarily-oriented column of voxels projects. To summarize:
+
+.nf
+ 1 OPACITY:
+ proj_pix = proj_pix * attenuation
+ 2 AVERAGE:
+ proj_pix = proj_pix + newval / nvox
+ 3 SUM:
+ proj_pix = proj_pix + newval
+ 4 INVDISPOW:
+ proj_pix = proj_pix + newval * (vox/voxincol)**dispow
+ 5 MOD:
+ if mod (int (newval/val_range * 100.0)) = \fBmodn\fR
+ proj_pix = proj_pix + newval * (vox/voxincol)**dispow
+ 6 LASTONLY:
+ if (\fBimin\fR < newval && newval <= \fBimax\fR)
+ proj_pix = newval
+
+.fi
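The six cases above can be collected into one per-voxel update, sketched here in Python. The names and the packaging into a single function are illustrative; the mod() test is assumed to have been evaluated separately (see the \fBptype\fR = 5 discussion) and passed in as a boolean:

```python
def update_pixel(proj_pix, newval, ptype, nvox, vox, voxincol,
                 attenuation=1.0, dispower=2.0, imin=0.0, imax=1.0,
                 mod_ok=False):
    """Apply one voxel's contribution to the output projection pixel,
    following the per-ptype summary in the help text."""
    if ptype == 1:                    # OPACITY
        return proj_pix * attenuation
    if ptype == 2:                    # AVERAGE
        return proj_pix + newval / nvox
    if ptype == 3:                    # SUM
        return proj_pix + newval
    if ptype == 4:                    # INVDISPOW
        return proj_pix + newval * (vox / voxincol) ** dispower
    if ptype == 5:                    # MOD (mod_ok from the thinning test)
        if mod_ok:
            return proj_pix + newval * (vox / voxincol) ** dispower
        return proj_pix
    if ptype == 6:                    # LASTONLY
        if imin < newval <= imax:
            return newval
        return proj_pix
    raise ValueError("unknown ptype")
```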
+
+.ih
+PERFORMANCE AND SIZE CONSTRAINTS
+
+Projections through 3d images inherently require large amounts of memory,
+or else the tasks will spend all their time thrashing with I/O. In volume
+rotations about the X-axis, each output pixel is derived by projecting at
+an arbitrary angle through a YZ slice of the input image. Because of otherwise
+excessive thrashing, PVOL requires sufficient memory for at least one YZ
+slice. The more YZ slices that will fit into memory at one time, the better,
+because I/O is more efficient the larger the chunk of the image that can
+be read at one time. It is best if the entire image will fit into memory,
+as the output image (all rotations) will not have to be reread for each
+successive chunk of YZ slices. Available memory is the amount PVOL can
+actually allocate for the slices plus one line of the output image. On a
+workstation running a window environment there will usually be considerably
+less memory available to PVOL than the amount physically in the machine.
+Examples of the number of YZ slices that will fit based on image size and
+available memory follow; image datatype is assumed to be REAL -- multiply
+number of YZ slices by 2 for SHORT images.
+
+.nf
+ Usable Memory Image Size Approx YZ Slices
+ ------------------------------------------------
+ 1 Mb 64*64*64 64 (whole image)
+ 1 Mb 512*512*512 1
+ 4 Mb 101*101*101 101 (whole image)
+ 4 Mb 1024*1024*1024 1
+ 8 Mb 128*128*128 128 (whole image)
+ 8 Mb 1448*1448*1448 1
+ 16 Mb 161*161*161 161 (whole image)
+ 16 Mb 2048*2048*2048 1
+ 32 Mb 203*203*203 203 (whole image)
+ 32 Mb 2896*2896*2896 1
+ 64 Mb 256*256*256 256 (whole image)
+ 128 Mb 322*322*322 322 (whole image)
+ 512 Mb 512*512*512 512 (whole image)
+.fi
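The table can be reproduced with a small calculation: a YZ slice of an NX*NY*NZ image occupies NY*NZ*(bytes per pixel) bytes, and the number of slices that fit is the integer quotient, capped at NX. A sketch, assuming 4-byte REAL and 2-byte SHORT pixels:

```python
def yz_slices_in_memory(usable_bytes, nx, ny, nz, bytes_per_pixel=4):
    """How many whole YZ slices of an nx*ny*nz image fit in usable
    memory.  bytes_per_pixel is 4 for REAL, 2 for SHORT (hence twice
    as many SHORT slices fit, as noted in the text)."""
    slice_bytes = ny * nz * bytes_per_pixel
    return min(nx, usable_bytes // slice_bytes)
```

For example, 1 Mb holds all 64 YZ slices of a 64*64*64 REAL image, but only one 512*512 slice of a 512*512*512 image.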
+
+PVOL checks to see how much memory it can grab, then actually allocates
+somewhat less than this (otherwise you wouldn't be able to do anything
+except run IRAF tasks already loaded in the process cache until PVOL
+finishes). With \fBverbose\fR on, the task reports memory usage figures.
+On some machines the system will continue to allocate more memory for a
+task even above that reported by PVOL. This can be a problem if you fire
+up PVOL from a workstation (even with lots of windows already open);
+after you log out, the system may grab that extra memory you were using,
+and not even let you back in later. This is why the \fBmaxws\fR
+parameter is supplied -- lower it if this type of behavior is experienced.
+
+.ih
+EXAMPLES
+
+.nf
+1. Produce 36 rotation projections (one every 10 degrees) around the
+ x-axis of a datacube, viewed from the front (negative z
+ direction). Assume that the single-valued input voxel values
+ are intensities, and that the image header contains MIN and MAX.
+
+ cl> pvol input output
+
+2. Generate 180 frames, one every two degrees.
+
+ cl> pvol input output nframes=180 degrees=2
+
+3. Use inverse proportional distance cubed weighting in two
+ subsampled projections for a quick look. Distance-weight
+ only between projection voxels falling within the specified
+ cutoffs (0.1 to 1.0).
+
+ cl> pvol input[*:4,*:4,*:4] output nfr=2 deg=90 ptype=4 \
+ dispower=3 discutoff+ imin=.1 imax=1.0
+
+4. Project through a 4d image containing opacity information in
+ element 2 of the 4th axis and intensity in element 1. Scale
+ the voxel opacities by 0.1 to allow more light through. Use
+ the SUM of the voxel intensity values (which will be attenuated
+ by subsequent opacities), with no distance weighting.
+
+	cl> pvol input output ptype=3 opacelem=2 oscale=0.1
+
+.fi
+
+.ih
+TIMINGS
+
+1min 12sec cpu on an unloaded Sun-4 to produce
+36 rotation increments around a 50*50*50 datacube with \fBptype\fR=2
+(uses less than 1 Mb of memory for image data); 46sec for \fBptype\fR=1;
+2min 19sec for \fBptype\fR=4.
+
+4min 32sec cpu on an unloaded Sun-3 with 8 Mb memory to do 36 steps around a
+50*50*50 datacube with \fBptype\fR=2 (also uses less than 1 Mb);
+3min 20sec for \fBptype\fR=1; 10min 51sec for \fBptype\fR=4.
+
+17hr 20min cpu on a Sun-4 to do 36 rotation steps around a 450*450*450
+datacube with \fBptype\fR=4.
+
+.ih
+BUGS
+
+Maximizing memory usage without adversely impacting other functions can be
+tricky. Adverse effects may result from using too high a \fBmaxws\fR.
+
+Cannot yet rotate around an arbitrary axis.
+
+Lacks a shading algorithm.
+
+Needs easier user interface to adjust translucency parameters (e.g. with
+mouse when workstations become fast enough to do this in real time).
+
+.ih
+SEE ALSO
+i2sun, im3dtran, im3dstack
+.endhelp
diff --git a/pkg/proto/vol/src/doc/volumes.hlp b/pkg/proto/vol/src/doc/volumes.hlp
new file mode 100644
index 00000000..4ebb4aeb
--- /dev/null
+++ b/pkg/proto/vol/src/doc/volumes.hlp
@@ -0,0 +1,56 @@
+.help volumes Jan89 "Volumes Package"
+
+***** NOTE: This is just a suggested package organization and will
+***** definitely NOT be the final one chosen.
+
+.ce
+Volume or 3d Image Applications in IRAF
+.ce
+January 1989
+
+.sh
+Introduction
+
+The Volumes package collects tasks related to manipulating and displaying
+volume images (3d images, or datacubes). Although all IRAF images can be
+multidimensional (currently up to 7 dimensions), not all applications tasks
+are equipped to handle images of dimension greater than 2. Examples of
+tasks that are so equipped are IMARITH, IMSTATISTICS, BLKAVG, and DISPLAY
+for looking at arbitrary 2d sections of higher dimensional images.
+
+Volumes applications include tasks for manipulating the orientation of
+a 3d image, joining 3d images, projections of datacube contents
+onto 2d images, and tasks related to viewing a datacube or its projections
+as a movie.
+
+.ih
+Datacube Manipulation Tasks
+
+D3TRANSPOSE 3d transpose, any axis to any other axis
+IMJOIN join 2 or more 3d images together along specified axis
+IMCOPY
+BLKAVG
+IMSLICE
+
+.ih
+Datacube Generation Tasks
+
+BINTOIM [not in VOLUMES; probably still PROTO after upgrade to 3d?]
+POINTOIM convert n-dimensional point data into volumes in datacube
+MANDEL4 4d Mandelbrot set generator
+
+.ih
+Volume Projection Tasks
+
+PVOL project volume contents onto series of 2d images
+SLICEVOL* "cubetool" -- slice off faces of datacube rendered from
+ arbitrary angle w/translucency
+
+.ih
+Movie-Related Tasks
+
+IMTOSUN convert datacube or list of 2d images into Sun rasterfiles
+IMTOVID	(script) record set of 2d images onto Panasonic video recorder
+CUBETOVID	(script) record slices from datacube onto video recorder
+
+* = [not yet implemented]