This post discusses the steps taken in processing NGC 4567 and NGC 4568 – Siamese Twins.
Acquisition
Subexposures for this image were taken on 15 different nights from our remote observatory in Fort Davis, Texas, using a Planewave CDK 17″ f/6.8 telescope on an Astrophysics 1100GTO mount and an FLI PL16803 CCD camera. ACP was used to take 120 20-minute subexposures in LRGB, totaling 40 hours of integration.
Processing Strategy
PixInsight and Photoshop were used to process this image. PixInsight excels at preprocessing and integration, noise reduction, deconvolution, color calibration and color combination. Photoshop excels at targeted detail extraction, sharpening, noise reduction and final color balancing.
Preprocessing
File Types Used throughout processing
Individual subs taken with our imaging systems are saved as FITS files. Once we begin processing these subs in PixInsight, we use their .xisf file format. .XISF files are saved with the following settings.
Finally, we use 16-bit TIFF images to move files back and forth between PixInsight and Photoshop.
BatchPreProcessing (BPP) Script
Our preprocessing of this image began using PixInsight’s Batch Preprocessing (BPP) script. This was the first time we did not create master bias and master dark frames beforehand; instead we added all of the individual bias and dark subs into the BPP script. The reason for this is to avoid clipping when calibrating the dark frames or master dark frame (see section 4 of this post for details: https://pixinsight.com/forum/index.php?topic=11968.msg73522#msg73522). We use the BPP script only for image calibration. Although the script can also perform image registration and integration, we include several other manual steps in our preprocessing before registration.
Blinking Images
Once we have our images calibrated, we review them using PixInsight’s Blink process. This process reveals any subs that should be thrown away because of apparent issues such as clouds, poor tracking, and other visual anomalies. We are careful to only blink subs from a single filter so that there is a valid visual comparison across subs. Rejected subs are moved into their own sub-folder (the Blink process handles this) so they won’t be used in successive steps.
Cosmetic Correction
PixInsight’s CosmeticCorrection process is optional, and is primarily used to remove hot/cold pixels. We don’t have serious hot/cold pixel issues with our cameras, and hot/cold pixels are usually effectively rejected during image integration. However, we do use this process to remove defect columns. This hasn’t been an issue with our PL16803 CCDs, so we did not perform CosmeticCorrection on the Siamese Twin subs. We usually require column defect correction for our ML16200 and ML/PL29050 CCDs.
Subframe Selection and Weighting
Next we grade our subexposures using the SubframeSelector (SS) process. This shouldn’t be confused with the SubframeSelector script, which has been a part of PixInsight for years; the SS PCL process is relatively new. If you do not see the SS process listed in your Process menu, you will have to download and manually install the PCL process. You can download the SubframeSelector PCL process for both Mac and PC from this post: https://pixinsight.com/forum/index.php?topic=11780.0. Make sure to read through the entire discussion so that you download the latest version. The SS process runs much more quickly than the script, saving a great deal of time during preprocessing. It also caches your image data, so returning to a group of subexposures doesn’t require remeasuring the set. To use the SS process, enter the system parameters for your system, camera and bin level, and run one filter/bin combination at a time. After measuring the subexposures, we review the graphs and charts to exclude any subs that don’t meet our requirements. This can be done automatically with an Approval formula; however, we prefer reviewing the data ourselves. For example, we look at the Star Support graph or table. If most of the subs have 4,000 stars but a few have 1,000, then it’s pretty clear there was either a focus issue, clouds, or humidity for the subs with fewer stars. We can eliminate those subs without fear of throwing away good data. In short, we look for outliers that don’t fit with the bulk of the data. We hate throwing away data, but adding poor data is worse than integrating less data.
Once we have excluded any undesirable subs, we use the following Weighting formula to output the approved subs, with the weight captured in a new FITS keyword “SSW”:
(FWHM*((1/(FWHM*FWHM))-(1/(FWHMMax*FWHMMax)))/((1/(FWHMMin*FWHMMin))-(1/(FWHMMax*FWHMMax)))+Eccentricity*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin))+SNRWeight*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin)+(100-(FWHM+Eccentricity+SNRWeight)))/100
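Spelled out, the formula normalizes each of the three metrics to [0, 1] and adds a constant term that keeps every weight positive. For readers who want to sanity-check weights outside PixInsight, here is a direct Python transcription (the sample metric ranges below are hypothetical, not measurements from this image):

```python
def ssw_weight(fwhm, ecc, snr,
               fwhm_min, fwhm_max, ecc_min, ecc_max, snr_min, snr_max):
    """Transcription of the SubframeSelector weighting formula above.

    Each term rescales one metric to [0, 1] (higher is better) and scales
    it by the metric's own value; the trailing constant keeps all weights
    positive. A sub that is best on all three metrics scores exactly 1.0.
    """
    fwhm_term = fwhm * ((1 / fwhm**2 - 1 / fwhm_max**2)
                        / (1 / fwhm_min**2 - 1 / fwhm_max**2))
    ecc_term = ecc * (1 - (ecc - ecc_min) / (ecc_max - ecc_min))
    snr_term = snr * ((snr - snr_min) / (snr_max - snr_min))
    return (fwhm_term + ecc_term + snr_term
            + (100 - (fwhm + ecc + snr))) / 100

# Hypothetical session: FWHM 2-4, eccentricity 0.3-0.6, SNRWeight 10-20.
best = ssw_weight(2.0, 0.3, 20.0, 2.0, 4.0, 0.3, 0.6, 10.0, 20.0)
worst = ssw_weight(4.0, 0.6, 10.0, 2.0, 4.0, 0.3, 0.6, 10.0, 20.0)
```

Note that the weight never reaches zero: even the worst approved sub keeps a substantial weight from the constant term, which is why rejecting genuinely bad subs still has to happen at the Approval stage rather than through weighting.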
We do one additional critical step during the SS process: we note the best subexposure for each filter. We usually select the subexposure with the most stars, but we also make sure that the FWHM for the selected sub is good. The selected sub for each filter will be used in a later step when we apply the LocalNormalization process, and the best L sub will be used as the reference image during StarAlignment. The subs we selected for the Siamese Twins image were:
- L and Registration: NGC_4567_date_20180213_time_044513_Lum_exp_1200s_bin_1_002_c
- R: NGC_4567_date_20180221_time_031707_Red_exp_1200s_bin_1_001_c
- G: NGC_4567_date_20180221_time_022922_Green_exp_1200s_bin_1_002_c
- B: NGC_4567_date_20180221_time_045211_Blue_exp_1200s_bin_1_002_c
Note: for those still using the SubframeSelector script, the above formula will not work. Instead, we use a spreadsheet posted in various locations online by David Ault to create our formula based on SubframeSelector script measurements.
StarAlignment for Image Registration
We use the StarAlignment process to align all of our subexposures. We use the best L frame as our reference image, and keep the rest of PixInsight’s defaults. For this image we did not Drizzle the data.
LocalNormalization
After registration, we run the LocalNormalization process on our subs, one filter at a time. For each filter group, we use the best subexposure of that group (as noted during the SubframeSelector process) as the reference image. Running this process now will provide better rejection and noise reduction when the subexposures are integrated. This process takes a bit of time to run, but is fully automated. The only change we make to the process settings is to set the scale to 256 (from the default 128).
ImageIntegration
We are finally ready to integrate our individual filter subexposures using the ImageIntegration process.
This screenshot shows our typical settings for ImageIntegration:
Notes on ImageIntegration settings:
- After adding your files for integration, make sure you add the LocalNormalization files (see the blue box in the screenshot)
- Make sure to select FITS Keyword for weighting, and enter the correct keyword you applied during the SubframeSelector process (we use “SSW”)
- We’ve found that by tweaking the Buffer Size and Stack Size settings we can greatly improve the speed of this process. We set Buffer Size (in MB) to just larger than the size of the files being integrated. At this stage of our Siamese Twins processing our .xisf files were each 65MB, so we set Buffer Size to 66MB. For Stack Size, the larger the better, but this depends on available system RAM; we give this process 20GB of the RAM on our machine. You can leave these at the defaults with no ill effect other than the overall speed of the process
- Since we typically have more than 20 subexposures for each filter, we use Linear Fit Clipping as our rejection algorithm. For Normalization, remember to select Local Normalization so that the LocalNormalization files are applied
- In Pixel Rejection settings, we find that using values of 4 and 2 for Linear fit low/high provide the best rejection of satellite trails, cosmic rays, etc.
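As a back-of-the-envelope check on the Buffer Size figure above: a full-frame mono image from a 4096 × 4096 sensor such as the KAF-16803 in our PL16803, stored as 32-bit floats, carries 64 MiB of pixel data, so a 65MB file (the extra megabyte being header overhead) calls for a buffer of 66MB. The 32-bit float sample format is an assumption about how the calibrated .xisf files were saved:

```python
# Rough Buffer Size estimate for ImageIntegration. The 4096 x 4096 geometry
# matches the KAF-16803 sensor; 32-bit float samples are an assumption.
width, height = 4096, 4096
bytes_per_sample = 4                               # 32-bit floating point
frame_mib = width * height * bytes_per_sample / 2**20
buffer_mb = int(frame_mib) + 2                     # just larger than the file
print(frame_mib, buffer_mb)                        # 64.0 MiB -> 66 MB buffer
```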
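To see why linear fit clipping is so effective against one-frame events like satellite trails, here is a toy, non-iterative Python sketch of the idea for a single pixel's values across the stack. PixInsight's actual implementation is iterative and far more robust, so treat this as an illustration only:

```python
import numpy as np

def linear_fit_clip(values, low=4.0, high=2.0):
    """Toy linear-fit clipping: fit a straight line through one pixel's
    values across the subexposures, then reject samples deviating more
    than `low` sigma below or `high` sigma above the fitted line."""
    x = np.arange(len(values), dtype=float)
    a, b = np.polyfit(x, values, 1)              # per-pixel linear trend
    resid = values - (a * x + b)
    sigma = resid.std()
    keep = (resid >= -low * sigma) & (resid <= high * sigma)
    return values[keep]

# One pixel across ten subs; the 5.00 sample is a satellite trail.
pixel = np.array([1.00, 1.02, 0.98, 1.01, 0.99,
                  5.00, 1.00, 1.01, 0.99, 1.02])
clean = linear_fit_clip(pixel)
print(clean)   # the 5.00 outlier is rejected, the other nine survive
```

The asymmetric defaults (4 low, 2 high) reflect that transient artifacts such as trails and cosmic rays are almost always bright, so the high side can clip more aggressively.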
Postprocessing – PixInsight
Following are steps we took preparing the LRGB image in PixInsight.
Dynamic Crop
PixInsight’s DynamicCrop process allows you to precisely crop all of your master images to the same crop region. To do this, open all of the master images (L, R, G, B). Using the DynamicCrop process, draw your desired crop region on any one of the images (here we’ve drawn it on L). Take care to observe how this crop would look on all four images, making sure that all dark edges will be cropped out from all images with the crop you are drawing. Once you are satisfied with the crop region, DO NOT apply the crop. Instead, drag the Instance triangle from the bottom left of the DynamicCrop process window onto the PixInsight workspace. This will save the precise location/dimension settings of the crop area. You can rename this saved instance by right clicking on it, and even save it to your hard drive (this might be useful if you intend to add more data in the future and would want to match the crop of this old data). Once you have the instance on your workspace, cancel the DynamicCrop process without applying it to the L image. Now, all you have to do is drag the DynamicCrop instance on each of the four images, and the precise crop will be applied to each of them.
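Conceptually, dropping the saved DynamicCrop instance onto every master is just slicing the identical rectangle out of each channel. A sketch with hypothetical coordinates (the real values live in the saved process icon):

```python
import numpy as np

# One crop rectangle (hypothetical coordinates), applied unchanged to all
# four masters, exactly as dropping the saved instance on each image does.
crop = (slice(10, 500), slice(16, 496))            # (rows, cols)

masters = {name: np.zeros((512, 512)) for name in "LRGB"}
cropped = {name: img[crop] for name, img in masters.items()}
shapes = {img.shape for img in cropped.values()}
print(shapes)   # a single shape: every channel keeps identical geometry
```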
You’ll want to save the newly cropped images with a new name (we typically append “_crop”, as in “L_crop”). Here is the L channel before and after applying Dynamic Crop:
L Channel Processing
Processing of the cropped L (detail) channel typically follows these steps:
- DynamicBackgroundExtraction (DBE)
- Noise reduction (linear)
- Deconvolution
- HistogramTransformation (stretching)
- Noise reduction (nonlinear)
- Curves and additional HistogramTransformation
DynamicBackgroundExtraction (DBE)
The DBE process is unique for each image, and for each channel. We didn’t save an interim image showing our DBE for the Siamese Twins. There are many good tutorials about DBE on the web, but this is one area where we could use some improvement. In any case, DBE should be used to remove as much of the uneven background as possible (from external light sources, imperfect flat field removal, etc.).
Noise Reduction (linear)
Noise reduction while the image is in its linear state is critical, as it will allow you to more aggressively apply deconvolution and stretch the image without exaggerating noise. We use the noise reduction techniques described in this excellent tutorial for linear noise reduction: https://jonrista.com/the-astrophotographers-guide/pixinsights/effective-noise-reduction-part-1/. Here is a comparison of the L image before and after noise reduction. Note, this is PixInsight’s default stretch:
As you can see, there is a definite improvement in the overall noise profile. Not all of the noise is removed. We will take additional steps for noise reduction later.
Deconvolution
In March, 2018 we were contacted by Adam Block asking us if we would test out his new deconvolution video. The video is part of his suite of PixInsight processing tutorials, so we cannot share it here. His detailed demonstration and explanations have greatly improved the results we get from deconvolution. In short, the process requires creating a Point Spread Function (PSF) from stars in the image, an overall image luminance mask for targeting areas for sharpening, and a local support mask to protect larger stars from the deconvolution process. Each image requires different tweaked settings. Adam’s video explains how the settings work, and shows how to build each of these three components required for the process. Here is the L channel before and after deconvolution:
Detail has clearly been enhanced. Just as importantly, larger stars remain untouched. Stars in front of the galaxy display no dark ringing (this is due to a well made local support star mask, and to using proper local deringing settings within the Deconvolution process).
Additional Noise Reduction
At this stage in processing, we sometimes do another round of noise reduction. However, in this case we did not.
HistogramTransformation (stretching)
The HistogramTransformation process is used to stretch the image. With some galaxy images which have very high dynamic range we perform three stretches (one for the core, one for the main portion of the galaxy and one for the faint outer areas), save each of the stretches as a TIFF file, and blend them together using masks in Photoshop before bringing them back into PixInsight for further processing (see our M 101 processing walkthrough for an example). For the Siamese Twins this wasn’t necessary; the initial stretch brought out detail everywhere without blowing out the core. Here is the image after stretching:
Noise Reduction (nonlinear)
At this point, we carefully applied a round of nonlinear noise reduction in two steps. To protect areas of detail in the image, we created a clipped luminance mask. We first duplicated the original image, then used the auto clipping shadows and highlights buttons in the HistogramTransformation process to exaggerate the contrast.
This high contrast image was then applied to the original image and inverted as a mask. The red areas will be protected from noise reduction processes:
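The mask construction above can be sketched numerically: clip the luminance between shadow and highlight points (which is what the auto-clip buttons in HistogramTransformation do), rescale to [0, 1], then invert. The threshold values below are hypothetical:

```python
import numpy as np

def clipped_luminance_mask(lum, shadows, highlights):
    """Sketch of the clipped luminance mask described above: clip between
    the shadow/highlight points, rescale to [0, 1], then invert so that
    bright, detailed areas end up dark (protected from noise reduction)
    while the background stays bright (exposed to it)."""
    stretched = np.clip((lum - shadows) / (highlights - shadows), 0.0, 1.0)
    return 1.0 - stretched

# Four representative pixels: background ... galaxy core.
lum = np.array([0.05, 0.10, 0.80, 0.95])
mask = clipped_luminance_mask(lum, shadows=0.10, highlights=0.80)
print(mask)   # background -> 1.0 (noise-reduced), core -> 0.0 (protected)
```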
We applied two processes to the masked image: AtrousWaveletTransformation and ACDNR. Our settings are shown here:
Here is a before/after of our stretched L image showing noise reduction results:
The L image is now set aside while we process the color information.
RGB Color Channel Processing
Processing of the cropped R, G and B (color) channels typically follows these steps:
- DynamicBackgroundExtraction (DBE)
- Noise reduction (linear)
- LinearFit
- ChannelCombination
- PhotometricColorCalibration
- HistogramTransformation
- CurvesTransformation and additional HistogramTransformation
However, in reviewing the processing of this image, for some reason we changed the order of the first three steps: we began with linear noise reduction, then performed a linear fit, and finally DBE. This is not our standard order, but we’re pleased with the results.
Noise Reduction (linear)
Linear noise reduction for each of the color channels follows exactly the same process as described above for the L channel. We can afford to be more aggressive with our noise reduction, as our image detail will come primarily from the L image. Here is the Red channel before and after linear noise reduction:
LinearFit
PixInsight’s LinearFit process is used to ensure that the relative strength of signal in each of the color channels is similar. This helps to ensure a proper color balance when combining the channels into an RGB image. Using the HistogramTransformation process, we examine the histogram of each of our R, G and B images. The image with the signal furthest to the right in its histogram will become our reference, and the other two images will be LinearFit to match the reference image. In this case, the R image had the strongest signal, so using that as the reference image we applied LinearFit to G and B, and saved these. Visually we don’t see any difference in the images, but the histograms will now match better when we combine the channels.
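The idea behind LinearFit can be sketched in a few lines of NumPy: find the gain and offset that best map the target channel onto the reference, then apply them. PixInsight's implementation additionally rejects outlying pixels iteratively, so this is a sketch of the concept only:

```python
import numpy as np

def linear_fit(target, reference):
    """Simplified stand-in for PixInsight's LinearFit: solve for the gain
    and offset that best match the target channel to the reference in a
    least-squares sense, then apply them to the target."""
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b

rng = np.random.default_rng(0)
red = rng.random((64, 64))                  # reference (strongest signal)
green = 0.5 * red + 0.02                    # dimmer channel, same structure
green_fit = linear_fit(green, red)
print(np.allclose(green_fit, red))          # the histograms now line up
```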
DynamicBackgroundExtraction (DBE)
As mentioned earlier, normally we would not apply DBE after linear fit, but for whatever reason we did. We usually first set up DBE on the R channel, but before applying it we save an instance of the DBE process to the workspace. Later when applying DBE to the G and B channels, we can double click this saved instance, and it will provide a very good starting point for DBE’s point placement on these other channels. Make sure when applying DBE to select Subtraction as the Target Image Correction. Applying DBE can make it appear that the image is gaining noise, but in reality removing gradients is merely revealing noise which is already present. Here is the Red channel before and after DBE (note: these are default stretches):
ChannelCombination
The individual color channels are now ready to be combined. The ChannelCombination process is used with default settings to combine the R, G, and B channels (right) into a single RGB channel (left). You can begin to see a hint of color in the combined image but it’s clear that much more color work is required:
PhotometricColorCalibration
Color calibrating the image is a critical step in attaining proper color balance. We used to use two processes to do this: BackgroundNeutralization and ColorCalibration. However, PixInsight has added a PhotometricColorCalibration process, which uses actual color balance data from the region of the sky your image comes from to apply a proper white balance. To use this process, enter your system’s focal length and your camera’s pixel size, and search your object’s coordinates using the search tool. Applying the process will plate solve your image, calculate the proper white balance, and then apply the color calibration to your image:
HistogramTransformation
The RGB image is now ready for the HistogramTransformation (stretching) process. As you can see, after stretching there is definitely color present (especially the yellowish star near the top), but at this stage we pay more attention to exposure than color saturation when stretching. If we push color too much in the HistogramTransformation processing we risk oversaturating the image, resulting in an unnatural appearance. We want to maintain translucency in the fainter portions of the galaxy.
CurvesTransformation
We use the CurvesTransformation process to push the color hidden within the image. We sometimes look at various images online to get a feeling of how blue or dusty brown a galaxy should be, and tweak individual color channels and overall saturation using the CurvesTransformation process. Here is our image before and after CurvesTransformation. First, the entire image, then a closeup of the Siamese Twins:
We use the stars as our main indicator of how far to push the curves, and while we know there is more color to be had, we will attack this in a more controlled manner using Photoshop and Nik Plugins.
Sometimes during stretching and pushing the color with curves we end up with color areas in the background that clearly are not correct; where they should be neutral gray they take on a red, green or blue cast. This often results when DBE hasn’t done a perfect job of removing background gradients. When the gradients don’t match from R to G to B, you end up with patches of background that take on the color of the channel with excess background data when highly stretched. There are different ways to deal with this. You could apply a luminance mask before pushing the color, only allowing the color to be pushed on galaxies and stars. Or you could just let it happen. We find that later, in Photoshop, we can use the Nik Viveza plugin to neutralize or desaturate the background, targeting only those areas which have taken on such colors. We can also even out background brightness differences using Nik Color Efex Pro, targeting the brighter areas with control points and bringing them down with levels or curves to match the rest of the image background. Of course, if you have nebulosity or IFN in your image background you must be very careful when using these methods. True signal should never be destroyed.
LRGB Processing
Processing of the final LRGB image in PixInsight typically follows these steps:
- LRGBCombination
- CurvesTransformation
LRGBCombination
The LRGBCombination process is used to blend our Luminance detail image into our RGB color image. To use this process, we open the final L and RGB images, and we make certain that the RGB image is the active one by clicking it. This will ensure the L gets added into the color, rather than the other way around. In the LRGBCombination process window we select only our L image, then we apply the process to the color image. We play with the Lightness and Saturation sliders (lower is more), and we leave the Channel Weights untouched. When playing with this tool, we uncheck Chrominance Noise Reduction to speed up testing, but once we settle on our final slider numbers we turn this noise reduction back on. This image shows our LRGBCombination settings, with the L and RGB images at bottom, and the resulting LRGB image behind them:
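For intuition about what LRGBCombination does, here is a deliberately crude sketch: rescale each pixel's RGB so its luminance matches the processed L channel while keeping the hue. PixInsight actually works in CIE color spaces with the separate Lightness and Saturation controls mentioned above, so this illustrates the concept, not the algorithm:

```python
import numpy as np

def lrgb_combine(rgb, L, eps=1e-6):
    """Crude sketch of LRGB combination: scale each pixel's RGB so its
    luminance matches the processed L channel. The channel mean is only a
    rough luminance proxy; PixInsight uses CIE lightness instead."""
    lum = rgb.mean(axis=-1, keepdims=True)           # crude luminance proxy
    return np.clip(rgb * (L[..., None] / (lum + eps)), 0.0, 1.0)

rgb = np.full((2, 2, 3), 0.2)    # flat gray color image
L = np.full((2, 2), 0.5)         # brighter, detail-rich luminance
out = lrgb_combine(rgb, L)
```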
CurvesTransformation
We apply one more round of CurvesTransformation to enhance overall color. In some cases, we also apply an additional round of HistogramTransformation to lighten or darken the background, but in this case we didn’t need to. Here is the before/after of our final CurvesTransformation (note: we are very careful not to lose fine detail when pushing color):
The LRGB image is saved as a 16-bit TIFF for use in Photoshop. Make sure to uncheck the “Associated Alpha Channel” box.
Postprocessing – Photoshop
Following are steps we took finalizing the LRGB image in Photoshop. We used several Nik Collection plugins and tools, as well as some Photoshop tools. The changes we made were mostly subtle, with the most noticeable being drawing out the detail and signal toward the outer edges of the Siamese Twins.
Here is the image before and after Photoshop processing:
After these adjustments, we made a final crop for a tighter presentation of the Siamese Twins, and then fractally upsized the image. Here is the final cropped image (full image details can be found here):
Last Updated: 23rd October 2015
A common approach to astrophotography has become the use of digital SLR cameras (DSLRs). These are relatively cheap, can be used for both astronomy and ordinary terrestrial photography, and produce surprisingly good astronomy images, so they have become quite popular.
There are a few basic steps required for getting started in DSLR astrophotography. I would summarise them as:
1. Buy a camera
2. Buy a tripod, telescope or other tracking platform
3. Acquire a piece of software to help take long exposure photographs
4. Acquire a piece of software to process (including stack) the photographs you take.
The question often arises from the above of what piece of software to use for stacking and processing the resulting images that you take using your camera. Or, also often the case, people don’t realise that there is software available to make this easy. So here I am going to list a few options, hopefully making it easier for anyone who finds this page.
If you know of programs which are suitable for DSLR astrophotography image processing that are not on this list please let me know, also let me know if information here needs updating. Thank you.
Software suitable for stacking and/or processing astrophotography DSLR images:
Deep Sky Stacker
This is a free and very capable piece of software for aligning, combining and performing post-processing of astrophotographs from digital SLR cameras, and it is amazingly capable for something that costs nothing.
This software will read a wide variety of file formats, including Canon RAW format, and process them. I have had some issues with getting good colour balance post-stacking when processing Canon RAW files, so I often choose to first convert the RAW files to TIF before processing. This may simply be a lack of experience on my part, as I do not use this software often.
The registering capabilities of Deep Sky Stacker are very good but do not match those of RegiStar or PixInsight when it comes to getting a good alignment of frames. There are often cases where DSS will not correctly align frames whereas RegiStar and PixInsight will.
I don’t tend to like the post-processing capabilities of Deep Sky Stacker, so I tend to finish my use of DSS at the point it has stacked the “Autosave.tif”, and take that file into Photoshop from there to perform post-processing.
Deep Sky Stacker’s biggest advantages are probably its ease of use (a very intuitive interface) and its flexibility, supporting all major file formats and handling scenarios covering most astrophotography needs.
Find Deep Sky Stacker here: http://deepskystacker.free.fr/english/index.html
Starry Landscape Stacker
This is an Apple/Mac program and a great option for those who do not use Windows. It is effectively a good alternative to Deep Sky Stacker for those who use Macs.
Find Starry Landscape Stacker here: https://itunes.apple.com/au/app/starry-landscape-stacker/id550326617?mt=12
PixInsight
PixInsight is an advanced astrophotography image processing piece of software. I now have some experience using PixInsight for processing CCD images from an SBIG ST8-XME camera and RAW CR2 files from a Canon 6D DSLR and can certainly see the potential of the software.
If you want a one-stop-shop for astrophotography image processing and you are happy to spend the $250 on PixInsight, there’s a very good chance you need none of the other pieces of software listed on this page. Having said that, you will be up for a steep learning curve.
PixInsight operates in a very different way to other software. They even seem to put buttons on dialogue boxes the opposite way round to what is most common, just to confuse the user. The difference in how processing is done, and the user interface, makes the learning curve very steep and troubling at first. There are video tutorials online which are almost essential for getting an understanding of how to use the software before you lose your hair trying, but once conquered it proves to be very powerful. It took me a few attempts, coming back to PixInsight over a few months, before I became familiar enough with it, stopped hitting brick walls, and could process FIT and DSLR images with some confidence.
Functions such as applying a LinearFit across LRGB frames, and the Dynamic Background Extraction function on any image to flatten image backgrounds are particularly useful and relatively easy to use once you understand the basics of the PixInsight user interface.
Where other processing software has failed to produce a good result from DSLR images (software such as DSS, RegiStar and Photoshop), PixInsight has excelled and brought out more detail in images than I realised existed in the raw data.
There is no doubt to my knowledge that PixInsight is the most advanced piece of software for stacking astrophotography deep sky images. Its set of processes and plugins is both extensive and powerful. The catch is only in its usability and how patient you must be to work through its steep learning curve to achieve good results.
I would suggest if you are going to use PixInsight, start with DSS and understand the basics of astrophotography image processing before you begin the daunting process of understanding how to use PixInsight. Also, if you have easy to align good quality images then you will likely get a very good result from DSS in a much quicker time frame than PixInsight which will require you to perform more steps.
If you want to process DSLR images with PixInsight you will need a beefy machine to run it on. It will easily consume all 16 gigabytes of RAM on my Core i7 64-bit Windows machine when processing a stack of 20 DSLR images. Programs such as RegiStar work in a significantly smaller footprint.
PixInsight is available as a 45-day free trial.
Find PixInsight here: http://www.pixinsight.com/
StarStaX
StarStaX is a multi-platform image stacking software. From their website: https://www.markus-enzweiler.de/StarStaX/StarStaX.html
StarStaX is a fast multi-platform image stacking and blending software, which allows you to merge a series of photos into a single image using different blending modes. It is developed primarily for Star Trail Photography, where the relative motion of the stars in consecutive images creates structures looking like star trails. Besides star trails, it can be of great use in more general image blending tasks, such as noise reduction or synthetic exposure enlargement.
StarStaX has advanced features such as interactive gap-filling and can create an image sequence of the blending process which can easily be converted into great looking time-lapse videos.
StarStaX is currently under development. The current version 0.70 was released on December 16, 2014. StarStaX is available as a free download for Mac OS X, Windows and Linux.
Find StarStaX here: https://www.markus-enzweiler.de/StarStaX/StarStaX.html
CCDStack
CCDStack is one of a suite of products made by CCDWare aimed at advanced telescope use.
I have used CCDStack a reasonable amount now for processing images from my ST8-XME astronomy camera and find it very usable and relatively powerful. I like features such as being able to see what data is being rejected by a sigma function on light frames, quickly and easily, compared to PixInsight, which shows you no preview before processing the full stack. This makes it very easy to tweak stacking parameters for a good result and to apply different filtering to individual frames (such as applying harsher exclusion to a frame when a satellite passes through it).
CCDStack will easily, in only a handful of steps, register your frames, normalise (apply weighting to) frames, apply data rejection and combine the frames into a stack using weighting determined by the normalisation.
I found CCDStack to be a good and logical step up from CCDSoft. It is usable and has intuitive and useful functionality. The program seems relatively lightweight too, working efficiently with a large number of files.
I have not tried CCDStack for DSLR images. It does apparently open CR2 RAW files (amongst other formats) however in my quick attempt it did not open CR2 files from my Canon 6D (I’m unsure why).
Find CCDStack here: http://www.ccdware.com/products/ccdstack/
Astro Pixel Processor
Astro Pixel Processor is a complete image processing software package: https://www.astropixelprocessor.com/
TBA on details – I’m still testing this one!
Maxim
I primarily use MaximDL for image reduction, as its image reduction process is very painless. Provide it with a directory of all your reduction .FIT files and it will nicely sort them into a database of reduction groups to be applied to any image you open. Open the .FIT needing to be calibrated/reduced and it will apply the appropriate reduction frames without you choosing reduction files of the correct temperature, binning, etc. This is significantly easier than any of the other packages, which all require you to do more manual work with reduction frames. The benefits of MaximDL’s reduction-frame handling for .FIT files may or may not carry over to DSLR raw files – I have not tried reduction of DSLR images in Maxim.
MaximDL’s stacking seems fair; however, I haven’t had need to use it for alignment and stacking. I also haven’t tried MaximDL on large images such as DSLR files, the largest I typically use in Maxim being those from my SBIG ST8-XME.
Find MaxDSLR here: http://www.cyanogen.com/products/maxdslr_main.htm
RegiStar
This is a fantastic piece of software for aligning and combining individual astrophotographs from digital SLR cameras. It works very efficiently with large files, is amazingly capable in aligning photographs and has quite good stacking algorithms built in as a bonus.
This software is primarily intended simply for the registering (aligning) of frames so that they can be combined. It is so good that you can combine old film images with new digital images, or digital images from different cameras with different focal lengths, and all sorts. It also easily handles field rotation (fixed tripod shots are OK) and pretty much any other distortion.
The problems I have with this software are that it does not read Canon RAW files, so conversion to some other format such as TIF is required first; that it does not handle reduction of the images, leaving you needing another piece of software (like Photoshop) to do that manually first; and that when combining frames into a stack it provides no weighting of frames or sigma exclusion of noise, which leaves it primarily useful for registering frames and saving them, not stacking them.
RegiStar’s excellence at registering frames comes with a price, and in this case that’s about US$159.
The version of RegiStar that I am familiar with is 1.0, and it hasn’t been updated for some time (2004). This means it’s not up to date with current file types (RAW), but that doesn’t detract from its excellent ability to align TIF images. Increasingly, as time ticks on and no further updates are published, you would be wise to consider an alternative piece of software which is updated more regularly, such as PixInsight.
Find RegiStar here: http://www.aurigaimaging.com/
ImagesPlus
I cannot say much about ImagesPlus as I have not used it for DSLR image processing. However many people do, and it comes highly recommended. You can find plenty of information about it around the web.
Find ImagesPlus here: http://www.mlunsold.com/