Reduction of the spectra from the Ondřejov Echelle spectrograph
Marek Skarka
Version 1.2, May 25, 2023

1 Motivation and introduction

This document is a collection of the necessary steps and additional recommendations for the basic reduction of the echelle spectra obtained with the Ondřejov Echelle Spectrograph (OES, Koubský et al. 2004), which is installed at the Perek 2-m telescope at the Astronomical Institute of the Czech Academy of Sciences in Ondřejov. OES, like every other spectrograph, is a unique instrument with its own characteristics that require special handling in some of the steps. My motivation for writing this document (and for preparing further supporting materials such as video tutorials) was mainly triggered by the need to teach my students data reduction. Since OES has recently started to be used by other groups at the stellar department as well, it may also be helpful to them. This presentation is a kind of cookbook and tips & tricks collection that describes the system of data reduction used in the Exoplanetary group. It does not deal with the basic usage of Iraf and its tasks, although I give some useful hints on using Iraf, and I do not deal with the principles of echelle spectroscopy here. Together with my colleagues from the Exoplanetary group, we spent quite a lot of time fine-tuning the steps to get the best possible results regarding the radial velocity measurements, and we developed semi-automatic procedures in the form of bash and Iraf scripts that make the data reduction easier and faster. We demonstrated our methods and results in Kabáth et al. 2020. Based on the OES spectra, the first Ap star in a tight eclipsing binary system was discovered (Skarka et al. 2019) and the first brown dwarf system observed by the TESS mission was revealed (Šubjak et al. 2020), just to mention some of the outputs.
The whole process of learning how the OES behaves regarding precise radial velocity determination has been a very tough task and lots of people have been involved. Special thanks go to Dr. Petr Kabáth for starting to use the OES on a regular basis and for triggering this work in 2015. Further thanks go to Dr. Tereza Klocová, the technicians of the 2-m telescope, the students in the exoplanetary group, and mainly to Dr. Eike Guenther from the Tautenburg observatory in Germany, who significantly helped to solve the main issue with the observed trends in radial velocities during the nights and who was our teacher during the whole learning process. Complementary to this presentation, the reduction steps are summarized in the BSc thesis by David Štegner (MU Brno, 2020) and in three video tutorials (in Czech) that I prepared for my student in November 2020 to show him the very basics of data reduction with Iraf and radial velocity determination. At the end of this presentation, examples of the Iraf commands are given. I hope that this presentation will help the students and colleagues to efficiently reduce the data from the OES. In addition, I hope that, together with the tutorials, it will serve as basic supporting material for the forthcoming course called 'Echelle spectroscopy and radial velocity determination', which will be offered to students annually from the autumn semester 2021 at the Masaryk and Charles universities. I will be happy to receive any comments, suggestions, and discussion about the data reduction because things can always be done better. The document will be continuously modified and updated.
Marek Skarka, May 2023, Ondřejov

2 Log of changes
May 25, 2023: Version 1.2 released, nonfunctional links fixed, some details added, links to bash scripts and video tutorials added
March 10, 2021: Nonfunctional links fixed
March 9, 2021: Version 1.0 released

3 The Ondřejov Echelle Spectrograph (OES)
● The OES has been installed since 2005 in the coudé focus of the Perek 2-m telescope at AI ASCR in Ondřejov.
● The OES is a high-resolution echelle spectrograph with R~52000 at 5000 A, fed by an optical fiber since late 2019. The CCD is cooled with liquid nitrogen and the room with the spectrograph is temperature-stabilized to ±0.5 K.
● We can get SNR~10 for a 12.3-mag star with a 1-hour exposure.
● The technical specification can be found at https://stelweb.asu.cas.cz/en/telescope/instrumentation/oes-spectrograph/, the basic description in Koubský et al. 2004. The performance of the spectrograph is detailed in Kabáth et al. 2020.
● Since 2016, an iodine cell has been installed and can be used for precise radial velocity observations.

4 Drift of the echellogram
The main mechanical parts of the OES at the Perek telescope: grating, objective lens, Dewar vessel, iodine cell.
Fig. 1. Displacement of the Dewar vessel after refilling nitrogen, measured in April 2018 (black circles), and relative pixel shift of the position of the first aperture during one night (red diamonds). The pixel shift comes from a different night. The lines are linear fits and are plotted only to highlight the trends and guide the eye. The detail shows the micrometer mounted on the Dewar vessel for measuring the position of the vessel.
● Nitrogen is refilled manually on a daily basis. Manipulation with the Dewar vessel and the subsequent evaporation of nitrogen during the night result in shifts of the echellogram between nights and an additional small drift of the echellogram on the CCD in the vertical direction (Fig. 1). In addition to that, there are also random shifts due to the variations of the surrounding conditions (Fig.
1). This has an impact on the combination of the frames as well as on the wavelength calibration of the resulting spectra.
● After the upgrade to the fibre-fed configuration, the echellogram shifted significantly (Fig. 2).
Fig. 2. The difference between flats taken in 2018 (top panel) and in 2021 (bottom panel). The shift is apparent both in the raw echellograms and in their profiles.

5 Installation of Iraf and initial setup
● The necessary packages can be downloaded from https://github.com/iraf-community and installed manually. This can be a very tough task because one needs to install lots of (mainly 32-bit) packages and additional dependencies and export the correct paths.
● The easiest way to install Iraf on Ubuntu or Mint 20 is:
○ using the software manager (version 2.16.1+2018.03.10-2, fig. below),
○ Ureka,
○ Anaconda.
If this fails, one can follow the instructions at https://stelweb.asu.cas.cz/~skarka/Instalace_Iraf and also the youtube tutorial https://www.youtube.com/watch?v=BtTr_F08y7o
● To be able to run the GUI, x11Iraf has to be installed - the installation steps are described at https://github.com/iraf-community/x11iraf
● Iraf can be started from the terminal -> xgterm -> cl (or irafcl or ecl, depending on the installation)

6 Installation of Iraf and initial setup
● To make things easier, it is advisable to take these steps:
○ Modification of login.cl
■ Set stdimage = imt4096 - this allows a larger image and a larger scale of the graphic screen.
■ Set imtype = "fits" - Iraf then automatically assumes that names without a suffix are of the fits format (for example in cl scripts or when giving names in parameters using epar).
■ Add this to login.cl: set fkinit='noappend' noao imred ccdred echelle rv crutil keep - this modification reduces the need to explicitly load most of the tasks we need for the reduction of the echelle spectra. They will be loaded by default at the start of Iraf.
● login.cl can be found in the folder where Iraf is installed.
Usually it is in the hidden folder /home/user/.iraf or in /usr/lib/iraf (when installed from the software manager) but it can also be elsewhere - it depends on your installation (find it, it is worth it! ;-)

7 Installation of Iraf and initial setup
○ Modification of the xgterm - larger font, scrollable terminal:
■ Include alias newterm='xgterm -font -*-fixed-medium-r-*-*-20-*-*-*-*-*-iso8859-* -sb -fg "grey" -bg "black"&' in the hidden file /home/user/.bashrc in the section with aliases (# some more ls aliases). The available fonts and their sizes in xgterm are very limited and 20 is the largest size. You can adjust the font and background according to your preferences. A scrollable terminal has only advantages.

8 Installation of Iraf and initial setup
○ Copy the list of ThAr lines in thar_new.dat to your Iraf installation folder /noao/lib/linelists. This file contains more ThAr lines with more precise values than the default thar.dat file.
○ Add the Ondrejov observatory to /noao/lib/obsdb.dat:
observatory = "ONDREJOV"
name = "Ondrejov, Czech Republic"
longitude = -14:47:01
latitude = 49:54:38.0
altitude = 528
timezone = -1
This is needed by the tasks rvcorrect and fxcor to identify our observatory.

Work in Iraf is possible in two ways:
1. In editing mode, through the command epar NameOfTheTask, where the parameters of the task can be adjusted.
2. Through the command line, where the parameters are specified directly. If particular parameters are not given on the command line, the values previously set through epar are used. Therefore, I suggest first checking the values through epar and only then running the task from the command line. The reduction steps run through the command line can be used in scripting.
Scripts are .cl files and are run through cl < script.cl

9 Some useful hints
● The graphical window often crashes when resized or maximized (depending on the version and installation). This can be overcome by pressing ':', changing the size of the window, and then deleting the ':'.
● Arbitrary zooming in the graphical window is done by pressing 'w', moving the cursor to a starting position, pressing 'e', moving to the final position of the zoomed region, and pressing 'e' again.
● The tasks and their particular parameters are well explained at https://iraf.net/ (the tasks can be searched by their names).
● Further useful tips can be found in discussion forums on the internet.
● Iraf is a powerful package but it also contains lots of bugs and can be tricky. One should always be cautious when using it.

10 Structure of the folders and files
● The raw spectra from the OES are named eyyyymmddzzzn.fit, where 'z' means zero and 'n' is the number of the frame in the particular night. Thus, the information about the file type (object, flat, zero, comp) is only in the fits header.
● Therefore, after downloading the data from the common storage, we store and sort them into folders named after the observing nights as yyyymmdd (for example, August 20, 2020: 20200820). In these folders, the files are sorted according to the type of the fits frames into the following subfolders:
○ comp (comparison ThAr spectra)
○ flat (flat-field frames)
○ object (contains folders with the fits files of the scientific frames)
○ zeros (bias frames)
● With such a structure it is easy to orient oneself and prepare lists of the necessary files.
● Depending on the type of analysis one would like to perform, it is advisable to include the Julian date and the UT of mid-exposure, rename the object files, etc. This can easily be done using Python or other scripts.

11 Steps in the data reduction
The whole basic data-reduction process can be summarized in these steps:
1. Preparatory works
a.
Initial (visual) check of files, preparation of the lists of files, copying files to a separate folder
b. Modification of the fits frames and their headers (bad pixels correction, cosmic rays removal, trimming, mid-exposure time calculation)
2. Preparation of the master frames (master flat and master bias) and bias correction
3. Preparation of the template for the aperture extraction
4. Preparation of the normalized flat and correction of the scientific frames by this normalized flat
5. Extraction of the apertures of the scientific frames
6. Extraction of the apertures of the ThAr spectra
7. Identification of the lines in the comparison spectrum
8. Wavelength calibration of the scientific spectra
9. Normalization of the scientific spectra
10. Merging the scientific spectra
11. Final corrections

12 Preparatory works
● First of all, one must be sure that all the frames entering the reduction process are all right and of the proper type (bias, comp, object, etc.). Visual inspection of all the frames is more than advisable before the reduction starts.
● There are lots of steps in the reduction process. One should keep track of them and should be able to get back to any of the steps and re-do particular steps if necessary. Therefore, it is strongly advised that whoever reduces the spectra adopts some system for naming the files or creates his/her own.
● To preserve the original files, it is recommended to create a working directory and copy all the needed .fits files there. In this folder, the analysis and data reduction take place. If something goes wrong, the files can easily be replaced by the original ones and the reduction can be run again.
● To make the work faster, it is advisable to work with lists.
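One of the preparatory steps is the mid-exposure time calculation. It can be sketched in Python as follows; this is only a minimal illustration assuming ISO-formatted DATE-OBS and EXPTIME values read from the fits header (the actual OES header keywords should be checked before use):

```python
from datetime import datetime, timedelta, timezone

def mid_exposure_jd(date_obs: str, exptime_s: float) -> float:
    """Julian date of mid-exposure from the UT start time (DATE-OBS)
    and the exposure length in seconds (EXPTIME)."""
    start = datetime.fromisoformat(date_obs).replace(tzinfo=timezone.utc)
    mid = start + timedelta(seconds=exptime_s / 2.0)
    # The Unix epoch 1970-01-01T00:00 UT corresponds to JD 2440587.5
    return mid.timestamp() / 86400.0 + 2440587.5
```

For example, a 2-hour exposure started at 2000-01-01T11:00:00 UT has its mid-exposure at JD 2451545.0. The resulting value can then be written back into the header by a script.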
Because the exact steps are known in advance, it is easy to develop a system of naming (for example, in the exo group, we adopted the prefix 'bf_' for files that are bias- and flat-field-corrected, the suffix 'cont.fits' is dedicated to files that are wavelength-calibrated and normalized to continuum, etc.). Thus, before we start, we create a dozen lists.
● For performing all these preparatory steps, we use bash scripts. Example here.

13 Cosmic hits removal
● After a few tests, we use two steps of cosmic hits removal, which appeared to be complementary (at least to some extent):
○ The DCR routine (http://users.camk.edu.pl/pych/DCR/) is much more efficient than the Iraf task cosmicrays. It is also able to remove large hits. To run it properly, it is necessary to copy the setup file dcr.par to the working directory. The default values work well. The syntax for using dcr is: dcr file.fits result.fits cosmics_log We usually overwrite the original files, thus result.fits=file.fits. The DCR cosmic removal is done within the bash script together with the other preparatory works.
● In a very few cases, DCR is not efficient in removing all cosmic hits (mainly the small and faint ones). Thus, we additionally use the Iraf task cosmicrays (in the crutil package). The comparison of the efficiency of both routines can be seen on the next slide. Cosmic removal should be applied to all frames except the comp frames, because the sharp emission lines can be confused with cosmic hits.

14 Cosmic hits removal
Original raw frame (1-hour exposure); after applying pure Iraf cosmicrays (only very few features removed); after running DCR (most of the cosmics are fully removed).

15 Bad pixels removal, trimming
Fig. 2. Original raw frame (1-hour exposure). Fig. 3. The frame after bad pixels correction and cosmics removal. Fig. 4. Trimmed and cleaned frame ready for further analysis. Obviously, not all cosmic hits were successfully removed.
● Bad pixels are removed by the Iraf task fixpix.
It is necessary to prepare the file badpixmask, which describes the positions of the bad pixels and columns. It is advisable to check the badpixmask from time to time because new defects appear on the CCD.
● Because the very edges of the frames are corrupted (Fig. 1) and significant parts of the frames are empty (Figs. 2 and 3), we trim all the frames, including the comp frames, to the area 5:2039,500:1749 (Fig. 4) using the task imcopy.
● We perform these steps automatically via an Iraf script.
Fig. 1. Bad pixels on the edge.

16 Master bias correction
● Usually we take 10 bias frames at the beginning of the night and 10 after the end of observations.
● It is not necessary to take darks because they have basically the same values as biases thanks to the efficient cooling of the CCD.
● Before the bias frames are combined, they should be checked at least statistically using the task imstat to identify corrupted frames.
● The frames can be combined via the task zerocombine; we recommend using combine=median, other values can be left at their defaults.
● The frames can then be bias-corrected using the task imarith.
Fig. 1. Example of the imstat output. Fig. 2. Example of a combined bias.

17 Master flat
● Before the observing night we take 10 flat frames using the lamp. We do not take dome flats because telluric lines can be present. This can even happen for the lamp flats, thus it is good to check the files visually and omit such corrupted flats.
● To enhance the flux and clean the defects and artifacts, we create a master flat frame from the bias-corrected flat frames using the task flatcombine. Again, the value of the parameter combine should be median.
● WARNING: Be sure that you are not combining flats from different parts of the night due to the drift of the echellogram (see slide 4).
● Before the normalized flat can be prepared, the apertures must be extracted (next step).
Fig. 1.
An example of a master flat frame.

18 Aperture template preparation
● The flux from the echelle orders can be extracted from every fits frame individually. This means that the centers of the apertures and their shapes need to be defined for every single frame from scratch, which is quite time-consuming. In addition, defining apertures in faint targets can be dangerous because the apertures are poorly defined. On top of that, it is good to have exactly the same number of apertures at the same positions for all the frames. Thus, it is better to define the shape of every order using one good-quality frame and then use it as a template for the aperture extraction of the other spectra. Such an aperture template can be prepared from a bright-star frame or a flat-field frame.
● We usually observe a star of an early spectral type at the beginning of every observing night (Vega in summer, beta Per in winter). An early spectral type of the template star is beneficial because the echelle orders in the blue part of the spectrum are also well defined (compare the bottom parts of the Vega and flat-field spectra in Figs. 1 and 2).
● It is advisable to prepare the aperture template for every observing night, but a template from a different night can also give results of good quality.
● We do not perform background fitting because the apertures in the red region overlap and the background fitting cannot be done properly.
Fig. 1. Echellogram of Vega. Fig. 2. Echellogram of a flat-field frame.

19 Aperture template preparation
● The first step in the preparation of the aperture template is the identification of the apertures. This is done through the task apall.
● In the whole echellogram from the OES, 56 apertures can be identified. However, the orders in the red region overlap and there is heavy fringing present above ~7000 A. The red region is not optimal for precise radial velocity determination due to the presence of the telluric lines. On the other hand, there is very low signal in the blue part of the spectrum.
Thus, it is not necessary to extract all the available apertures; it depends on the purpose.
● The width of the apertures should not be larger than 5 px (lower=-5.0, upper=5.0), otherwise the orders overlap in the red part of the spectrum; the apertures in the blue are approximately of this size anyway.
● Apertures can be found automatically, but one should always check the identification visually because the automatic procedure often fails (note the gap between apertures 6 and 7 in Fig. 1) and the apertures then have to be adjusted manually. It can be seen that the blue apertures are more prominent in the echellogram of an early-type star (Fig. 1) than in the flat-field frame (Fig. 2).
Fig. 1. Automatically identified apertures (40) in the echellogram of Vega. Fig. 2. Automatically identified apertures (40) in the echellogram of a flat-field frame.

20 Aperture template preparation
● The apertures are identified by pressing 'm', deleted by 'd', centered by 'c' and 'g', and the order sequence can be renumbered by putting the cursor on the first aperture and pressing 'o'. When finished with the identification of the apertures, one quits by pressing 'q' and proceeds to the next step, which is aperture tracing (after answering 'yes' three times).
● It is advisable always to select the same starting aperture and the same number of apertures so that the final data from every night always cover the same wavelength range. This is very important. If the frames have different apertures, it is basically impossible to perform the wavelength calibration without manually redefining the apertures in the definition files in the database folder, which is a quite time-consuming and tough task.
Fig. 1. Well-prepared aperture template with adjusted apertures.

21 Aperture template preparation
● After the identification, the aperture tracing starts. This step consists of polynomial fitting of the echelle orders. Chebyshev or Legendre polynomials of order 5-20 usually give good results, with the RMS of the residuals below 0.05 px.
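The trace fitting can be sketched outside Iraf as an ordinary polynomial least-squares fit followed by an RMS check. This is only a minimal stand-in for the Chebyshev/Legendre fitting in apall: plain powers of x are used here, which is fine at low orders but ill-conditioned at the orders 10-20 where Iraf's orthogonal polynomials behave better.

```python
def polyfit(xs, ys, order):
    """Least-squares polynomial fit via the normal equations
    (adequate for low orders; a toy replacement for apall's fitter)."""
    n = order + 1
    # Normal-equation matrix A and right-hand side b
    a = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(a[r][c] * coef[c] for c in range(r + 1, n))) / a[r][r]
    return coef

def rms_residuals(xs, ys, coef):
    """RMS of the fit residuals - the quantity kept below ~0.05 px when tracing."""
    res = [y - sum(c * x ** i for i, c in enumerate(coef)) for x, y in zip(xs, ys)]
    return (sum(r * r for r in res) / len(res)) ** 0.5
```

The same iterate-inspect-refit loop as in apall applies: fit, check the RMS, remove outliers, and increase the order only if the residuals still show structure.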
● The order of the polynomials is changed by ':o number'. Fig. 1 shows the shape of the 39th aperture from the previous slide (rows versus columns on the CCD chip).
● It is recommended to check the residuals after the fit by pressing 'k', which shows the normalized residuals, or by pressing 'j', which shows the residuals in px. The overall shape of the aperture can be displayed by pressing 'h'; outliers can be deleted by pressing 'd'. When satisfied with the fit, one quits by pressing 'q' and the fitting of the next order starts.
● It is a good idea to find a proper name for the aperture template and use it every time.
Fig. 1. The 39th aperture (left panel), residuals in px (middle panel), relative residuals (right-hand panel).

22 Normalized flat
● The pixels in the apertures have different sensitivities, thus we prepare a normalized flat using the task apflatten.
● As the input we use the master flat (mflat) from one of the previous steps, and as the reference frame we use the aperture template prepared in the previous step. I suggest leaving edit=yes, which allows fine centering of the apertures. The default recentering only shifts all the apertures by a constant offset with respect to the aperture template. By pressing 'a' during the aperture definition and subsequently 'c', all the apertures will be individually centered to the best positions.
● The flux is normalized by fitting the blaze function with polynomials. We usually use Legendre or Chebyshev polynomials of order 5-20. The red and infrared parts of the spectra can be difficult to normalize due to heavy fringing.
Fig. 1. Positions of the apertures of the master flat frame according to the aperture template. Fig. 2. Blaze function fitting. In the detail, relative residuals are shown.

23 Aperture extraction (scientific frames)
● This step follows after the bias has been subtracted from the scientific frame and the frame has been divided by nflat.
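The bias subtraction and the division by the normalized flat performed with imarith are simple pixel-wise operations. As a minimal sketch on tiny hypothetical frames (plain nested lists stand in for real fits data; the function names are illustrative):

```python
def imarith(frame, op, other):
    """Pixel-wise arithmetic on 2-D frames, mimicking the Iraf imarith task."""
    ops = {"-": lambda p, q: p - q, "/": lambda p, q: p / q}
    return [[ops[op](p, q) for p, q in zip(row_a, row_b)]
            for row_a, row_b in zip(frame, other)]

def bias_flat_correct(science, master_bias, normalized_flat):
    """Reduction order used for the scientific frames: subtract the master
    bias first, then divide by the normalized flat."""
    return imarith(imarith(science, "-", master_bias), "/", normalized_flat)
```

In Iraf the same is achieved by two imarith calls; the order of operations (bias first, flat second) is what matters.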
● This time, the aperture extraction (task apall) is run on the scientific frames. It is recommended not to do this step fully automatically but to check the proper identification of the apertures and to adjust their centering (by pressing 'a' and 'c'). Alternatively, the individual centering of the apertures can be done by setting shift=no (detail of Fig. 1).
Fig. 1. Apertures identified in the scientific frame. The left details show the effect of pressing 'a' and 'c' (proper centering of the apertures).

24 Extraction of ThAr comp spectra
● This step is quite unusual, though not unique among observatories. Usually, all the comparison (calibration) ThAr spectra are extracted using the same aperture template as for the scientific frames. Subsequently, the comparison ThAr spectra taken before and after a scientific exposure are used for the wavelength calibration. However, this classical approach always led to trends in the radial velocities during the nights on the order of ~1 km/s (Fig. 1). In addition, there is no correlation between the final radial velocity and the relative pixel shift of the comparison spectra, which could be expected (Fig. 2).
Fig. 1. Radial velocities using the classical approach (interpolation between the ThAr spectra taken before and after the exposure). Fig. 2. Radial velocity together with the pixel shift of the reference ThAr frame (multiplied by 10). There is no apparent correlation.

25 Extraction of ThAr comp spectra
● The solution is to use only one ThAr spectrum obtained at the end of the night (the last of the sequence of ten frames, i.e. the most stable spectrum). This approach removes the trends during the night. For every scientific frame, a unique comparison spectrum is extracted by using the particular scientific frame as a unique aperture template.
This ensures that the apertures of the comparison spectrum are extracted exactly at the positions where the scientific apertures are extracted, which eliminates possible shifts (in the apall task, recente=no and edit=no). However, due to the drift of the echellogram during the night, this can lead to a poor extraction of the ThAr spectra. In such a case, recente=yes is necessary. The application of the described approach needs to be further tested and investigated.
● Even better results in radial velocities can be achieved when the apertures of the scientific frames are narrow and, subsequently, the apertures of the ThAr spectrum are also narrowed to ±1 px (Fig. 1). This helps to suppress the impact of the tilt of the lines (Fig. 2). Warning: 1) This works only for bright stars. 2) It is not sufficient to narrow the aperture by adjusting the lower and upper keywords in the apall task. It is necessary to change the limits in the definition files of the scientific frames in the database folder (either manually or by some script).
Fig. 1. The effect of using one ThAr spectrum for the wavelength solution (no trend, compare with the figure on the previous slide) in combination with narrow apertures of the scientific frames and the ThAr frame. The scatter is lower for smaller apertures. Fig. 2. Part of the ThAr echellogram showing the tilted lines. The nonuniform tilt in combination with a large aperture causes additional scatter in the radial velocities.

26 Template for the wavelength calibration
● Similarly to the aperture extraction, one could prepare a separate calibration template for every scientific frame. However, this is extremely time-consuming. Thus, it is better to prepare one template and then use it for the automatic identification of the ThAr lines in the spectra. The best way to prepare such a template is to extract the apertures from the comp frame using the aperture template frame (or any other frame with very high SNR).
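The scripted narrowing of the aperture limits in the database definition files mentioned above can be sketched as a text rewrite. This is only an illustration under loudly stated assumptions: it assumes the 'low'/'high' lines of an Iraf aperture database file carry two offsets, of which the last one is the cross-dispersion half-width (which of the two numbers corresponds to the slit direction depends on dispaxis, so verify this against one of your own database files first); the sample snippet in the usage note is hypothetical.

```python
import re

def narrow_apertures(text: str, width: float = 1.0) -> str:
    """Clamp the aperture half-widths on the 'low'/'high' lines of an Iraf
    aperture database file to +-width px.  ASSUMPTION: the cross-dispersion
    offset is the last number on those lines - check your dispaxis."""
    out = []
    for line in text.splitlines(keepends=True):
        m = re.match(r"(\s*)(low|high)(\s+)(\S+)(\s+)(\S+)", line)
        if m:
            sign = -1.0 if m.group(2) == "low" else 1.0
            # Keep the dispersion-axis offset, replace only the half-width
            line = "".join(m.group(1, 2, 3, 4, 5)) + f"{sign * width:g}\n"
        out.append(line)
    return "".join(out)
```

Applied to a hypothetical entry such as "low -1023. -5." / "high 1023. 5.", the limits become -1 and 1 while the dispersion-axis offsets stay untouched.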
● The calibration is performed using the task ecidentify. The emission lines of Th and Ar with known wavelengths are identified by pressing 'm' (Fig. 1) and can be deleted by 'd'. Moving between the apertures is done by pressing 'j' and 'k'.
● The list of lines can be put in the working directory, but the default lists are stored in the Iraf directory /noao/lib/linelists. There are a few atlases of spectral lines (e.g. https://www.noao.edu/kpno/tharatlas/thar/), but it is quite difficult to identify the lines in the orders of the OES with these atlases. Thus, we prepared an atlas specially for the OES in 2018 (between 4000 and 7000 A, Fig. 2). Since April 2020, there is a nice atlas covering 3840-7600 A at http://stelweb.asu.cas.cz/~slechta/2mEN/echelle/.
Fig. 1. The identified lines in one of the apertures (marked by small vertical lines above the emission ThAr lines). Fig. 2. An example of the third aperture of the ThAr atlas for the OES.

27 Template for the wavelength calibration
● To get the best possible precision of the radial velocities, it is advisable to identify at least 3 well-separated and prominent lines in every order (two at the opposite edges, one in the center). For other spectrographs, it is suggested that only lines in a few apertures in the red, blue, and green need to be identified, but our experience shows that without the identification of lines in every order, the radial velocities and the telluric correction of the OES data are not optimal.
● It is advisable to save partial results during the line-identification process (by pressing 'q'), for example after finishing the identification in every 5-10 apertures. It can easily happen that one presses the key 'l', which automatically identifies lines. This spoils the wavelength solution because the fit of the dispersion function (the pixel-to-Angstrom relation) has not been established yet. In such a case, the identification needs to be done again from scratch, or restarted from the previously saved file.
● When enough lines in every order are identified, one can start fitting the dispersion function by pressing 'f'. It is always advisable to start with a low polynomial order (xo=2, yo=2). When starting with a higher-order polynomial, one risks that the solution will be corrupted.
● The dispersion-function fitting is an iterative process of deleting outliers (by pressing 'd') and re-fitting (key 'f'). In our experience, it is better to iterate more often, removing a small number of outliers each time, rather than to delete lots of outliers at once and then re-fit; such a solution can be corrupted.
● When there are no obvious outliers left, one should increase the order of the polynomial and repeat the previous point. We end up at order 6-7 for both the x and y axes, when the RMS of the solution is below 0.002 A. After reaching this point, one presses 'q' to quit the fitting and can press 'l', which automatically identifies new lines (the default number is 100 lines; we use a few thousand, usually 2000). Then the fitting process needs to be repeated to get a good solution with RMS<0.002 A.
Fig. 1. Some of the steps in the iterative fitting of the dispersion function (xo=2, yo=3; xo=3, yo=3; xo=4, yo=3; xo=6, yo=3), with the final fit on the right, reached after a few additional fitting steps, automatic identification of lines, and new iterative fitting.

28 Template for the wavelength calibration
● After finding a good solution of the dispersion function, it is recommended to visually check the identification of the lines (and the differences between the laboratory and observed wavelengths) in at least a few apertures. The most deviating lines can be deleted directly by 'd'.
● The template must be prepared with exactly the same number of apertures at the same positions as the scientific frames. This means that if you have a template from the past with a different number of apertures (or apertures starting at a different position), you need to prepare a new ThAr template.
There is a workaround by modifying the definition files in the database folder, but this is very time-consuming and one can easily make a mistake.
● Similarly to the aperture template, it is recommended to find a suitable name for the ThAr template and strictly use it.
Fig. 1. Initial identification of the lines (left panel) and the lines automatically identified after pressing 'l' and iterative fitting with outlier removal (right-hand panel). Note that the horizontal axis is in pixels on the left, while it is in Angstroms on the right. The numbers show the observed (left) and laboratory (right) wavelengths of the line marked with the cursor. Jumping from line to line can be done by 'n' and '-'.

29 Wavelength calibration
● When the wavelength-calibration template is ready to use, one can identify the lines of every ThAr spectrum from the night automatically by applying the task ecreidentify.
● The indicators of the goodness of the automatic identification of the ThAr emission lines are the number of identified lines ('Found' in Figs. 1 and 2), the number of fitted lines ('Fit' in Figs. 1 and 2), and, most importantly, the RMS. All these numbers must be almost the same as for the ThAr template. The x-shift can also serve as a kind of indicator, but it is not trustworthy.
● One can use a ThAr template from a different night, but only when considering the previous points.
Fig. 1. Example of well-refitted wavelength solutions. The number of lines is almost or exactly the same as in the ThAr template, and the RMS is close to 0.002 (the RMS of the template). Fig. 2. Example of poorly refitted wavelength solutions. The number of lines differs from the number of lines in the ThAr template, and the RMS is far from 0.002 (the RMS of the template).

30 Wavelength calibration
Fig. 1. Example of well-refitted wavelength solutions. The number of lines is almost or exactly the same as in the ThAr template, the RMS is close to 0.002 (the RMS of the template). Fig. 2.
Example of poorly refitted wavelength solutions. The number of the lines differs from the number of lines in the ThAr template, the RMS is far from 0.002 (RMS of the template). The values of all parameters in Fig. 1 are close to the values of the template, thus one can thrust the solutions. However, because the RMS is a bit higher than 0.002 A, it is advisable to check every comp spectrum individually through ecidentify task and refit the solution. In case of such small differences in RMS, typically only a few lines need to be deleted to adjust the solution. The values of all parameters in Fig. 2 are quite different from the values of the template, thus, one should not use these solutions. Typically, there is some issue with the extracted ThAr spectra - some apertures are corrupted (Fig. 3). It is important to identify the problem (most likely with the aperture extraction of the scientific and ThAr frames) and re-do the analysis, if needed. Fig. 3. Example of poorly ‘ecreidentified’ ThAr spectrum. The left panel shows the 6th aperture which is all right, while the right-hand panel shows corrupted 7th aperture with no ThAr spectrum. The wavelength solution is, thus, wrong. 31 Wavelength calibration ● When the dispersion function of all comparison spectra are well defined, one needs to pair the scientific frames with the corresponding ThAr spectra. This is done in the task refspectra. ● The wavelength calibration of the scientific frames is done through the task dispcor. Fig. 1. An example of aperture containing Hβ region before the wavelength calibration (left panel) and after calibration (right-hand panel). 32 Normalization of the spectra ● Before the normalization of the spectra starts, it is advisable to splot the wavelength-calibrated spectra one by one and go through all the apertures to see the extracted and calibrated spectrum. By doing this, one can delete the artefacts and other unwanted features from the spectra (e.g. 
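The consistency check described above — comparing ‘Found’, ‘Fit’, and the RMS of each refitted solution with the values of the ThAr template — can be sketched as a small helper. The function name and both tolerances are illustrative choices, not Iraf defaults or the group's actual thresholds:

```python
def solution_ok(found, fit, rms, tmpl_found, tmpl_fit, tmpl_rms,
                line_tol=0.05, rms_tol=1.5):
    """Accept a refitted wavelength solution only if its statistics
    are close to those of the ThAr template.

    line_tol: allowed fractional deviation of the Found/Fit line counts
    rms_tol:  allowed ratio of the solution RMS to the template RMS
    (both thresholds are illustrative, not Iraf parameters)
    """
    lines_ok = (abs(found - tmpl_found) <= line_tol * tmpl_found and
                abs(fit - tmpl_fit) <= line_tol * tmpl_fit)
    rms_ok = rms <= rms_tol * tmpl_rms
    return lines_ok and rms_ok

# template: 2000 lines found, 1950 fitted, RMS 0.002 A (illustrative numbers)
print(solution_ok(1995, 1940, 0.0021, 2000, 1950, 0.002))  # close -> True
print(solution_ok(1200, 900, 0.02, 2000, 1950, 0.002))     # poor  -> False
```

A borderline solution that passes such a check but has a slightly elevated RMS would still, per the recommendation above, be inspected manually with ecidentify.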
the relics from the cosmics removal, as apparent in Fig. 4 on slide #15).
● The area to be linearly interpolated can be defined by pressing ‘x’ at the starting and final positions of the region that should be replaced. The changes can then be saved by pressing ‘i’ and typing the name. This small additional step will help to normalize the final spectrum better.
Fig. 1. The spectrum before (left panel) and after (right-hand panel) the artefact was removed. Note: Be sure that you are not removing real emission features!
33 Normalization of the spectra
● Normalization using the task continuum usually works well for low-SNR spectra. However, it is always better to perform the normalization manually, keeping visual supervision over every aperture individually. The same order of the fitting function can work well for one aperture but give a bad result in another.
● When normalizing hot stars, it is advisable to define the parts of the spectra that should be fitted (outside the broad lines) by pressing ‘s’ at the beginning and end of each region. Such regions can be deleted by pressing ‘t’.
● The parameters with the major impact on the fitting are: order, low_rej, high_rej, and naverage. For early-type stars with few lines and/or spectra with high SNR, it is advisable to use naverage~10, while for stars with many spectral lines, naverage~3 usually gives good results.
● A good normalization can usually be achieved with chebyshev or legendre polynomials of low order (typically 4-20).
Fig. 1. The left panel shows the non-normalized spectrum with the fit of a 6th-order Chebyshev polynomial. The top right-hand panel shows the residuals after a fit with a polynomial of order 5, while the bottom right-hand panel shows the fit of order 6. Apparently, order 6 gives a better result (but still not optimal).
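The role of the order, low_rej, high_rej, and naverage parameters described above can be illustrated with a rough Python analogue of the fitting inside Iraf's continuum task. This is a simplified stand-in, not Iraf's actual implementation; the function name, the toy spectrum, and the defaults are illustrative:

```python
import numpy as np

def normalize_aperture(wave, flux, order=6, low_rej=2.0, high_rej=3.0,
                       naverage=3, niterate=10):
    """Roughly mimic continuum for one aperture: average every `naverage`
    points, fit a Chebyshev polynomial, iteratively reject points far below
    the fit (absorption lines) or far above it (spikes/artefacts), and
    finally divide by the fitted continuum."""
    n = (wave.size // naverage) * naverage
    w = wave[:n].reshape(-1, naverage).mean(axis=1)
    f = flux[:n].reshape(-1, naverage).mean(axis=1)
    keep = np.ones(w.size, dtype=bool)
    coef = np.polynomial.chebyshev.chebfit(w, f, order)
    for _ in range(niterate):
        coef = np.polynomial.chebyshev.chebfit(w[keep], f[keep], order)
        resid = f - np.polynomial.chebyshev.chebval(w, coef)
        sigma = resid[keep].std()
        if sigma == 0.0:
            break
        new = keep & (resid > -low_rej * sigma) & (resid < high_rej * sigma)
        if new.sum() == keep.sum():
            break                      # rejection converged
        keep = new
    cont = np.polynomial.chebyshev.chebval(wave, coef)
    return flux / cont, cont

# toy spectrum: a smooth continuum, two absorption lines, mild noise
rng = np.random.default_rng(1)
wave = np.linspace(4000.0, 4500.0, 900)
flux = 1000.0 + 0.2 * (wave - 4000.0) + rng.normal(0.0, 2.0, wave.size)
flux[300:312] *= 0.6
flux[600:606] *= 0.7
norm, cont = normalize_aperture(wave, flux, order=4)
```

The asymmetric low_rej/high_rej pair is what lets the fit ignore absorption lines while still tracking the continuum, mirroring the advice above to tune these per aperture.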
34 1-d spectrum preparation
● The spectra after normalization are still in the form of a 2-d spectrum, i.e., the spectrum is separated into apertures. This is fully sufficient for most types of analysis. Actually, working with the apertures is even better, because different apertures have different resolution and merging degrades it. However, for easier manipulation and an overall view, it is good to produce a merged 1-d spectrum. It is also sufficient for the determination of radial velocities larger than ~1 km/s.
● Merging is done through a few steps:
a. Calculation of the normalization function
b. Merging of the non-normalized spectra
c. Merging of the normalization functions
d. Division of the spectra from points b and c.
Fig. 1. 1-d merged spectrum (panel labels: ‘Merged non-normalized spectrum’, ‘Merged normalization functions’). The drops (and spikes) in the red part of the spectrum are due to non-overlapping apertures and non-optimal normalization. Other spikes between 4500 and 6000 A are artefacts of the cosmic-ray removal. The spike at ~4300 A is due to bad normalization.
35 Final corrections
● The drops between the apertures and other artefacts can be removed manually, and/or with the help of the task imreplace, and/or by using some external script.
● The exposure time of the merged spectrum needs to be adjusted back (the resulting exposure time is nrOfApertures × originalExposureTime). In addition, it is good to add the time of mid-exposure calculated from UT and darktime. Darktime gives a better idea of the real length of the exposure — if there is a break in the exposure, darktime gives the total time of the exposure.
Fig. 1. The final 1-d spectrum corrected for the features shown on the previous slide. Note: If features similar to the one at ~4300 A appear in the spectrum, the normalization needs to be redone in a proper way. Here it is only an example and is left in intentionally.
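The four merging steps (a-d) above can be illustrated with plain numpy arrays. This is a toy two-aperture example — the real data come from the Iraf files, and the apertures are assumed to be already resampled onto one common wavelength grid, with zeros where an aperture does not reach:

```python
import numpy as np

# toy example: two overlapping apertures on one common grid
grid = np.linspace(5000.0, 5010.0, 101)
ap1 = np.where(grid <= 5006, 800.0, 0.0)    # "blue" aperture, raw counts
ap2 = np.where(grid >= 5004, 600.0, 0.0)    # "red" aperture
ap1[30] = 400.0                             # an absorption line in ap1

# a. the normalization (continuum) function of each aperture --
#    here simply posited as flat continua for illustration
cont1 = np.where(grid <= 5006, 800.0, 0.0)
cont2 = np.where(grid >= 5004, 600.0, 0.0)

# b. merge the non-normalized spectra by summing the apertures
merged_flux = ap1 + ap2
# c. merge the normalization functions the same way
merged_cont = cont1 + cont2
# d. divide the two merged arrays to get the normalized 1-d spectrum;
#    the errstate guard covers regions reached by no aperture (0/0)
with np.errstate(invalid="ignore", divide="ignore"):
    merged_norm = merged_flux / merged_cont
```

Dividing only after summing (rather than averaging already-normalized orders) is one reading of why step d comes last: in the overlap regions the fluxes and continua are combined with the same weights, so the ratio stays consistent.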
36 Semi-automatization
● This is an example of the commands that can be used in .cl scripts (it uses the naming that we adopted).
● In combination with bash scripts, the whole data-reduction process for data observed in a single night takes from a few to a few tens of minutes, depending on whether the templates are already prepared from the past.
● For the preparatory and final works (e.g. the preparation of the lists) and the manipulation of the FITS headers, we use external bash and python scripts.
# ==============================================================================================================================
# Bad-pixel correction, cosmic removal, trimming of the frames, making of the master bias (mbias) and master flat frames (mflat)
# ==============================================================================================================================
fixpix images=@All_frames.list masks=badpixmask                        # bad-pixel correction
cosmicrays input=@frames.list output=@frames.list                      # cosmics removal
imcopy *fit[5:2039,500:1749] *fit                                      # trimming of the frames
zerocombine input=@bias.list output=mbias                              # making of the master bias
imarith operand1=@flat.list op=- operand2=mbias result=@b_flat.list    # subtraction of the bias from the flat-field frames
flatcombine input=@b_flat.list output=mflat                            # making of the master flat
# =============================
# making of the normalized flat
# =============================
apflatten input=mflat output=nflat referen=Aper_Template interac=yes find=no recente=yes resize=no edit=yes trace=no fittrace=no flatten=yes fitspec=yes
# ====================================
# mbias subtraction, division by nflat
# ====================================
imarith operand1=@comp.list op=- operand2=mbias result=@bf_comp.list
imarith operand1=@bf_comp.list op=/ operand2=nflat result=@bf_comp.list
imarith operand1=@obj.list op=- operand2=mbias result=@bf_obj.list
imarith operand1=@bf_obj.list op=/ operand2=nflat result=@bf_obj.list
# ================================================
# extraction of the apertures (scientific frames)
# ================================================
apall input=@bf_obj.list referen=Aper_Template interac=yes find=no recente=yes resize=no edit=yes trace=no fittrace=no extract=yes extras=no review=no
# ======= extraction of the ThAr spectrum =======
apall input=bf_cyyyymmdd00xx output=xxcomp.ec referen=bf_cyyyymmdd00xx interac=no find=no recente=no resize=no edit=no trace=no fittrace=no extract=yes extras=no review=no    # needs to be done for all scientific frames
# ====== ecreidentification ========
ecreidentify images=@comp.ec.list referenc=ThAr_Template refit=yes
# ====== refspectra ================
refspectra input=@obj.ec.list referen=@comp.ec.list override=yes
# ======== dispcor =================
dispcor input=@obj.ec.list output=@obj.ecd.list
# ======== continuum fitting ========
continuum input=@obj.ecd.list output=@cont.list interac=yes naverag=10 functio=chebyshev order=5 niterat=10
# ======== merging the orders to a 1-d spectrum ==============
sarith @obj.ecd.list / @cont.list @normf.list
scombine @obj.ecd.list output=@ec1d.list group=images combine=sum
scombine @normf.list output=@normf1d.list group=images combine=sum
sarith @ec1d.list / @normf1d.list @cont1d.list
37
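The ‘preparation of the lists’ mentioned above is handled by external bash scripts that are not reproduced in this document. A minimal fragment of that kind might look like the following sketch, which builds the @-lists consumed by the .cl script; all file names, globs, and prefixes here are illustrative, not the group's actual conventions:

```shell
# work in a scratch folder so the fragment is self-contained;
# in real use one would cd into the folder with the night's frames
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch bias0001.fit bias0002.fit flat0001.fit c2023052500.fit obj0001.fit

ls *.fit      > All_frames.list   # every frame of the night
ls bias*.fit  > bias.list
ls flat*.fit  > flat.list
ls c2*.fit    > comp.list         # ThAr comparison frames
# derived list names: prepend the processing prefix used by the .cl script
sed 's/^/b_/'  flat.list > b_flat.list    # bias-subtracted flats
sed 's/^/bf_/' comp.list > bf_comp.list   # bias- and flat-corrected comps
```

Because imarith and the other tasks take @-lists for both input and output, keeping the prefixes consistent between the bash helper and the .cl script is what makes the two halves compose.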