Version 1 of the HSC contains members of the WFPC2, ACS/WFC, WFC3/UVIS, and WFC3/IR Source Extractor source lists from HLA version DR8 that were public as of June 1, 2014. Data from the ACS/HRC (High Resolution Camera) are NOT included in the HSC at the current time. In addition, approximately 35% of the ACS/WFC, WFPC2, and WFC3 images in HLA DR8 are not included in the HSC due to image quality problems and other issues.
We have developed better methods for generating source lists that were applied to the WFC3 images in this release. We will apply those methods to ACS/WFC and WFPC2 data in the future.
The primary new items for Version 1 (superseding Beta 0.3) are:
2. Coverage can be very non-uniform (unlike surveys like SDSS), since pointed observations from a wide range of HST instruments, filters, and exposure times have been combined. However, with proper selection of various parameters (e.g., NumImages included in a match), this non-uniformity can be minimized in most cases. In the example below, the first image (with NumImages > 10) shows a very nonuniform catalog, while the second image (with NumImages > 3) is much better.
3. WFPC2 and ACS source lists are of poorer quality than WFC3 source lists. As we have gained experience, the HLA source lists have improved. For example, many of the earlier limitations (e.g., depth, difficulty finding sources in regions of high background, edge effects, ...) have been fixed in the WFC3 source lists. These improved algorithms will be applied to the WFPC2 and ACS/WFC data and incorporated into a future release of the HSC.
4. The default is to show all HSC objects in the catalog. This may include a number of artifacts. You can request NumImages > 1 (or more) to filter out many of the artifacts. In the example below we show part of the Hubble Deep Field (HDF). With NumImages > 0 on the left, a number of the points look questionable (e.g., several of the circles to the right of the red cross have NumImages = 1 or 2, while even the faint sources have NumImages in the range 10 - 20 for the HDF).
5. The default is to use MagAper2 (aperture magnitudes), generated using the Source Extractor (Bertin & Arnouts 1996) software. If required for your science needs, you will need to add aperture corrections (see the Aperture Corrections Table) to estimate total magnitudes for point sources. You can also request MagAuto values (e.g., from the HSC Search Forms or HSC CasJobs) if you would like to use the Source Extractor algorithm for estimating the total magnitude, especially for extended sources.
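As a sketch of the arithmetic involved, the example below applies an aperture correction to a MagAper2 value to estimate a total magnitude. Both numbers are hypothetical; real corrections must be taken from the Aperture Corrections Table for the relevant instrument and filter combination.

```python
# Sketch of estimating a total magnitude from MagAper2. The values used
# here are HYPOTHETICAL; real corrections come from the Aperture
# Corrections Table for the relevant instrument/filter combination.

def total_magnitude(mag_aper2, aperture_correction):
    """Total magnitude = aperture magnitude + aperture correction.
    Corrections are negative, since the nominal 'total' aperture
    captures more flux than the MagAper2 aperture."""
    return mag_aper2 + aperture_correction

mag_aper2 = 21.50   # hypothetical MagAper2 value from the HSC
ap_corr = -0.25     # hypothetical correction from the table
print(total_magnitude(mag_aper2, ap_corr))  # 21.25
```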
All HLA (and hence HSC) magnitudes are in the ABMAG system. Here is a discussion of the ABMAG, VEGAMAG and STMAG systems. A handy, though not exact, conversion for ACS is provided in Sirianni et al. (2005). The Synphot package provides a more generic conversion mechanism for all HST instruments.
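For reference, the STMAG-to-ABMAG conversion follows directly from the zero-point definitions of the two systems: ABMAG = STMAG - 5 log10(pivot wavelength in Angstroms) + 18.692. The sketch below implements this standard relation; the example pivot wavelength is approximate and for illustration only (consult Synphot for exact instrument values).

```python
import math

def stmag_to_abmag(stmag, pivot_wavelength_angstrom):
    """Convert STMAG to ABMAG using the filter pivot wavelength (in
    Angstroms). Follows from Fnu = Flambda * lambda**2 / c together
    with the standard zero points (STMAG: -21.10, ABMAG: -48.60)."""
    return stmag - 5.0 * math.log10(pivot_wavelength_angstrom) + 18.692

# For a pivot wavelength of ~5921 Angstroms (roughly ACS F606W), the
# AB-minus-ST offset is about -0.17 mag:
offset = stmag_to_abmag(20.0, 5921.0) - 20.0
print(round(offset, 3))
```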
Known Problem # 1
- PC and WFC objects are combined for the WFPC2
Aperture magnitudes for objects on the PC (Planetary Camera) portion of the WFPC2 differ (typically by 0.2 to 0.3 magnitudes) from the same object measured on the WF (Wide Field) portion of the WFPC2, resulting in additional scatter for the WFPC2 data. There are several reasons for this: 1) the pixel scale on the PC is half that of the WF, and while resampling adjusts for this to some degree, it does not fully compensate; 2) Charge Transfer Efficiency (CTE) loss is much higher on the PC due to the lower flux in the smaller pixels and the lower background. In the future the two chips will be separated in the HSC. In the meantime, a CasJobs use case will be developed to instruct users how they can do this themselves if required.
Known Problem # 2
- Inaccurate photometry for ACS images with a wide range of exposure times (e.g., 0.5 and 500 seconds).
Combined ACS images with wide ranges in exposure times have problems for pixels that are saturated, due to improper cosmic ray rejection. This is generally obvious from a search for large values of Sigma for the magnitudes in the HSC Summary Search Form. The figure on the right (from Whitmore et al. 2015) shows an example of this problem for the two panels where A_F606W measurements are involved in a comparison of MagAper2 values from the HSC in the globular cluster M4. We are currently investigating whether there are similar problems for any WFC3 or WFPC2 measurements, but our current expectation is that these instruments are not affected. The problem will be corrected in the future.
Known Problem # 3
- AbsCor may be incorrect near the poles.
The Absolute Correction flag (i.e., AbsCor = Y/N) indicates whether the astrometric position of a particular source has been corrected to match the Pan-STARRS astrometric "backbone". There is an error in the Version 1.0 algorithm that only becomes important near the poles, resulting in unreliable determinations. This affects less than 1% of the data, and will be corrected in the future.
Known Problem # 4
- The Concentration Index (CI) is measured with different apertures for different instruments.
This results in larger scatter and (typically) small changes in the normalization when determining the mean values of CI for the HSC Summary Search Form. While CI values can still be used to help separate stars and extended sources, care must be taken in determining the best values since they may change for different datasets.
Known Problem # 5
- CTE loss corrections are made for WFPC2 but not for ACS or WFC3/UVIS.
Corrections for Charge Transfer Efficiency (CTE) loss are made for WFPC2 based on the Dolphin (2009) formula. Similar corrections have NOT been made in the HSC for the other instruments. However, a pixel-to-pixel approach to CTE correction has been developed in the past few years by Anderson & Bedin (2010) and has been incorporated into the HST calibration pipeline for ACS. It will be incorporated into the WFC3/UVIS pipeline in the near future. Future versions of the HSC will use these new CTE-corrected images for ACS and WFC3/UVIS.
Known Problem # 6
- Slightly different flag limits for inclusion of WFC3 sources compared to WFPC2 and ACS sources.
By design, only valid detections are included in the HSC (i.e., data with quality flag values less than or equal to 5; see HLA Source List FAQ for a definition of the flagging system for HLA source lists). However, there is a slight difference between the limits set for WFC3 (i.e., <=5) compared to WFPC2 and ACS (i.e., <5). This will be fixed in the next version of the HSC. Less than 1% of the WFC3 sources will be affected.
The Hubble Source Catalog (HSC) is based on HLA Source Extractor (Bertin & Arnouts 1996) source lists. To build these source lists, the HLA first constructs a "white light" or "detection" image by combining the different filter observations within each visit for each detector. This filter-combined drizzled image provides added depth. Source Extractor is run on the white light detection image to identify each source and determine its position.
Next, the combined drizzled image for each filter used in the detection image is checked for sources at the positions indicated by the finding algorithm in Source Extractor. If a valid source (flag values less than 5) is detected at a given position then its properties are entered into the HLA source list appropriate for the visit, detector, and filter. (See HLA Source List FAQ for a definition of the flagging system for HLA source lists - also see Known Problem # 6 above for a caveat). These are defined as level 0 detections, and are reported in the HSC Detailed Search Form to have a value of Det = Y.
Sources that are found in the white light detection image, but not in a particular filter used to make the white light image, are regarded as "filter-based nondetections". These can be examined by asking for level = 1 under the Detection Options on the HSC Detailed Search form.
It is also possible to use the Detection Option in the HSC Detailed Search Form to retrieve Level = 2. This includes detections, filter-based nondetections, and visit-based nondetections. Visit-based nondetections are cases where an image overlaps with the specified positional search constraints, but no sources are detected there. Visit-level nondetections have a Det value of N and no assigned MatchID value when viewing the HSC Detailed Search Form.
Note that corrections for Charge Transfer Efficiency (CTE) problems have been made to the WFPC2 HLA source lists, but not the ACS nor WFC3/UVIS source lists.
The source detections (and nondetections) that correspond to the same physical object (as determined by the algorithms defined in Budavari & Lubow 2012) are given a unique MatchID number and an associated match position (MatchRA, MatchDec). Each member of the match, including nondetections, also has an assigned MemID value and a source position (SourceRA, SourceDec). As part of the matching process, astrometric corrections are made to overlapping images. Each source detection and nondetection has a separation distance, D (small d in the plots below), from the match position.
The two plots show (in blue) the distribution of the relative astrometric errors in the HSC corrected astrometry, as measured by the positional offsets of the sources contributing to multi-visit matches from their match positions. Plotted in orange are the corresponding distributions of astrometric errors based on the original HST image astrometry. The areas under the blue and orange curves in the left plot should be the same, but are not, due to the long tail in the orange curve that extends beyond the limits of the plot. The left plot is on a much smaller distance scale than the right plot. Furthermore, the right plot has a logarithmic vertical scale. The peak (mode) of the HSC corrected astrometric error distribution is 2.3 mas, while the median offset is 8.0 mas. The original (current HST) astrometry has corresponding values of 9.3 mas and 68 mas, respectively. To summarize, the relative astrometric error distribution in the original HST images has a long tail that has been greatly reduced by the HSC corrections.
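For small offsets like those plotted, the separation distance D of a source from its match position can be approximated with a flat-sky calculation. The sketch below illustrates the geometry, including the cos(Dec) scaling of the RA difference; it is a generic approximation, not the HSC matching pipeline's actual implementation.

```python
import math

def separation_mas(source_ra, source_dec, match_ra, match_dec):
    """Flat-sky separation (in milliarcseconds) of a source from its
    match position; all coordinates in decimal degrees. The RA
    difference is scaled by cos(Dec) to convert it to a true angular
    offset."""
    dra = (source_ra - match_ra) * math.cos(math.radians(match_dec))
    ddec = source_dec - match_dec
    return math.hypot(dra, ddec) * 3600.0 * 1000.0  # degrees -> mas

# A source offset from its match position by 1e-6 deg in Dec alone
# lies 3.6 mas away:
print(round(separation_mas(10.0, 41.000001, 10.0, 41.0), 2))
```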
Although the primary way to access the HSC is through the MAST Discovery Portal, as discussed in the next section, we begin with a look at the HSC Detailed and Summary Search Forms, since it helps clarify what we mean by a Match in the HSC.
Here is a portion of some output from the HSC Detailed Form showing the matches for MatchID 22214697. Each source detection (i.e., SourceID) and nondetection (Det = N) has some separation distance, D, from the match position. The Dsigma value is the standard deviation of the D values, and is also reported in the HSC Summary Search Form.
Many of the detections in the Detailed Search Form are in matches that involve a single visit and detector. These cases have D=0 and Dsigma=0.
The HSC Summary Search Form includes results for all detections for a given match on a single row. The magnitudes for different visits are averaged together, along with a variety of other information.
Example Output - Left portion:
Example Output - Right portion:
More information about the individual detections that went into a given match can be obtained by searching the Detailed Search Form for the appropriate MatchID value by clicking on the blue MatchID value in the first column of the Summary Search Form.
For a short description of each column, click on "Field Descriptions" in the upper right of the HSC Summary Search Form. Slightly more detailed descriptions for some of the key columns (not already discussed for the Detailed Search Form above) are included below. For even more detailed information the HLA FAQ - About Source Lists is available.
NumImages = Number of separate images in a match. Often used to filter out artifacts (e.g., NumImages > 1 will remove most cosmic rays).
AbsCor = Whether it was possible to correct the absolute astrometry (Y/N) by matching with Pan-STARRS, SDSS, or 2MASS.
Start MJD = Modified Julian Date (MJD) for the earliest image in the match.
CI (Concentration Index) = Difference in aperture magnitudes using the small aperture (i.e., MagAper1) and large aperture (MagAper2). The aperture radii are: 0.05 and 0.15 arcsec for ACS/WFC and WFC3/UVIS, 0.10 and 0.30 arcsec for WFPC2, 0.15 and 0.45 arcsec for WFC3/IR. CI can often be used to help separate stars from extended sources. However, see the Known Problems FAQ for a caveat.
CI_Sigma = Standard Deviation in the CI values for this MatchID.
KronRadius = Kron Radius in arcsec from the Source Extractor algorithm. IMPORTANT NOTE: The minimum value (i.e., for point sources) is 3.5 pixels, which translates to 0.14 arcsec for WFC3/UVIS, 0.175 arcsec for ACS/WFC, 0.315 arcsec for WFC3/IR, and 0.35 arcsec for WFPC2. Watch for a pileup of measurements at these values that are due to this limit rather than reflecting a real peak.
KronRadius_Sigma = Standard Deviation in the KronRadius values for this MatchID.
Extinction = E(B-V) from Schlegel et al. (1998)
Instrument_Filter = Mean magnitude for all detections in the match, grouped by instrument (A = ACS, W2 = WFPC2, W3 = WFC3) and filter combination. The order is from short to long wavelengths. The values are in the ABmag system. The default is to provide MagAper2 values (i.e., aperture photometry using the large aperture - see CI description above), but you can also request MagAuto (using the Source Extractor algorithm) values if you are more interested in extended sources.
Instrument_Filter_Sigma = Standard Deviation around mean magnitude for this Instrument_Filter combination.
Instrument_Filter_N = Number of measurements that went into the determination of mean magnitude for this Instrument_Filter combination.
Ang Sep(') = Angular Separation for this match from the position used to make the search.
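As an illustration of how the CI column can be used, the sketch below computes CI from the two aperture magnitudes and applies a simple star/extended cut. The magnitudes and the threshold value are hypothetical; as noted in the Known Problems FAQ, the best dividing value varies between instruments and datasets.

```python
def concentration_index(mag_aper1, mag_aper2):
    """CI = MagAper1 - MagAper2 (small-aperture minus large-aperture
    magnitude). Point sources concentrate their light, so they have
    smaller CI values than extended sources."""
    return mag_aper1 - mag_aper2

def is_point_like(ci, threshold=1.3):
    # The threshold is purely illustrative; the best dividing value
    # varies between instruments and datasets and should be checked
    # per dataset (see the Known Problems FAQ).
    return ci < threshold

ci = concentration_index(22.4, 21.3)  # hypothetical aperture magnitudes
print(round(ci, 2), is_point_like(ci))
```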
The Discovery Portal was designed to provide "one-stop" web access to the MAST missions (e.g., HST, Kepler, GALEX, FUSE, IUE, EUVE, Swift, XMM, ...), and the even broader world of the Virtual Observatory (VO). Here is a general description of the Discovery Portal and here is the HELP page. With the Version 1 release, the Discovery Portal also becomes the primary way to access the HSC. The tools include:
Question: I looked at the HLA Interactive Display and saw that there were 30,000 objects in the HSC for a specific image. Why does the Discovery Portal (DP) query only return 10,000?
Answer: 10,000 is the maximum number of records currently supported for query results in the Discovery Portal. Larger queries can be handled in the HSC CasJobs tool.
Question: Can I import my own catalog of objects into the DP?
Answer: Yes. Under the Select Collection box, click on the icon (Import File).
Question: Where is the plot icon you mention in the use case? I don't see it on the screen.
Answer: Depending on the size of the box with the target position (entering a target position generally makes the box big while entering a target name does not), the AstroView window can COVER UP the icon. By grabbing the side of the AstroView window and dragging to the right, you can shrink the window and reveal the row of icons.
Question: Can I overlay the HSC over HST images?
Answer: Yes. After doing your query, click on the icon under Actions (Load Detailed Results). This will show you the details for a particular MatchID, including cutouts for all the HST images that went into this match. If you then click on the third icon under Actions (Toggle Overlay Image) (blue circle in the image - see Use Case # 1), the HST image will be displayed over the DSS image in AstroView.
Question: Can I bring up the HLA interactive viewer from the DP?
Answer: Yes. After doing your query, click on the first icon under Actions (Load Detailed Results). This will show you the details for a particular MatchID, including cutouts for all the HST images that went into this match. If you click on the cutout (green circle in the image - see Use Case # 1) , the HLA interactive viewer will come up.
Question: Can I change the scaling in the AstroView window?
Answer: Not at this time.
Question: Can I center the field on a selected target?
Answer: Yes, click on the bulls-eye icon for the target of interest.
The primary purpose of the HSC Catalog Archive Server Jobs System (CasJobs) is to permit large queries, phrased in the Structured Query Language (SQL), to be run in a batch queue. CasJobs was originally developed by the Johns Hopkins University/Sloan Digital Sky Survey (JHU/SDSS) team. With their permission, MAST has used version 3.5.16 of CasJobs to construct three CasJobs-based tools for GALEX, Kepler, and the HSC.
While HSC CasJobs does not share the MAST Discovery Portal's limitation of returning only a small subsample of the HSC (i.e., 10,000 objects), it also does not have the wide variety of graphic tools available in the Discovery Portal. Hence the two systems are complementary.
This figure provides a demonstration of the speed and power of the HSC CasJobs interface. Starting from scratch, imagine how long it would take to construct a color-magnitude diagram for all Hubble observations of the Small Magellanic Cloud (SMC). A search of the HLA shows 7,289 observations in this region, 1,366 of them with ACS. With HSC CasJobs, a color-magnitude diagram can be made for the ACS data in less than two minutes.
Casjobs also provides a personal database (i.e. myDB) facility where you can store output from your queries and save stored procedures and functions. This powerful aspect of Casjobs can also be used as a group sharing facility with your collaborators.
Question: Why does my query give me an error saying that the function I was using (e.g., SearchSumCatalog) is an invalid object?
Answer: This error usually indicates that the context is set incorrectly. Check that the context says HSC and not MyDB (i.e., the blue oval under Context in the image - see Use Case #2 for more details). This is probably the MOST FREQUENT PROBLEM people have with HSC CasJobs.
Question: I created an output catalog a while ago, and when I go to the Output tab it is no longer there. What happened to it?
Answer: Query output has a lifetime of one week.
Question: When I tried to plot my table, I got an error message saying that the input string was not in a correct format. What is wrong?
Answer: If there are any entries that have non-numeric values (such as NaN, indicating no data), the plotting tool cannot handle them. The solution is to restrict your queries to real numbers by adding a condition such as A_F606W > 0, which will only include sources with measured values.
Question: What does 'Query results exceed memory limitations' mean?
Answer: This means the result of the query which you've submitted is greater than the memory buffer will allow. This message only applies to 'quick' queries; queries using 'submit' do not have any memory restrictions. The easiest thing to do is just use submit instead of quick.
Question: How do I see all the searches I have done?
Answer: The History page will show all the queries you have done, as well as summary information (submit date, returned rows, status). If you click on Info, you can see the exact query that executed, and resubmit the job if you like.
Question: How do I see all the results I have generated?
Answer: The MyDB page shows you all the tables you have generated. Clicking on the table name will show you the details of the table.
Question: How do I see all the available HSC tables, functions, and procedures?
Answer: Click on the MyDB at the top of the page, and then set the context to HSC. Clicking on the Views (currently empty), Tables, Functions, and Procedures will list the available material. Clicking on specific tables or functions will provide more information. (NOTE: When looking at functions, the default is to show the source code. To see the description instead, click on "Notes" near the top of the page.)
Question: Where are some example queries to run?
Answer: Several of the HSC Use Cases have examples (e.g., #2 and #5). Another good place to look is by clicking the Samples button after you hit the Query button (e.g., provides examples of cross matching and making histograms). There are also pointers to SDSS training materials on the left of the HSC HELP page.
Question: Where can I find the schema information for various HSC tables, functions and procedures (i.e., similar to the SDSS SkyServer Schema Browser)?
Answer: To get the schema for different HSC tables, function and procedures, first go to the top of the page and click on MyDB to bring up the database page. Next go to the drop down menu in the upper left and select HSC as the "context". Now click on one of the tables to see its schema information. To view schema for views, functions and procedures, click on the appropriate link below the context menu.
Question: Where can I find definitions of the columns in the HSC Summary or Detailed Search Forms?
Answer: For a short description of each column, click on "Field Descriptions" in the upper right of the Search Form. Slightly more detailed descriptions for some of the key columns are included here. For even more detailed information the HLA Source List FAQ is available.
Question: Can a list of targets be used for an HSC search?
Answer: Yes - A list of targets can be searched using the "File Upload Form" button, which is located near the top right of either the HSC Summary or Detailed Search forms. The "Local File Name" box (or "Browse" pull-down menu) allows you to provide the name of a file listing the targets you would like to include in the search. A number of different format options for the input file are allowed, as defined in the target field definition portion of the form.
The Hubble Source Catalog is composed of visit-based, general-purpose source lists from the Hubble Legacy Archive (hla.stsci.edu). While the catalog may be sufficient to accomplish the science goals for certain projects, in other cases astronomers may need to make their own catalogs to achieve the optimal performance that is possible with the data (e.g., to go deeper). In addition, the Hubble observations are inherently different from large-field surveys such as SDSS, due to the pointed, small field-of-view nature of the observations, and the wide range of instruments, filters, and detectors. Here are some of the primary limitations that users should keep in mind when using the HSC.
Uniformity: Coverage can be very non-uniform (unlike surveys like SDSS), since a wide range of HST instruments, filters, and exposure times have been combined. We recommend that users pan out to see the full HSC field when using the Interactive Display in order to have a better feel for the uniformity of a particular dataset. Adjusting the value of NumImages used for the search can improve the uniformity in many cases. See image below for an example.
Astrometric Uniformity: About 85% of HSC images have coverage in Pan-STARRS, SDSS, or 2MASS that permits absolute astrometric corrections of about 68% of the images (i.e., AbsCor = Y) to a level of between 0.1 and 0.2 arcsec.
Depth: The HSC does not go as deep as is possible with the data. This is due to a number of different reasons, ranging from using an early version of the WFPC2 catalogs (see FAQ), to the use of visit-based source lists rather than a deep mosaic image where a large number of images have been added together.
Completeness: The current generation of HLA WFPC2 and ACS Source Extractor source lists have problems finding sources in regions with high background. The WFC3 sources lists are much better in this regard. The next generation of WFPC2 and ACS source lists, which will use the improved WFC3 algorithms, will be incorporated into the HSC in the future.
Visit-based Source Lists: The use of visit-based, rather than deeper mosaic-based, source lists introduces a number of limitations. In particular, the completeness limits are much brighter (shallower) than would otherwise be possible, as discussed in Use Case # 1. Another important limitation imposed by this approach is that different source lists are created for each visit, hence a single, unique source list is not used. A more efficient method would be to build a single, very deep mosaic from all existing HST observations and obtain a source list from this image. Measurements at each of these positions would then be made for all of the separate observations (i.e., "forced photometry"). This approach will be incorporated into the HSC in the future.
False Detections: Uncorrected cosmic rays are a common cause of spurious sources. Such artifacts can be removed by requiring that the detection be based on more than one image. This constraint can be enforced by requiring NumImages > 1.
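The NumImages cut is straightforward to apply to query results. The sketch below filters a few hypothetical HSC rows to keep only matches detected in more than one image.

```python
# Hypothetical HSC query rows; only NumImages matters for this sketch.
rows = [
    {"MatchID": 1, "NumImages": 1},  # single image: possibly a cosmic ray
    {"MatchID": 2, "NumImages": 7},
    {"MatchID": 3, "NumImages": 2},
]

# Keep only matches detected in more than one image.
filtered = [r for r in rows if r["NumImages"] > 1]
print([r["MatchID"] for r in filtered])  # [2, 3]
```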
Another common cause of "false detections" is the attempt by the detection software to find large, diffuse sources. In some cases this is due to the algorithm being too aggressive when looking for these objects and finding noise. In other cases the objects are real, but not obvious unless observed with the right contrast stretch and field-of-view. It is not easy to filter out these potential artifacts without losing real objects. One technique users might try is to use a size criterion (e.g., concentration index = CI) to distinguish real vs. false sources.
The HLA pipeline is designed to correct the inherent ~2 arcsec astrometric uncertainty (due to uncertainties in guide star positions) by comparing with Pan-STARRS, SDSS, or 2MASS positions. While this works in the vast majority of cases, it does not work for a few percent of the data for a variety of reasons (e.g., very crowded fields, high backgrounds, few or no sources in a field). The result is often a doubling (or in some cases tripling) of sources, as shown in the upper right for the image on the left.
Most of the doubling has been fixed for the Version 1 release, but users should still be on the lookout.
As always, users should overplot the HSC on an HST image to look for artifacts, such as doubling.
Beta 0.2 version of HSC | Version 1 of HSC
Mismatched Sources: The HSC matching algorithm uses a friends-of-friends algorithm, together with a Bayesian method to break up long chains (see Budavari & Lubow 2012) to match detections from different images. In some cases the algorithm has been too aggressive and two very close, but physically separate objects, have been matched together. This is rare, however.
Bad Images: Images taken when Hubble has lost lock on guide stars (generally after an earth occultation) are the primary cause of bad images. We attempt to remove these images from the HLA, but occasionally a bad image is missed and a corresponding bad source list is generated. A document showing these and other examples of potential bad images can be found at HLA Images FAQ. If you come across what you believe is a bad image please inform us at email@example.com
Yes - HLA Images FAQ. Here is a figure from the document showing a variety of artifacts associated with very bright objects.
Due to the diversity of the Hubble data, this is a hard question to answer. We have taken a three-pronged approach to address it. We first examine a few specific datasets, comparing magnitudes directly for repeat measurements. The second approach is to compare repeat measurements in the full database. While this provides a better representation of the entire dataset, it can also be misleading since the tails of the distributions are generally caused by a small number of bad images and bad source lists. The third approach is to produce a few well-known astronomical figures (e.g., color-magnitude diagram for the outer disk of M31 from Brown et al 2009) based on HSC data, and compare them with the original study. See Whitmore et al. (2015) for a more detailed treatment of these and other cases.
For our first case we examine the repeat measurements in the globular cluster M4. For this study, as well as the next two, we use MagAper2 values (i.e., aperture magnitudes), which are the default for the HSC. In the last example (extended galaxies) we use MagAuto values.
The figure shows that in general there is good one-to-one agreement for repeat measurements using different instruments with similar filters. Starting with the best case, A-F814W vs. W3-F814W shows excellent results, with a slope near unity, RMS values around 0.04 magnitudes, and essentially no outliers. However, an examination of the W2-F814W vs. W3-F814W and A-F814W vs. W2-F814W comparisons shows that there is an issue with a small fraction of the WFPC2 data. The short curved lines deviating from the 1-to-1 relation show evidence of the inclusion of a relatively small number of slightly saturated star measurements (i.e., roughly 5% of the data). See Whitmore et al. (2015) for more details.
We now turn to our second approach: looking at repeat measurements for the entire HSC database. The following figure shows the distribution of comparisons between independent photometric measurements of pairs of sources that belong to the same match and have the same filter in the HSC for Version 1.0. The x-axis is the flux difference ratio, defined as abs(flux1-flux2)/max(flux1,flux2). The y-axis is the number of sources per bin (whose size is a flux difference ratio of 0.0025), normalized to unity at a flux difference of zero. The main point of this figure is to demonstrate that typical photometric uncertainties in the HSC are better than 0.10 magnitude for the majority of the data.
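The flux difference ratio plotted on the x-axis is simple to compute; the sketch below evaluates it for a pair of hypothetical repeat measurements.

```python
def flux_difference_ratio(flux1, flux2):
    """abs(flux1 - flux2) / max(flux1, flux2): the x-axis quantity of
    the repeat-measurement comparison described above."""
    return abs(flux1 - flux2) / max(flux1, flux2)

# Two hypothetical repeat measurements agreeing to 5%:
print(flux_difference_ratio(100.0, 95.0))  # 0.05
```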
We now turn to comparisons with other studies. The case shown below is a comparison between the HSC and the Brown et al. (2009) deep ACS/WFC observations of the outer disk of M31 (proposal = 10265). The observing plan for this proposal resulted in 60 separate one-orbit visits (not typical of most HST observations), hence providing an excellent opportunity for determining the internal uncertainties by examining repeat measurements. In the range of overlap, the agreement is quite good, with zeropoint differences less than 0.02 magnitudes (after corrections from ABMAG to STMAG and from aperture to total magnitudes) and mean values of the scatter around 0.05 mag. However, the Brown study goes roughly 3 magnitudes deeper, since they work with an image made by combining all 60 visits. More details are available in HSC Use Case #1, and in Whitmore et al. (2015).
For our final photometric quality example we compare the HSC with ground-based observations from the Sloan Digital Sky Survey (SDSS) of galaxies in the Hubble Deep Field. Using MagAuto (extended object photometry) values in this case rather than MagAper2 (aperture magnitudes), we find generally good agreement with SDSS measurements. The scatter is typically a few tenths of a magnitude; the offsets are roughly the same and reflect the differences in photometric systems, since no transformations have been made for these comparisons. The best comparison is between A_F814W and SDSS-i. This reflects the fact that these two photometric systems are very similar, hence the transformation is nearly 1-to-1.
This plot below shows (in blue) the distribution of the relative astrometric errors in the HSC corrected astrometry, as measured by the positional offsets of the sources contributing to multi-visit matches from their match positions. The units for the x-axis are milli-arcsec (mas). The y-axis is the number of sources per bin that is 0.1 mas in width. Plotted in orange is the corresponding distributions of astrometric errors based on the original HST image astrometry. The peak (mode) of the HSC corrected astrometric error distribution is 2.3 mas, while the median offset is 8.0 mas. The original (current HST) astrometry has corresponding values of 9.3 mas and 68 mas, respectively.
The following figure shows the corrected astrometric offset distribution for the different instruments for multi-visit matches in the HSC for Version 1.0.
As expected, the instruments with smaller pixels (ACS [50 mas] and WFC3/UVIS [40 mas]) show the best astrometric accuracy, with peaks below a few mas and typical values below 10 mas. On the other hand, the instruments with larger pixels (WFPC2 [100 mas on the WFC chips, which dominate the statistics] and WFC3/IR [130 mas]) have larger astrometric uncertainties, with peaks below 10 mas and typical values below 20 mas.
Yes. There are two videos available:
Using the Discovery Portal to search for Variable Objects in the HSC
Using CasJobs with the HSC
There is also a Hubble Hangout that features the HSC.
Yes. We have a variety of Use Cases:
HSC Use Case #1 - Using the Discovery Portal to Query the HSC - (Stellar Photometry in M31 - Brown et al. 2009)
HSC Use Case #2 - Using CASJOBS to Query the HSC - (Globular Clusters in M87 and a Color Magnitude Diagram for the SMC)
HSC Use Case #3 - Using the Discovery Portal to search for Variable Objects in the HSC - (Time Variability in the dwarf irregular galaxy IC 1613)
HSC Use Case #4 - Using the Discovery Portal to perform cross-matching between an input catalog and the HSC - (Search for the Supernova 2005cs progenitor in the galaxy M51)
HSC Use Case #5 - Using the Discovery Portal and CasJobs to search for Outlier Objects in the HSC - (White dwarfs in the Globular Cluster M4)
HSC Use Case #6 - Using the Discovery Portal to study the Red Sequence in a Galaxy Cluster - (The Red Sequence in the Galaxy Cluster Abell 2390)
Yes. HSC Facebook Page
Yes. An article, "Version 1 of the Hubble Source Catalog" (Whitmore et al. 2015), was submitted to the Astronomical Journal.
An updated description of the Beta version of the HSC, and of the matching algorithms used in Version 1, is given in Budavari & Lubow (2012).
Send a note to firstname.lastname@example.org. Please include enough information (e.g., a screen save of the problem) to make it possible to diagnose any problems.