August 1, 2001 minutes

We reviewed the steps that led to our initial list and talked about how we ranked things.  It was mentioned again that this should be a two-step process of ranking from a scientific-usefulness point of view and from a feasibility point of view, and that the two should in the end be folded together so that we get the most functionality up front rather than getting bogged down in one or two very difficult but high-payoff projects.

Prior to this analysis we talked about the possibility of making this "portal" concept extend beyond the browser and into the data reduction/analysis software.  It would be interesting to have data and services be truly distributed but controlled from, say, IRAF or PyRAF, so a user doesn't have to have the data locally (or even the services local) to manipulate it.  This is very NVO-oriented, but it would be an interesting concept to play with now to see whether it is a good thing to focus on in the future or not.
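
As a thought experiment, here is a minimal sketch of what that could look like from a Python/PyRAF session, assuming a hypothetical archive endpoint that serves FITS images over HTTP.  The URL and the fetch-then-operate pattern are illustrative only, not an agreed design:

    # Sketch: manipulate remote data from a Python session without the user
    # ever choosing where the data lives.  The archive URL is hypothetical.
    import tempfile
    import urllib.request

    ARCHIVE_URL = "http://archive.example.edu/cgi-bin/getimage?object=M51&band=V"

    def fetch_image(url):
        """Pull a remote image down to a scratch file and return its path."""
        tmp = tempfile.NamedTemporaryFile(suffix=".fits", delete=False)
        with urllib.request.urlopen(url) as resp:
            tmp.write(resp.read())
        tmp.close()
        return tmp.name

    local_path = fetch_image(ARCHIVE_URL)
    # ...hand local_path to IRAF/PyRAF tasks as if the data were local...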

We then started talking about implementations of the number-one items in the lists.

  1. Portal
    1. Should start with a Yahoo-like hierarchical categorization of the data services out there, so one can browse through a tree-like structure to find services.  This should be done by a human to add that value (a toy sketch of such a tree appears after this list).
    2. The services should also be searchable by keyword.
    3. Information maps would be nice, but should not be done until after items 1 and 2.
    4. Object-based searches would also be good (this kind of verges on "all data for one source"), as is done with NED (you can get all known photometry for a source).
    5. The portal should be able to provide a site-independent locator for a catalog so others can reference the dataset without being machine-dependent (say, a URL of astrocatalog:/YaleBSC2000&id=0433&vmag would point to the V magnitude of BSC 0433, and would also serve as a pointer to that star's entry or to the entire catalog, depending on how detailed the tag is; a parsing sketch appears after this list).
  2. Data flow from results of one search to the query in another
    1. The idea of marking certain fields and filling out others seemed to work.  This would be done with some clever JavaScript browser plugin that lets the user highlight the data to be used and hold on to it in a sort of super clipboard (where the different entries are separated by column or row); then, when a page has input fields that can be filled out, the user could identify which data should be pasted into which query fields (the clipboard structure is sketched after the list).
    2. It was mentioned that we could start by adding smart algorithms to identify coordinates in result pages and work with those (see the coordinate sketch after the list).
    3. Precession was mentioned as well, in that coordinates may need to be converted to a common equinox before a query can be done.
    4. The idea of the input and/or output being data files was also raised.  For example, one might want to use one image to register the coordinate system of another, so the query to the first archive might produce an image that is fed to the second to do the coordinate registration (sketched after the list).
  3. Instrument-independent data quality estimates
    1. This may already be being done by SHARE, but FASST understands it as numbers in the science catalog estimating limiting fluxes or the like.
  4. External catalog/data ingest and distribution
    1. Should be simple for the data provider to use (a possible ingest manifest is sketched after the list).
    2. Might need some humans behind the scenes creating the metadata.
    3. No initial need for a review process, but one could be implemented if need be.
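
To make item 1.1 concrete: the curated directory is essentially a tree whose leaves are service entries.  A toy Python sketch, with every category name and URL invented for illustration:

    # Sketch for item 1.1: a hand-curated, Yahoo-style directory of services.
    # Inner dicts are browsable categories; string leaves are service entries.
    directory = {
        "Catalogs": {
            "Optical": {
                "Yale Bright Star Catalog": "http://example.edu/ybsc",
            },
            "Radio": {},
        },
        "Image Archives": {
            "DSS": "http://example.edu/dss",
        },
    }

    def browse(tree, path=()):
        """Walk the tree depth-first, printing the path down to each service."""
        for name, node in sorted(tree.items()):
            if isinstance(node, dict):
                browse(node, path + (name,))
            else:
                print(" / ".join(path + (name,)), "->", node)

    browse(directory)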
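
For item 1.5, here is a minimal sketch of how a resolver might pull such a locator apart.  The scheme layout follows the example in the item; the function name and return shape are invented:

    # Sketch for item 1.5: split an astrocatalog locator into catalog,
    # entry id, and column.  Nothing here is a settled design.
    def parse_locator(locator):
        """Return (catalog, id, column); id/column are None when the locator
        points at a whole catalog or a whole entry."""
        scheme, _, rest = locator.partition(":/")
        if scheme != "astrocatalog":
            raise ValueError("not an astrocatalog locator: %s" % locator)
        parts = rest.split("&")
        catalog, entry_id, column = parts[0], None, None
        for tag in parts[1:]:
            if tag.startswith("id="):
                entry_id = tag[3:]
            else:
                column = tag              # a bare tag names a column, e.g. vmag
        return catalog, entry_id, column

    print(parse_locator("astrocatalog:/YaleBSC2000&id=0433&vmag"))
    # -> ('YaleBSC2000', '0433', 'vmag')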
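
For item 2.1 the real work would live in the browser plugin, but the shape of the super clipboard itself is easy to sketch.  A guess at the structure in Python; the names and methods are hypothetical, not a spec:

    # Sketch for item 2.1: a "super clipboard" that holds highlighted result
    # fields as labeled rows so they can be pasted into later query forms.
    class SuperClipboard:
        def __init__(self):
            self.rows = []                 # each row is a dict of column -> value

        def copy_row(self, **fields):
            """Grab one highlighted result row, keeping its column labels."""
            self.rows.append(dict(fields))

        def paste(self, column, row=0):
            """Fetch one held value for dropping into a query input field."""
            return self.rows[row][column]

    clip = SuperClipboard()
    clip.copy_row(name="BSC 0433", ra="01 23 45.6", dec="+12 34 56")
    # Later, on a different archive's query form:
    print(clip.paste("ra"), clip.paste("dec"))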
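
Items 2.2 and 2.3 go together: first recognize coordinates in a result page, then precess them to a common equinox before querying.  A rough Python sketch, assuming sexagesimal RA/Dec strings and the standard low-accuracy annual precession approximation (roughly m = 3.075 s/yr, n = 20.043 arcsec/yr); the loose regex and the whole sketch are illustrative, not production code:

    import math
    import re

    # Sketch for items 2.2/2.3: spot sexagesimal RA/Dec pairs in free text,
    # then precess them between equinoxes.
    COORD_RE = re.compile(
        r"(\d{1,2})[ :h](\d{2})[ :m]([\d.]+)s?\s+"      # RA  hh mm ss.s
        r"([+-]?\d{1,2})[ :d](\d{2})[ :m']([\d.]+)"     # Dec +dd mm ss.s
    )

    def find_coords(text):
        """Return (ra_deg, dec_deg) pairs found in a result page's text."""
        out = []
        for m in COORD_RE.finditer(text):
            h, mn, s, d, dm, ds = (float(g) for g in m.groups())
            ra = 15.0 * (h + mn / 60 + s / 3600)
            sign = -1.0 if m.group(4).startswith("-") else 1.0
            dec = sign * (abs(d) + dm / 60 + ds / 3600)
            out.append((ra, dec))
        return out

    def precess(ra, dec, years):
        """Annual precession, low accuracy; breaks down near the poles."""
        m_deg = 3.075 * 15 / 3600       # RA drift in degrees/yr
        n_deg = 20.043 / 3600           # declination drift in degrees/yr
        ra_r, dec_r = math.radians(ra), math.radians(dec)
        dra = m_deg + n_deg * math.sin(ra_r) * math.tan(dec_r)
        ddec = n_deg * math.cos(ra_r)
        return ra + dra * years, dec + ddec * years

    for ra, dec in find_coords("Source at 01 23 45.6 +12 34 56 (B1950)"):
        print(precess(ra, dec, 50.0))   # B1950 -> J2000, roughly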
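
For item 2.4 the flow is file-to-file rather than form filling.  A sketch of chaining two queries, where the image returned by the first archive is handed to the second for registration; both endpoints and all parameter names are invented, and a real registration service would presumably take a file upload rather than a filename parameter:

    # Sketch for item 2.4: the output file of one query feeds the next query.
    import urllib.parse
    import urllib.request

    def query_to_file(base_url, params, outfile):
        """Run one archive query and save the returned data file locally."""
        url = base_url + "?" + urllib.parse.urlencode(params)
        urllib.request.urlretrieve(url, outfile)
        return outfile

    # First archive returns an image with a trusted coordinate system.
    ref = query_to_file("http://archive1.example.edu/getimage",
                        {"object": "M51", "band": "V"}, "reference.fits")

    # Second archive registers its image onto the reference frame.
    registered = query_to_file("http://archive2.example.edu/register",
                               {"reference": ref, "object": "M51"},
                               "registered.fits")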
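
For item 4, "simple for the data provider" might mean no more than a small manifest describing the submitted catalog, with curators filling in the richer metadata afterward.  A guess at what such a manifest could hold; all field names are invented:

    # Sketch for item 4: the minimum a data provider might hand us, with
    # staff expanding the metadata behind the scenes.
    manifest = {
        "catalog_name":  "Example Survey DR1",
        "contact":       "provider@example.edu",
        "data_url":      "ftp://example.edu/pub/survey_dr1.tbl",
        "format":        "tab-separated",
        "columns":       ["id", "ra", "dec", "vmag"],
        "coord_equinox": "J2000",
        "description":   "One line from the provider; curators expand this.",
    }
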
We decided that at the next meeting we should review what we expect to have for the white paper and start moving toward that end.  We should also start to look at the feasibility of all of this and start a list of doable projects, as well as a list of things we would like to see in the future once the NVO comes to be.