August 1, 2001 minutes
We reviewed the steps that led to our initial list and talked about how we
ranked things. It was mentioned again that this should be a two-step
process: ranking from a scientific-usefulness point of view and from a
feasibility point of view, with the two folded together in the end to get
the most functionality up front rather than getting bogged down in one or
two very difficult but high-payoff projects.
Prior to this analysis we talked about the possibility of extending this "portal"
concept beyond the browser and into the data reduction/analysis software.
It would be interesting to have data and services be truly distributed
but controlled from, say, IRAF or PyRAF, so a user doesn't have to have the data
locally (or even the services local) to manipulate the data. This is
very NVO oriented, but it would be an interesting concept to play with now
to see whether it is a good thing to focus on in the future or not.
We then started talking about implementations of the number-one items in the lists.
We decided that at the next meeting we should review what we expect to have for
the white paper and start moving toward that end. We should also start to
look at the feasibility of all of this, and start a list of doable projects
as well as a list of things we would like to see in the future once the NVO
comes to be.
- Should start with a Yahoo-like hierarchical categorization of the data services
out there, so one can browse through a tree-like structure to find services.
This should be done by a human to add that value.
- These things should be searchable by keyword
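A minimal sketch of how such a human-curated category tree with keyword search could be represented (all category and service names below are invented for illustration):

```python
# Sketch of a hand-curated, Yahoo-style category tree of data services,
# searchable by keyword.  All names below are invented for illustration.

class Category:
    def __init__(self, name, services=None, children=None):
        self.name = name
        self.services = services or []   # (service_name, keywords) pairs
        self.children = children or []

    def search(self, keyword):
        """Return every service in this subtree whose keywords match."""
        hits = [name for name, keywords in self.services
                if keyword.lower() in (k.lower() for k in keywords)]
        for child in self.children:
            hits.extend(child.search(keyword))
        return hits

root = Category("Data services", children=[
    Category("Catalogs", services=[
        ("Bright Star Catalog server", ["stars", "photometry"]),
    ]),
    Category("Image archives", services=[
        ("Digitized sky survey", ["images", "sky survey"]),
    ]),
])

print(root.search("photometry"))  # -> ['Bright Star Catalog server']
```

The tree gives the browsable structure; the recursive search gives the keyword entry point into the same hand-built data.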
- Information maps would be nice, but should not be done until after items 1 and 2
- Object-based searches would also be good (this kind of verges on
"all data from one source"), as is done with NED (you can get all known
photometry for a source)
- The portal should be able to provide a site-independent locator for
a catalog, so others can reference a dataset without being machine dependent
(say, a URL of astrocatalog:/YaleBSC2000&id=0433&vmag would point
to the V magnitude of BSC 0433, and would also serve as a pointer to that star's
entry or the entire catalog, depending on how detailed the tag is)
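The locator scheme above is only a suggestion from the meeting, but a tag like that decomposes naturally into catalog, entry, and field, as this parsing sketch shows:

```python
# Sketch of parsing the proposed astrocatalog:/ locator scheme.
# The scheme itself is hypothetical; this only shows that such a tag
# splits cleanly into catalog / entry id / requested field.

def parse_locator(locator):
    """Split e.g. 'astrocatalog:/YaleBSC2000&id=0433&vmag' into parts.

    A shorter tag (no id, or no field) points at a whole catalog
    entry or at the entire catalog, matching the idea that the tag's
    level of detail controls what it references.
    """
    scheme, _, rest = locator.partition(":/")
    if scheme != "astrocatalog":
        raise ValueError("not an astrocatalog locator: %r" % locator)
    parts = rest.split("&")
    result = {"catalog": parts[0], "id": None, "field": None}
    for part in parts[1:]:
        if "=" in part:
            key, _, value = part.partition("=")
            result[key] = value          # e.g. id=0433
        else:
            result["field"] = part       # e.g. vmag
    return result

print(parse_locator("astrocatalog:/YaleBSC2000&id=0433&vmag"))
# -> {'catalog': 'YaleBSC2000', 'id': '0433', 'field': 'vmag'}
```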
- Data should flow from the results of one search to the query of another
- The idea of marking certain fields and filling out others came up:
the user would highlight the data to be used and then hold
on to it in a sort of super clipboard (where the different entries are
separated by column or row), and then, when a page has input fields that can
be filled out, the user could identify which data should be pasted where
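One way the super-clipboard idea could be sketched: captured result fields are held by column name and later mapped onto a form's input fields (the field names here are made up for illustration):

```python
# Sketch of the "super clipboard": highlighted values are held by
# column name and can later be pasted into a form's input fields.
# All field names are invented for illustration.

class SuperClipboard:
    def __init__(self):
        self.entries = {}                 # column name -> list of values

    def copy(self, column, values):
        """Hold on to highlighted values under a column name."""
        self.entries[column] = list(values)

    def paste(self, mapping):
        """Fill a form: mapping says which clipboard column feeds
        which input field, e.g. {'ra_input': 'ra'}."""
        return {field: self.entries[column]
                for field, column in mapping.items()}

clip = SuperClipboard()
clip.copy("ra", ["12:34:56.7"])
clip.copy("dec", ["+12:34:56"])
form = clip.paste({"ra_input": "ra", "dec_input": "dec"})
print(form)  # -> {'ra_input': ['12:34:56.7'], 'dec_input': ['+12:34:56']}
```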
- It was mentioned that we could start by adding smart algorithms to
identify coordinates and work with those.
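A first cut at such a "smart" recognizer might just be a regular expression for sexagesimal coordinates in free text; the pattern below is deliberately simple and would need hardening before real use:

```python
import re

# Sketch of a coordinate recognizer for free text: matches sexagesimal
# RA (h m s) followed by Dec (d m s) and converts both to degrees.
# Deliberately simplistic; many real-world formats would slip past it.

COORD = re.compile(
    r"(\d{1,2})[:\s](\d{1,2})[:\s](\d{1,2}(?:\.\d+)?)"          # RA  h m s
    r"\s+([+-]?\d{1,2})[:\s](\d{1,2})[:\s](\d{1,2}(?:\.\d+)?)"  # Dec d m s
)

def find_coordinates(text):
    """Return (ra_deg, dec_deg) pairs for anything coordinate-like."""
    results = []
    for match in COORD.finditer(text):
        h, m1, s1, d, m2, s2 = match.groups()
        ra = (int(h) + int(m1) / 60.0 + float(s1) / 3600.0) * 15.0
        sign = -1.0 if d.startswith("-") else 1.0
        dec = sign * (abs(int(d)) + int(m2) / 60.0 + float(s2) / 3600.0)
        results.append((ra, dec))
    return results

print(find_coordinates("source at 05 34 31.9 +22 00 52, see notes"))
```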
- The idea of precession was mentioned as well, in that coordinates
may need to be converted before a query can be done
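Since archives may index positions for different equinoxes, a conversion like this could run before the query. A rough sketch using the IAU 1976 precession angles; this is an approximation only (it ignores, e.g., the FK4/FK5 equinox correction and E-terms of aberration), meant to show the kind of step involved:

```python
import math

# Approximate precession of B1950 coordinates to J2000 using the
# IAU 1976 precession angles.  Illustrative only: a rigorous FK4->FK5
# conversion needs the equinox correction and E-terms as well.

def precess_b1950_to_j2000(ra_deg, dec_deg):
    t = -0.5  # Julian centuries from J2000.0 back to ~1950
    arcsec = math.radians(1.0 / 3600.0)
    zeta  = (2306.2181 * t + 0.30188 * t**2 + 0.017998 * t**3) * arcsec
    z     = (2306.2181 * t + 1.09468 * t**2 + 0.018203 * t**3) * arcsec
    theta = (2004.3109 * t - 0.42665 * t**2 - 0.041833 * t**3) * arcsec

    cz, sz = math.cos(zeta), math.sin(zeta)
    ca, sa = math.cos(z), math.sin(z)
    ct, st = math.cos(theta), math.sin(theta)
    # Precession matrix P (J2000 -> mean coordinates of epoch t)
    P = [[cz * ct * ca - sz * sa, -sz * ct * ca - cz * sa, -st * ca],
         [cz * ct * sa + sz * ca, -sz * ct * sa + cz * ca, -st * sa],
         [cz * st,                -sz * st,                 ct]]

    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    v = [math.cos(dec) * math.cos(ra),
         math.cos(dec) * math.sin(ra),
         math.sin(dec)]
    # With t = -0.5, P maps J2000 -> B1950, so B1950 -> J2000 is P^T v.
    w = [sum(P[i][j] * v[i] for i in range(3)) for j in range(3)]
    ra2 = math.degrees(math.atan2(w[1], w[0])) % 360.0
    dec2 = math.degrees(math.asin(w[2]))
    return ra2, dec2

# The B1950 origin lands near RA 0.64 deg, Dec 0.28 deg in J2000.
ra2000, dec2000 = precess_b1950_to_j2000(0.0, 0.0)
print(round(ra2000, 3), round(dec2000, 3))
```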
- The idea of input and/or output being data files was also raised.
For example, one might want to use one image to register the coordinate
system of another, so the query to the first archive might produce an image
which is fed to the second to do the coordinate registration
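The flow above can be sketched as one service's output file feeding the next service's input; both functions here are stand-ins (real archives would be contacted over the network):

```python
# Sketch of chaining services: the image returned by one archive query
# becomes the input of a registration service.  Both functions are
# stand-ins invented for illustration.

def query_archive_a(target):
    """Stand-in archive query returning an image with a known WCS."""
    return {"name": "reference.fits", "target": target, "has_wcs": True}

def register_coordinates(reference_image, other_image):
    """Stand-in registration service: aligns the second image's
    coordinate system to the reference image's."""
    other_image = dict(other_image)
    other_image["has_wcs"] = reference_image["has_wcs"]
    return other_image

reference = query_archive_a("M31")
aligned = register_coordinates(reference,
                               {"name": "new.fits", "has_wcs": False})
print(aligned["has_wcs"])  # -> True
```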
- Instrument-independent data quality estimates
- This may already be being done by SHARE, but it is understood by FASST as
numbers in the science catalog estimating limiting fluxes or the like
- External catalog/data ingest and distribution
- Should be simple for the data provider to use
- Might need some humans behind the scenes creating the metadata
- No initial need for a review process, but one could be implemented if