MIVOT Session
Led by Laurent Michel

Title: DM workflow
Abstract: This session is open for discussion on a roadmap leading to concrete implementation using models. The talk agenda is not finalized yet, but the following topics will be presented:
- Reminder of the WG strategy:
  - Historical reminder
  - Building component models (MCT/PhotDM)
  - Design of an annotation syntax (MIVOT) that is model-agnostic, works with any data arrangement, and is isolated within the VOTable
  - Design of models aggregating components to annotate archival data (MANGO) or science products (Cube)
- Plan for MIVOT implementations:
  - Cookbook
  - Client side: Astropy/PyVO
  - Server side: VOLLT extension
- MANGO draft: MANGO is an open model designed to improve the description of archival data; it provides mechanisms to connect columns together into complex entities and to enhance mapped quantities with semantics, coordinate systems and coverage information.
Time: 70'
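The column-aggregation idea behind the MANGO draft above can be sketched informally: several table columns are connected into one entity, and each mapped quantity carries a semantic tag and, when relevant, a coordinate frame. All class, attribute and column names below are hypothetical illustrations, not MANGO vocabulary.

```python
# Hypothetical sketch of aggregating VOTable columns into a complex entity.
# Names (MappedQuantity, SourceEntity, column names) are illustrative only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MappedQuantity:
    column: str                  # VOTable column holding the value
    ucd: str                     # semantic tag attached to the quantity
    frame: Optional[str] = None  # coordinate system, when relevant

@dataclass
class SourceEntity:
    """A complex entity built by connecting several table columns."""
    identifier: MappedQuantity
    properties: list

# Example: a catalogue source whose position columns share one frame.
src = SourceEntity(
    identifier=MappedQuantity("oid", ucd="meta.id;meta.main"),
    properties=[
        MappedQuantity("ra",   ucd="pos.eq.ra",  frame="ICRS"),
        MappedQuantity("dec",  ucd="pos.eq.dec", frame="ICRS"),
        MappedQuantity("gmag", ucd="phot.mag"),
    ],
)
print([q.column for q in src.properties])  # ['ra', 'dec', 'gmag']
```

In an actual MIVOT annotation this grouping would live in the VOTable itself rather than in client code; the sketch only shows the shape of the aggregation.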
The splinter has no agenda; it is a session open for discussion (see the Wiki page).
Splinter - Tuesday, May 09, 18:00 -- 19:30, Plenary Room
Supervisor: Mark Cresitello-Dittmar
Title: DatasetDM, Provenance, CAOM and Characterization
Abstract: Now that the core models of the Cube family (Meas/Coords) are complete, we return our attention to the next group. The information in this model is a consolidation of content from several of the early core models (ObsCore, Spectrum, Characterization). The descriptions of many of its elements are derived from the Resource Metadata standard. The idea of this model is to centralize this information for other data models to use (Cube, Mango, Spectrum2, TimeSeries).
Materials: wiki
The DM II session contains miscellaneous DM-related talks.
DM II - Thursday, May 11, 11:00--12:30, Plenary Room
Speaker: Steve Hughes
Title: The PDS4 Information Model - An Implementation-Agnostic Model for Interoperability
Abstract: The PDS4 Information Model was developed by the Planetary Data System as a science data archive standard to improve interoperability within the planetary science community. It addresses the key requirements for interoperability, including standardized data formats, common data models, clear data definitions, and well-defined data governance. In addition, the information model was developed independently of all system implementation choices to insulate it against inevitable changes in implementation technology. It also uses multi-level data governance to localize the impact of changes in the science disciplines. These architectural choices have allowed the PDS4 Information Model to remain relevant within the planetary science community while enabling interoperability across diverse science disciplines, tools, and APIs. It has been adopted worldwide by space agencies involved in planetary science.
This talk will briefly describe the architectural and design principles used to maintain the independence of the PDS4 Information Model, and how the artifacts necessary for the maintenance and operations of the PDS are generated.
Time: 17 + 3
Speaker: Mark Cresitello-Dittmar
Title: Model status
Abstract: This presentation will have two components:
- Spectrum 1.2: an update on the enhancement request for the Spectrum data model and its readiness for RFC.
- With the Measurements and Coordinates models in REC, we turn our attention to the remaining models required for representing NDCube data. This portion of the presentation will review those models, with a focus on Dataset and Transform: their current state, open issues and implementation status.
Time: 17 + 3
Speaker: Mathieu Servillat
Title: CTAO Data Model group
Time: 7 + 3

Speaker: Mathieu Servillat
Title: DM for High Energy Astrophysics
Time: 7 + 3
Speaker: Mathieu Servillat
Title: One step Provenance
Abstract: We propose a simplified structure that describes the provenance of an entity as a succession of steps, based on the IVOA Provenance Data Model. With such a structure, the "last-step" provenance of an entity may be stored as a flat list of attributes inside the entity (e.g. as keywords in the header of a FITS file), as a separate file, or in a relational database. By requesting provenance step by step, one can reconstruct the full provenance of an entity.
Time: 12 + 3
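The step-by-step reconstruction described in the One step Provenance abstract can be sketched as follows. This is a minimal illustration under assumed names: the `Entity` class, its `progenitor_id` attribute, and the `STORE` lookup are placeholders, not the IVOA Provenance DM vocabulary.

```python
# Sketch of "one-step" provenance: each entity carries only its last-step
# provenance as flat attributes; the full chain is rebuilt by iteration.
# All names here are illustrative, not taken from the IVOA Provenance DM.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    """An entity whose last-step provenance is a flat list of attributes."""
    entity_id: str
    activity: str                        # the activity that produced it
    progenitor_id: Optional[str] = None  # the input entity of that activity

# A toy store standing in for FITS headers, separate files, or a database.
STORE = {
    "raw":   Entity("raw",   "observation"),
    "calib": Entity("calib", "calibration", progenitor_id="raw"),
    "image": Entity("image", "stacking",    progenitor_id="calib"),
}

def full_provenance(entity_id: str) -> list:
    """Reconstruct the full chain by iterating the one-step request."""
    chain = []
    entity = STORE.get(entity_id)
    while entity is not None:
        chain.append(entity.activity)
        entity = STORE.get(entity.progenitor_id) if entity.progenitor_id else None
    return chain

print(full_provenance("image"))  # ['stacking', 'calibration', 'observation']
```

Each iteration issues one "last-step" lookup, so the same loop works whether the step lives in a FITS header, a separate file, or a database row.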