*Operations IG Session #1: Validator Showcase
Thursday 27 May 2021, 22:00-23:00 UTC
Participant count: 49

Aim: an introduction to validation of VO services, aimed especially at showing service operators how they can validate their services, and encouraging them to do so to improve standards compliance and user experience.

*Schedule
- Mark Taylor - Validation Introduction
- Renaud Savalle - VO-Paris Validators
- Pierre Fernique - MOC validator
- Christophe Arviset - ESA-VO validators
- Mark Taylor - Taplint
- Mark Taylor - Software Identification Note

*Notes

*Mark Taylor - Validation Introduction
Context: service compliance
- many services are operating, and not all operate perfectly, causing problems for consumers
Validation: what, why and how
- it is good to validate your own services
- validator results can be a bit mixed; validation is an iterative process and may require multiple passes
- in case of questions about results, operators are encouraged to get in touch with those operating the validator
- "clean" validator output doesn't guarantee there are no issues: validators aren't always checking every detail

*Renaud Savalle - VO-Paris Validators
The new name for VO-Paris is Paris Astronomical Data Centre (PADC).
URLs for the VO-Paris validator services:
- http://voparis-validator.obspm.fr/
- https://voparis-validation-reports.obspm.fr/
Validations run every night for ~25,000 registered services. The tools at the links above provide a user view of the validator results:
- drill-down features allow selection by service, protocol, etc.; results can be sorted and the results table downloaded
- "error frequency" tallies global statistics on service compliance, with categories of warnings and errors
- errors can also be investigated by site, with a summary by type of error
To see validator results for a service, look for the "errors" button at top right to drill into error details.

Alberto Micol: is TAP 1.1 supported by the VO-Paris service?
A: no; for that refer to taplint, which Mark will discuss later.
MT: from version 3.2, taplint does some TAP 1.1-specific testing.
Tom Donaldson: for the standalone service which allows the user to enter parameters, which of those parameters are used by the automated validator?
A: none; the idea was to use the examples provided by services in the registry. The parameters used are visible in the results table ("params" column to the right), so you can go back and check a service with the parameters the validator used.
Q: where do the parameters for automated testing (e.g. sky positions for positional searches) come from?
RS: hard-coded values; the same values are used for all services.
Theresa Dower: Renaud, if there is time and interest in getting the validator to use example queries from res_detail, we have already done some work with the NAVO validator on that; I'd be happy to help. (You are very correct that it is more difficult than it should be to find that information!)
RS: thanks for the info! Yes, I would appreciate your help with that; it would be quite useful. Can you please point me to what you have done?
Theresa: I have worked with Michael Preciado on it a bit; I will look through our notes and follow up!

*Pierre Fernique - MOC validator
- the MOC standard update document is ready for RFC
- there are 2 separate reference implementations: MOCpy, JavaMoc [?]
- the validator checks the FITS and ASCII serializations, and also JSON (not an IVOA standard)
- the MocLint.jar validator can be used locally, or via URL through the MocServer at http://alasky4.u-strasbg.fr:8080/MocServer/lint

Mark Taylor: is there a repository of earlier validator versions?
A: the latest MOC validator version checks all versions of the standard (current and previous); it detects which version of the standard is concerned and applies the associated check rules.
Pat Dowler: there is an issue with standards that require e.g. datatype='char' arraysize='*' when some fixed arraysize would also make sense; validators flag errors even though clients are probably fine with it. It's fiddly to write standards that get these requirements right.
MT: agreed, this has cropped up in several places, along with similar things like requiring "double" when "float" would be OK. Suggest some centralised resolution for this; new text in DALI??

*Christophe Arviset - ESA-VO validators
http://registry.euro-vo.org
- since the release a week ago, it uses the latest version of taplint, so it now checks TAP 1.0 and 1.1
- it tries to check all aspects of the standards
- the standalone validator gives a detailed compliance report

Tamara Civera: can you specify which query the resource should be tested against?
A: yes; it uses the examples from the registry.
Tom Donaldson: is anyone planning to work on validators for DataLink, i.e. following links given in other query results?
A: no immediate plan.
Mark Taylor: there is some validation code in STILTS, but it checks formats, not semantics, for DataLink.
TD: it's nontrivial; wondering if anyone was working on it.

*Mark Taylor - Taplint
- taplint is used by other validation services, but is also usable as a standalone tool
- point it at a TAP URL; it runs tests and tells you what's not compliant (see the example run below)
- it reports Error and Warning messages for incorrect/suspicious behavior, and Info messages to show progress
- output messages are structured to be grep-friendly, with lots of information/detail to help track problems down
- it tests services in several stages: checking ADQL via sync (GET and POST) and async, making sure the metadata returned matches the metadata expected (from the registry), and testing all columns in all tables
- it does basic ADQL testing only; it does not try to validate whether the ADQL implementation is correct
- download (stable): http://www.starlink.ac.uk/stilts
- a manual is available, but if the output needs explanation, Mark is happy to answer questions
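A minimal sketch of a standalone run, assuming the stable STILTS release linked above (the TAP URL is a placeholder):

    stilts taplint tapurl=http://example.org/tap

This writes the structured Error/Warning/Info report to standard output; see the STILTS manual for stage selection and other parameters.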
*Mark Taylor - Software Identification Note
The SoftID note has been discussed in the last few Interops since 2018.
- LaTeX source: https://github.com/msdemlei/softid
- formatted PDF: http://docs.g-vo.org/softid.pdf
General idea: VO client software should use HTTP headers to identify itself where possible, via the User-Agent header in HTTP requests and the Server header in responses.
- this can be helpful in checking why services may not be behaving as expected, and in spotting commonality in error reports
- it is also useful for reporting usage statistics, understanding what tools are getting used, and understanding user behavior
The note also introduces an IVOA-* comment token as a way for a client to indicate that its usage is diagnostic (validation, benchmarking, etc.) as opposed to normal science usage; this helps service operators filter such requests out of usage statistics. A sketch of the idea is shown below.
Next steps: publish the note to http://www.ivoa.net/documents
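A minimal sketch of client-side identification in Python; the client name "myvalidator/1.0", the token spelling "IVOA-validate" and the URL are illustrative assumptions, so see the note itself for the recommended token forms:

    import urllib.request

    # Identify the client in the User-Agent header, adding an IVOA-*
    # comment token to flag this request as validation traffic rather
    # than normal science usage.  Both values here are made-up examples.
    req = urllib.request.Request(
        "http://example.org/tap/capabilities",  # placeholder service URL
        headers={"User-Agent": "myvalidator/1.0 (IVOA-validate)"},
    )
    with urllib.request.urlopen(req) as resp:
        # Services can identify themselves via the Server response header.
        print(resp.headers.get("Server"))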