Difference: IVOA_Nov3_GWS_etherpad (1 vs. 2)

Revision 2 2021-11-03 - GiulianoTaffoni

 
META TOPICPARENT name="InterOpNov2021GWS"

GWS WG discussion @ November Interop 2021

GWS session 1

Changed:
<
<
Dave Morris: ExecutionPlanner Service Interface
>
>
Dave Morris: ExecutionPlanner Service Interface
 
Changed:
<
<
Today there exists a large variety of types of tasks with different configurations.
>
>
Today there exists a large variety of Science Platforms: they serve different communities, and they have different configurations and different authentication methods. How can we make them interoperable?
 
Changed:
<
<
Moreover, a lot of people are looking at containerization as a way to execute tasks, and at platforms to execute containers.
>
>
There exist notebook-based platforms and platforms that execute containers. Both are based on a single file that defines the task; even if the content of the file is different (task specific), the pattern is very similar.
 
Changed:
<
<
The picture is not as simple
>
>
The reality, however, is not so simple: there are a lot of different services that are not defined by a single file. Reality is messy.
 
Changed:
<
<

>
>
We are working on two notes:
Added:
>
>
  • Execution Planner
  • UWS with container support

The combination of the two allows scheduling containers using different methods (Helm, Kubernetes, docker-compose, etc.) on a specific platform able to satisfy the resource requirements.

Christine Banek: The reality is really complex; there are different specifications designed to do different things that interact with one another (e.g. Kubernetes uses Helm, and Helm uses Docker, and so on). I am worried that trying to unify this as one abstraction layer will be tricky at best, and since these are all moving targets, it might be hard to keep up.
Dave Morris: Yes, this is the problem we are trying to solve (it is hard to describe all the complexity in a 10-minute presentation). The Execution Planner only acts as a discovery service: it answers the question "can I do this?", and hands the client the information it needs to use the actual service.
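As a rough illustration of that "can I do this?" question, the matching step inside a planner might look like the sketch below. The field names (supported_types, max_cores, etc.) are illustrative assumptions, not the actual ExecutionPlanner data model.

```python
# Hypothetical sketch of the discovery step: given a task's requirements
# and a list of execution platforms, return the platforms able to run it.
# All field names here are illustrative assumptions.

def matching_platforms(task, platforms):
    """Return names of platforms whose capabilities cover the task's needs."""
    matches = []
    for platform in platforms:
        if (task["type"] in platform["supported_types"]
                and task["cores"] <= platform["max_cores"]
                and task["memory_gb"] <= platform["max_memory_gb"]):
            matches.append(platform["name"])
    return matches

platforms = [
    {"name": "k8s-cluster", "supported_types": ["docker-container", "helm-chart"],
     "max_cores": 64, "max_memory_gb": 256},
    {"name": "notebook-host", "supported_types": ["jupyter-notebook"],
     "max_cores": 8, "max_memory_gb": 32},
]
task = {"type": "docker-container", "cores": 4, "memory_gb": 8}
print(matching_platforms(task, platforms))  # ['k8s-cluster']
```

In the real service the answer would also carry the endpoint and credentials the client needs to actually submit the task, rather than just a platform name.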

GT: we could dedicate a virtual splinter session in the coming days to brainstorm on this idea

Carlo Zwolf: Should the execution framework have to say how to execute the container? Shouldn't the implementer do it themselves "under the hood" and just run it? Trying to make a unified configuration for all these different specs may make the configuration even more limited, but it is a good point that in the end you just want to execute the task and get the result. If the caller has to worry about the way it is done, it will be less interoperable between data centres if they don't support the same execution frameworks. I would say that the client is not interested in how the service is implemented under the hood, but is more focused on the protocols to interact with the services.
DM - you are right. It is hard to choose names for the interfaces in the presentation that people will recognise.

Stefano Alberto Russo: Rosetta science platform

It is a container-centric, microservices-based science platform that allows users to execute tasks on different platforms, including HPC clusters. It is based on a set of architectural elements: files, computing resources, tasks, containers, AAI.

In practice it is a way to allow users to run containers of their choosing for their tasks. It is similar to the Execution Planner, but much simpler in scope.

GT: the platform architecture identifies a set of elements that correspond to services and standards that IVOA already has, but which should be updated to recent technologies (such as containers). This is in line with what Dave is doing in extending UWS.

Brian Major: GMS RFC.

Brian is presenting the GMS and he is going through the currently open RFC issues.

GMS (Group Membership Service) is an API that answers questions about whether a user is a member of a group, or which groups they are a member of. GMS supports interaction between services; a user calling GMS directly isn't really useful (you can find out your own group information), but if you have a TAP service using GMS for authorization decisions then it does become useful, because it implements access control to data.
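A minimal sketch of that pattern, with a toy membership table standing in for a live GMS endpoint (the real GMS is an HTTP API; the names and shapes here are assumptions for illustration):

```python
# Hypothetical sketch: a data service (e.g. TAP) consulting a group
# membership lookup before releasing proprietary data. lookup_groups
# stands in for the GMS call "which groups is this user a member of?".

def is_authorized(user_groups, required_group):
    """Authorization decision: is the user in the group guarding the data?"""
    return required_group in user_groups

def serve_table(user, required_group, lookup_groups):
    groups = lookup_groups(user)  # in reality, an HTTP request to GMS
    if not is_authorized(groups, required_group):
        raise PermissionError(f"{user} is not a member of {required_group}")
    return f"proprietary rows for {user}"

# Toy membership table in place of a live GMS service.
memberships = {"alice": ["survey-team", "ops"], "bob": ["ops"]}
lookup = lambda user: memberships.get(user, [])

print(serve_table("alice", "survey-team", lookup))  # proprietary rows for alice
# serve_table("bob", "survey-team", lookup) would raise PermissionError
```

The point of the session's remark is visible here: the membership check is useless on its own, but it becomes the access-control decision once a service like TAP sits in front of it.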

There is the RFC page where comments can be submitted, as well as GitHub issues and pull requests.

We discuss the various issues and comments from GitHub and the wiki.

  1. It should be stated that GMS should have high availability, because it is a critical service called by many others (e.g. TAP, VOSpace, etc.) in different contexts.
    Yes, availability is not usually part of a standard, but we can add an implementation "best practice" at the end of the standard document. However, we should not prescribe any particular solution to the availability problem.
  2. GMS is a highly transactional service; you could be doing many registry lookups per second, which could affect the registry availability. Perhaps the way to solve that is caching.
    Caching is tricky with security. Maybe we should say how long the response is valid for (is it already done this way?). See GMS issue 12.
  3. We need to register the IA2 GMS in the registry.
  4. Issue raised by Marcus regarding the use of standardID.
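The caching idea in point 2 can be sketched as a small time-to-live cache in front of the membership lookup; the TTL value and cache shape are illustrative assumptions, not anything the standard specifies.

```python
import time

# Sketch of caching GMS responses for a limited validity period, so that
# high-frequency authorization checks do not hammer the registry or GMS.
# The 60-second TTL is an arbitrary illustrative choice.

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expiry_time, value)

    def get(self, key, fetch):
        now = time.monotonic()
        entry = self.store.get(key)
        if entry and entry[0] > now:
            return entry[1]        # still valid: reuse the cached answer
        value = fetch(key)         # expired or missing: ask GMS again
        self.store[key] = (now + self.ttl, value)
        return value

calls = []
def fetch_groups(user):
    calls.append(user)             # stands in for a real GMS request
    return ["survey-team"]

cache = TTLCache(ttl_seconds=60)
cache.get("alice", fetch_groups)
cache.get("alice", fetch_groups)   # served from cache, no second call
print(len(calls))  # 1
```

This also makes the security concern raised in the discussion concrete: whatever TTL is chosen is exactly how long a revoked membership can still grant access, which is why stating the validity period in the standard matters.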

BM suggests a sort of "implementation recommendations" section at the end of the document, with a few sentences on the different things that we have discussed during the session.



 

Revision 1 2021-11-03 - GiulianoTaffoni

 

 