GWS WG discussion @ November Interop 2021

GWS session 1

Dave Morris: ExecutionPlanner Service Interface

Today there exists a large variety of science platforms: they serve different communities, and they have different configurations and different authentication methods. How can we make them interoperable?

There exist notebook-based platforms and platforms that execute containers. Both are based on a single file that defines the task; even if the "content of the file" is different (task specific), the pattern is very similar.

The reality, however, is not so simple; there are a lot of different services that are not defined by a single file: reality is messy.

The idea is to describe tasks in terms of what kind of service (e.g. Docker) and what amount of resources (CPUs, memory, etc.) a user needs.

We are working on two notes:

  • Execution Planner
  • UWS with container support
The combination of the two allows scheduling containers, using different methods (Helm, Kubernetes, docker-compose, etc.), onto a specific platform able to satisfy the resource requirements.

Christine Banek: The reality is really complex; there are different specifications designed to do different things that interact with one another (e.g. Kubernetes uses Helm, and Helm uses Docker, and so on). I am worried that trying to unify this as one abstraction layer will be tricky at best, and since these are all moving targets, it might be hard to keep up with them.
Dave Morris: Yes, this is the problem we are trying to solve (it is hard to describe all the complexity in a 10-minute presentation). The Execution Planner only acts as a discovery service: it answers the question "can I do this?", and hands the client the information it needs to use the actual service.
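As a rough illustration of the "can I do this?" discovery step described above, here is a minimal sketch. The task fields (executable type, resource amounts) follow the pattern from the talk, but the dictionary keys, platform names, and endpoint URLs are purely illustrative assumptions, not part of any IVOA standard.

```python
# Hypothetical sketch of an Execution Planner discovery query:
# match a task's requirements against what each platform offers,
# and return the information the client needs to contact the
# actual execution service. All names/URLs are made up.

def find_offers(platforms, task):
    """Return the platforms that claim they can run this task."""
    offers = []
    for p in platforms:
        if (task["executable"] in p["supported"]
                and task["cores"] <= p["max_cores"]
                and task["memory_gb"] <= p["max_memory_gb"]):
            # Hand back what the client needs to use the real service.
            offers.append({"platform": p["name"], "endpoint": p["endpoint"]})
    return offers

platforms = [
    {"name": "site-a", "supported": ["docker-container"],
     "max_cores": 8, "max_memory_gb": 32,
     "endpoint": "https://site-a.example/uws"},
    {"name": "site-b", "supported": ["jupyter-notebook"],
     "max_cores": 4, "max_memory_gb": 16,
     "endpoint": "https://site-b.example/uws"},
]

task = {"executable": "docker-container", "cores": 4, "memory_gb": 16}
print(find_offers(platforms, task))
```

The point of the sketch is only the shape of the interaction: the planner never executes anything itself; it answers a question and points the client at an endpoint.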

GT: we could dedicate a virtual splinter in the next days to brainstorm on this idea

Carlo Zwolf: Should the execution framework have to say how to execute the container? Shouldn't the implementer do it themselves "under the hood" and just run it? Trying to make a unified configuration for all these different specs may make the configuration even more limited, but it is a good point that in the end you just want to execute the task and get the result. If the caller has to worry about the way it is done, it will be less interoperable between data centers if they don't support the same execution frameworks. I would say that the client is not interested in how the service is implemented under the hood, but is more focused on the protocols to interact with the services.
DM - you are right. It is hard to choose names for the interfaces in the presentation that people will recognise.

Stefano Alberto Russo: Rosetta science platform

It is a container-centric, microservices-based science platform that allows users to execute tasks on different platforms, including HPC clusters. It is based on a set of architectural elements: files, computing resources, tasks, containers, AAI.

In practice it is a way to allow users to run containers of their choosing for their tasks. It is similar to the Execution Planner, but much simpler in scope.

GT: The platform architecture identifies a set of elements that correspond to services and standards that IVOA already has, but which should be updated to recent technologies (such as containers). This is in line with what Dave is doing on extending UWS.

Brian Major: GMS RFC.

Brian is presenting the GMS and he is going through the currently open RFC issues.

GMS (Group Membership Service) is an API that answers questions about whether a user is a member of a group, or which groups they are a member of. GMS supports interaction between services; a user calling GMS directly isn't really useful (you can find out your own group information), but if you have a TAP service using GMS for authorization decisions then it does become useful, because it implements access control to data.
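As a hedged sketch of how a service like TAP might use GMS for an authorization decision: the exact endpoint layout and status-code semantics below are assumptions for illustration only; the real API is defined by the GMS standard under RFC.

```python
# Illustrative GMS-style membership check. The "search/{group}" path
# and the 200-vs-404 interpretation are ASSUMPTIONS, not the spec.
from urllib.parse import quote

def membership_url(gms_base, group):
    """Build the URL asking: is the calling user a member of `group`?"""
    return f"{gms_base.rstrip('/')}/search/{quote(group, safe='')}"

def interpret(status):
    """Map an assumed HTTP status code to a membership answer."""
    if status == 200:
        return True       # user is a member
    if status == 404:
        return False      # user is not a member
    raise RuntimeError(f"unexpected GMS response: {status}")

print(membership_url("https://gms.example/gms/", "alpha-survey"))
```

This also makes the availability point below concrete: every authorization decision in a TAP or VOSpace request turns into one of these calls, so GMS sits on the hot path of many services.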

There's the RFC page, where comments can be submitted, as well as GitHub issues and pull requests.

We discuss the various issues and comments from github and wiki.

  1. It should be stated that GMS should have high availability because it is a critical service called by many others (e.g. TAP, VOSpace, etc.) in different contexts.
    Yes, availability is not usually part of a standard, but we can add an implementation "best practice" at the end of the standard document. However, we should not recommend any specific solution to the availability problem.
  2. GMS is a highly transactional service; you could be doing many registry lookups per second, which could affect registry availability. Perhaps the way to solve that is caching.
    Caching is tricky with security. Maybe we should say how long the response is valid for (is it already done this way?); see GMS issue 12.
  3. We need to register the IA2 GMS into the registry.
  4. Issue raised by Markus regarding the use of standard ID.
BM suggests a sort of "implementation recommendations" at the end of the document with a few sentences on different things that we have discussed during the session.

GWS Session 2

Nicola Calabria: IA2 VOSpace update.

INAF VOSpace update. It implements the VOSpace standard and adds the integration of tape storage into the user workflow: a specific transfer service is added to manage upload and download of files.
There is a general overview of the components, with some implementation specifics, e.g. the multiple-nodes feature.
Authentication and authorization are based on RAP and GMS. The GMS communication is based on (delegated?) tokens.

Brian Major> What is the experience with having nodes in tar files?
Nic> This feature is under discussion now; the main problem is how many recursion levels to include.

Francois B.> Can we compare VOSpace with the Rucio group implementation?

Sara B.> There is ongoing work involving Sara B. and Dave Morris on Rucio and VOSpace integration/implementation in the framework of the ESCAPE project.

Sara Bertocco: SSO discussion towards a new SSO standard

There is an ongoing discussion on SSO that has lasted a couple of years. The basic idea is that we need to update the current standard in two directions: update it with new methods, and implement a new, better (non-browser) client--server challenge.
We need to improve/implement:
- SecurityMethod: upgrade it and clarify its content.
- Authentication discovery, to allow non-browser clients to easily use auth.
- Authentication endpoints (from capabilities or from an HTTP challenge).

Mark Taylor: SSO for non-browser clients
How can a (non-browser) client find out how to authenticate and where to authenticate?
Mark reports on the work done with CADC on an implementation based on TAP.

In the current proposal, the server communicates auth methods via HTTP challenges and security methods.

Two examples are detailed:

- Bearer token including some open questions (e.g. scope of the token) to discuss in the future;

- cookie mechanism.

A proposed method for the "challenge" is detailed.
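As a rough illustration of what a non-browser client would do with such a challenge, here is a sketch of parsing a WWW-Authenticate header into a scheme and its parameters. The `ivoa_bearer` scheme name and the `standard_id`/`access_url` parameters echo the kind of proposal under discussion, but they are assumptions here, not a finalized standard.

```python
# Sketch: split a WWW-Authenticate challenge into (scheme, params).
# The scheme and parameter names below are ILLUSTRATIVE, not standard.
import re

def parse_challenge(header):
    """Parse 'scheme key1="v1", key2="v2"' into (scheme, dict)."""
    scheme, _, rest = header.partition(" ")
    params = dict(re.findall(r'(\w+)="([^"]*)"', rest))
    return scheme, params

hdr = ('ivoa_bearer standard_id="ivo://ivoa.net/sso#token", '
       'access_url="https://example.org/login"')
print(parse_challenge(hdr))
```

The attraction of this pattern is that the only thing the client must understand up front is the challenge grammar: everything else (where to log in, which standard the endpoint follows) arrives in the parameters.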


CHB: Bootstrap challenge? Make a sync request. Other endpoints could return different things. We can try to do something and then immediately fall back on the challenge.

Mark: One problem with that is if the thing fails, because it could fail for various different reasons: a bad request on the table, requesting something that is not there. Picking apart what went wrong gets problematic. It would be much nicer to have something where the only thing that can go wrong is not an issue.

CHB: If you got a 401 or something like that and it had a WWW-authenticate, then you'd probably be able to figure it out. Consider also DataLink and other similar things pointing to URLs outside of your service.

Pat: TAP is the best example to test different solutions. Every endpoint should provide the methods, and this may go in the capabilities, in particular because you can end up with situations in which you start as anonymous and then access an authenticated part.

CHB: There will be one endpoint per service.


Pat & Mark: probably a good option is to change the VOSI and TAP standards to include the use of VOSI for the bootstrap mechanism.

Pat: VOSI capabilities was stated to be anonymous so that you could go and find out how to authenticate and whether you needed to. If we retire the security methods, the anonymous requirement disappears.

CHB: We just have to figure out the challenge and the sort of place to do the log-in and token return.

Slide 6 of Mark's presentation -

Pat: We need a couple of challenges to tell the client the kind of credential; we are looking for flexible log-in APIs. The disagreement is whether we couple the kind of credential you're going to get and the API of the log-in together into that challenge, or whether we keep them separate by having basically three separate pieces of information.

The challenge itself says what kind of token you're going to get, and then there's the standard ID for the log-in API. We have to decide if we want the flexibility of being able to specify log-in APIs that say what kind of credentials you get back. Example: in our credential delegation service, we have an API where you can retrieve a proxy certificate that will work at CADC and CANFAR. So that would be an IVOA certificate in the challenge. And then we could put a standard ID and access URL that describe the endpoint that will give you a certificate; you would use it if you knew how to use a client certificate, and you would ignore it if you didn't. Does that make sense? Sort of some combo there.

CHB: How can I decode the token in the challenge and find out the scope of the token?

Mark: For cookies it is not a problem, but for bearer tokens the scope is not standard.

CHB: Is the token base64 encoded? Is it just an opaque token? Would we have to pass scopes into the token? Or would we assume that the log-in presented the scopes that you need to access the service?

Do the scopes have to be standardized?

Mark: The relevant RFC has got all the scoping information in the cookie, so you get a response back and then you behave just like a browser, because the cookie has information in it about where you can use it. But you don't have that for a bearer token, so that would require extra standardization.
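A minimal sketch of the cookie-side scoping Mark describes: the Domain/Path attributes already tell a client where a cookie may be sent, which is exactly the information a bare bearer token lacks. This is deliberately simplified (it ignores Secure, host-only rules, public-suffix checks, etc.) and the cookie values are illustrative.

```python
# Simplified RFC 6265-style check: does this cookie's Domain/Path
# scope cover a given request URL? (Many real rules omitted.)
from urllib.parse import urlparse

def cookie_applies(cookie, url):
    """Return True if the cookie's domain and path cover `url`."""
    u = urlparse(url)
    host_ok = (u.hostname == cookie["domain"]
               or u.hostname.endswith("." + cookie["domain"]))
    path_ok = u.path.startswith(cookie["path"])
    return host_ok and path_ok

cookie = {"domain": "example.org", "path": "/tap"}
print(cookie_applies(cookie, "https://data.example.org/tap/sync"))
print(cookie_applies(cookie, "https://other.net/tap/sync"))
```

For bearer tokens no analogous machine-readable scope travels with the credential, which is why the extra standardization Mark mentions would be needed.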

CHB: At Rubin, we have a scope that's like "read TAP" or "execute notebook" and stuff like that. "read, tap" would be in the WWW-authenticate header when I try to hit the tap service, or we can leave scopes out of here completely.

GT: About the discussion on the scope of the token: in practice there is no consensus, because there is no standard way. Are you encoding it or not? How are you describing this scope? Is this something that has to be defined inside the IVOA?

CHB: I guess that's what I'm wondering right now. Do we have to define it or we pass it through in the service? Should it be kind of an opaque thing or a pass through? We should also make sure that we're all talking about scopes in the same way. Are something like claims, or something like URL domain scope?

GT: What I have in mind is that you are presenting the challenge with the token and this token is just valid to access the VOSpace, for example, and not all the other things. Is it right?

CHB: At Rubin, we can do either. So you can make a limited access token that just does a very specific thing. Or you can use kind of your skeleton key token that does everything. And this gets into delegation of tokens and similar things. How to get the scopes out of JWT token or the cookie?

Tom: A token allows you to do certain things and not others. With respect to a scope that specifies which service to access, the various services determine whether the token is valid to access them; it is very complex to manage that programmatically. On GitHub you go and create your token there for various kinds of programmatic access.

CHB: At Rubin there's a site where you log in and you can select your scopes. But how do we combine the need of a command-line client like TOPCAT to retrieve a token with this kind of method of obtaining tokens by opening a browser window?

Brian: They are opaque tokens. We plan on removing the base64: prefix because the colon is not allowed in that header field.

CHB : But then you still don't know what the scopes are because you're not being able to decode the token.

Markus: The scope of the token is on the user side; is it something that I have to take care of at the challenge level?

Pat: Is there a token for each service access? At CADC and CANFAR one token is used to access all the services; it actually spans two internet domains. It would be nice if we could tell clients that they have to get separate cookies, but not necessarily tokens. We must update the CDP. A reduced-scope token can be used for the CDP.

Pat: The credential delegation protocol currently lets you create a proxy certificate at a data center so the data center can do things with your identity. There were ideas about putting things in the proxy certificate to limit what you're allowed to do, but probably no one has done that.

Tom: I see scoping as a way to limit what access you have. Thinking about git, you can mint a token that lets you maybe just read the contents of a repository, versus being able to push to or delete the repository. Right? It's not a matter of how authentication gets done. It's just a question of telling a client that it needs to get the token before it can do something programmatically.

Dave: Is there a way to communicate between server and client that you need a different token to access a service?

Is there a difference between "you need to go get a token" and "you need to go get a better token"? If I already have a token that has the scope for read access, but I try to write something, is there a way to communicate to the client that it needs to go back and get a better token?

Pat: Permission denied is the approach we are using. With permission denied it is never easy to figure out why you are not allowed to do something.

Brian: That's how GitHub works, it doesn't give you any clues to why your token isn't working.

CHB: I was wondering if it could give you a WWW-Authenticate challenge and say you need to get a new token, or you need to get something with different scopes. Maybe the scope could be in the challenge?

Mark: How are we going to progress on this?

At the moment, what I've been describing is essentially emails and incremental bits of implementation between me and CADC, with Markus kind of ducking in on the messages. Is that a suitable way to proceed? One possibility: we keep progressing towards getting something working and then, once we've got a client-server interaction working, publicize that to the group.
The other way: should we be doing this in public and involve other people as we go along?
Reaching a decision among ourselves doesn't necessarily mean that everyone else has to accept it. Is it better to get something working first, so that it's easier for other people to comment?
Or should we have a big, complicated discussion to get there?
Do other people have opinions on that?

GT: Proposes a bi-monthly telecon to synchronize and fix the ideas in the document(s). It could probably be useful to define an implementation note.

CHB: There are a lot of technical points to clarify. Probably it's better to go on with prototyping and testing before writing.

Pat: Brian and I are going to do one more round of changes to our prototype implementation and see what Mark's reaction is. Then we could probably write it up.
For the first version we can restrict ourselves to the concept of single sign-on: where you log in and where you should use the token, rather than what it can do.

CHB: The small group continues the work and then writes e-mails to the GWS group. Keep Sara in the loop.

GT: Update the credential delegation protocol in terms of tokens. Try to organize the update of the documents.


James Tocknell: How would a client-side JS login system (e.g. one where the login-form system is implemented in React or a similar framework) work with www-auth for non-browser clients (or should we require that such systems do not require client-side JS)? Things like 2FA are moving towards requiring a full browser implementation.
Yeah, I agree James; we're using 2FA and require a browser too, and like I said, TOPCAT might have to open browser windows and things like that.
Brant Miszalski: +1 for making authenticated programmatic access as simple as possible. It's very helpful to minimise the number of hoops to jump through.
Markus Demleitner: Well, we shouldn't forget that people might want to do authenticated operations on headless systems...
James Tocknell: Totally; calling out that whatever auth system is used should allow usage on headless systems might be a good idea.
Topic revision: r4 - 2021-11-10 - SaraBertocco