clustering with HazelcastStorageService

Cantor, Scott cantor.2 at osu.edu
Thu Apr 28 17:35:04 EDT 2016


On 4/28/16, 5:12 PM, "Paul B. Henson" <henson at cpp.edu> wrote:



> Right now I am using the HazelcastStorageService for the idp.session.StorageService, the idp.replayCache.StorageService, and the idp.cas.StorageService. It's not clear to me whether each of those usages shares a single instance of the back-end object, or each of them creates a separate instance. I was poking through the code, saw a piece that said "new StorageBackedIdPSession" in the session handling code, and perhaps misinterpreted it.

Yes, if you'd said that I would have been less confused. That's not a storage service instance; it's just an object that wraps the session and is backed by the storage service through a lot of class incest I won't go into. Nothing to do with what I was talking about.

> Based on this, it sounds like there is one instance of the storage engine, shared by everything in the IdP that needs storage and is configured to use that type of storage engine?

The configuration points at a specific storage service bean; there's no inherent limit on the number of beans or their types. You could tell it to use one bean for different uses, or separate beans, and the beans could be of one type or of different types.
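
For example, all three of the properties you mentioned are just pointers to bean IDs in idp.properties, so they can name the same bean or different ones (the Hazelcast bean ID below is made up; use whatever ID you actually gave the bean when you defined it):

    # idp.properties -- all three uses sharing one bean
    idp.session.StorageService = shibboleth.HazelcastStorageService
    idp.replayCache.StorageService = shibboleth.HazelcastStorageService
    idp.cas.StorageService = shibboleth.HazelcastStorageService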

> If the session context had a static name, I would be able to configure it with two nodes of redundancy if I wanted to, and could lose two nodes of the cluster without losing any data. There's also the question of efficiency: as I understand it now, each and every session, given that it uses a random context name, creates a new Hazelcast map, which I believe is much less efficient and involves considerably more overhead than if all sessions shared a single map.

Ok. Then that is in fact true, but not something I knew anything about, given that the storage API doesn't expose that level of detail. At the API level, the contract is just to support a two-part key; what an implementation does with the two parts is implementation specific.
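
To illustrate what I mean (a hedged sketch against the org.opensaml.storage.StorageService interface, with error handling omitted; if memory serves, the expiration argument is an absolute time in milliseconds, nullable for no expiration):

    import java.io.IOException;
    import org.opensaml.storage.StorageRecord;
    import org.opensaml.storage.StorageService;

    public class TwoPartKeyExample {
        // A client hands the API a (context, key) pair; whether an
        // implementation turns the context into a separate Hazelcast
        // map, a key prefix, or something else is invisible here.
        static void demo(final StorageService storage) throws IOException {
            storage.create("urn:example:ctx", "key1", "value1", null);
            final StorageRecord record = storage.read("urn:example:ctx", "key1");
        }
    }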

> That is of course an implementation detail specific to the Hazelcast storage back end, but resolving it might be easier with a minor change in how the session contexts are named.

It's not minor, in that there is no way to use a fixed context, if that's what you're asking for. The session cache storage model is extremely complex, and the use of the two-part key is inherent to it.

> Is the session cache the only one that uses random data?

It may be at present, but that isn't an assumption you can make in general. Of course, if you know exactly which storage clients are in use, then you can make assumptions.

> If so, a slightly kludgy heuristic could be that if the context name is random, it belongs to the session map.

Well, I think in the context of a plugin, it's more that it might assume any of the contexts belong in one map, whatever the context names actually are.

But you can't avoid a two-part key. The system won't perform adequately if the operations based on it aren't efficient. Glomming the two halves together into a single flat key just isn't workable when the lookups can't be done efficiently.
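
To make that concrete (again a hedged sketch, not anything from the actual plugin): the API also defines context-wide operations, and those are what fall apart if the two halves are just concatenated:

    import java.io.IOException;
    import org.opensaml.storage.StorageService;

    public class ContextOpsExample {
        // With one map per context this can amount to dropping the
        // whole map; with a single flat map keyed on context + "!" +
        // key it degenerates into a scan of every record for a prefix.
        static void wipe(final StorageService storage, final String ctx)
                throws IOException {
            storage.deleteContext(ctx);
        }
    }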

> When you have the time, I would appreciate it if you could confirm the context names for the idp.replayCache.StorageService and the idp.cas.StorageService; if those are static, I could at least test configuring the Hazelcast maps for those.

I don't know what the CAS one is, but I think the replay cache at the moment is probably using "org.opensaml.saml.common.binding.security.impl.MessageReplaySecurityHandler".

My general feeling is that it's not practical to document them. That one alone is an implementation class, so it isn't stable. Stable context names simply aren't part of the contract an implementation of the storage API is allowed to assume.

I think what you actually want to do here is create separate instances of the storage service bean and use them for the different use cases. That seems to me the right level of separation if you need to configure these sorts of settings differently.
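
A hedged sketch of what that could look like (the class name below is a placeholder for whatever the plugin actually provides, and the bean IDs are made up):

    <!-- conf/global.xml: two independent storage service instances;
         PLACEHOLDER stands in for the plugin's real class -->
    <bean id="shibboleth.SessionHazelcastStorageService"
          class="PLACEHOLDER.HazelcastStorageService" />
    <bean id="shibboleth.ReplayHazelcastStorageService"
          class="PLACEHOLDER.HazelcastStorageService" />

    # idp.properties: sessions get their own instance, the replay
    # cache and CAS share the other one (just as an example)
    idp.session.StorageService = shibboleth.SessionHazelcastStorageService
    idp.replayCache.StorageService = shibboleth.ReplayHazelcastStorageService
    idp.cas.StorageService = shibboleth.ReplayHazelcastStorageService

Then each instance can be tuned on the Hazelcast side (backup count and so on) without affecting the others.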

-- Scott


