clustering with HazelcastStorageService

Cantor, Scott cantor.2 at osu.edu
Thu Apr 28 20:01:48 EDT 2016


> Ah, when I define the bean in the xml that is actually instantiating an instance
> of the class?

Yes, eventually anyway. A bean is just an object instance. You name it, and then you point other beans at it and Spring injects one into the other automatically. Somewhere there's a ReplayCache bean that's been given a reference to one of the StorageService beans. That's all there is to it.
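
If it helps, here's a toy sketch in plain Java of roughly what those XML definitions boil down to (the class and setter names are stand-ins I'm making up, not the real Shibboleth/OpenSAML classes):

    // Toy model of what a couple of <bean> definitions plus a <property ref="..."/>
    // boil down to once Spring processes them. Stand-in types, not the real classes.
    interface StorageService { /* create/read/update/delete keyed by (context, key) */ }

    class HazelcastBackedStorage implements StorageService { }

    class ReplayCacheBean {
        private StorageService storage;
        void setStorage(final StorageService s) { storage = s; }  // the injected reference
    }

    public class WiringSketch {
        public static void main(final String[] args) {
            final StorageService ss = new HazelcastBackedStorage();  // <bean id="..." class="..."/>
            final ReplayCacheBean cache = new ReplayCacheBean();     // <bean id="..." class="..."/>
            cache.setStorage(ss);                                    // Spring does this step for you
        }
    }

The XML just lets you pick the implementation and the wiring without recompiling anything.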

> Sorry again, spring/xml is not really my forte. I've got a pretty
> good grasp of the underlying theory and java itself but enterprise JavaBeans
> not so much.

This actually isn't EJB, or there would be ten other server processes all running to tell you that the one doing actual work was up.

> I wasn't saying not to have a two-part key. Currently the context is just
> "<randomid>. I meant making it something like "session-<randomid>" which
> would allow the backend storage engine to know what it was dealing with
> and be able to put all of the session context data into a single session map
> rather than creating separate maps for every individual session.

I suppose one could implement some kind of regular expression logic to classify records on that basis; it just wasn't something that ever came up.
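
Just to make the idea concrete, a prefix-based classifier might look something like this toy sketch; purely hypothetical, nothing in the current code does it:

    import java.util.regex.Pattern;

    // Purely hypothetical: classify contexts by prefix so that, e.g., every
    // "session-<randomid>" context lands in one shared backing map instead of
    // each getting a map of its own. Nothing in the current code does this.
    public final class ContextClassifier {
        private static final Pattern SESSION = Pattern.compile("^session-.*");

        static String backingMapFor(final String context) {
            return SESSION.matcher(context).matches() ? "sessions" : context;
        }

        public static void main(final String[] args) {
            System.out.println(backingMapFor("session-a1b2c3"));  // prints "sessions"
            System.out.println(backingMapFor("replay"));          // prints "replay"
        }
    }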

> Earlier you said "The context/key split is just a two part key to allow for co-
> habitation within one storage instance by different storage clients". I would
> consider the replay cache a storage client, the session cache a storage client
> and the CAS ticket cache a storage client. It seems that by having each
> individual session have its own distinct random context, you are treating each
> individual session as a separate storage client?

I didn't actually say a context == a client, I just said the context metaphor allows for multiple clients. There's nothing in the API definition that allows an implementation to do that kind of thing; it would be a new constraint.
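
To show what the two-part key does buy you, here's a self-contained toy (not the literal storage interface) with two clients co-habiting one instance under different contexts:

    import java.util.HashMap;
    import java.util.Map;

    // Stand-in for the storage API (not the literal interface), just to show the
    // (context, key) split letting two clients co-habit one storage instance.
    public final class TwoPartKeyDemo {
        private final Map<String, Map<String, String>> data = new HashMap<>();

        void create(final String context, final String key, final String value) {
            data.computeIfAbsent(context, c -> new HashMap<>()).put(key, value);
        }

        String read(final String context, final String key) {
            final Map<String, String> records = data.get(context);
            return records == null ? null : records.get(key);
        }

        public static void main(final String[] args) {
            final TwoPartKeyDemo storage = new TwoPartKeyDemo();
            // The replay cache client writes under its own context...
            storage.create("replay", "abc123", "seen");
            // ...and a CAS ticket client under another; the same key never collides.
            storage.create("cas-tickets", "abc123", "ticket-data");
            System.out.println(storage.read("replay", "abc123"));       // seen
            System.out.println(storage.read("cas-tickets", "abc123"));  // ticket-data
        }
    }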

> I took a quick look through the CAS code, and I'm not completely sure, but I
> think it is using the name of the service as the context when it is creating a
> session. So it is not completely random, but it would still be creating multiple
> maps with the hazelcast backend depending on how many different CAS
> services you had configured.

SAML has the same approach. It relies on back-pointer records for logout that will end up with contexts like that, IIRC.

It just wasn't a goal of the API to do what you're suggesting, so the clients weren't coded to avoid anything but strict overlap.

> So it would either need to have a context prefix
> added as well so the backend could recognize and collate all of those contexts
> into the same backend map or the possibility of instantiating multiple
> instances of the backend for each use case as you suggested would need to
> be investigated. I'm still just trying to understand everything :).

Multiple instances of storage services are certainly supported; that's why the properties are all separate, to allow for different storage plugins in each use case.
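
As a sketch of that (names invented, not the actual property or class names), one instance per use case looks like:

    // Invented names; just a sketch of "one storage instance per use case", each
    // of which could be a different plugin or carry different settings.
    public final class SeparateInstances {
        interface StorageService { }

        static final class ClusteredStorage implements StorageService {
            ClusteredStorage(final String name) { /* per-instance settings go here */ }
        }

        public static void main(final String[] args) {
            final StorageService replayStorage  = new ClusteredStorage("replay");
            final StorageService sessionStorage = new ClusteredStorage("sessions");
            final StorageService ticketStorage  = new ClusteredStorage("cas-tickets");
        }
    }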

If that's a useful trick to optimize this particular implementation so it can apply some sort of setting globally, that is likely the best way to accommodate it.

-- Scott


