Recommended clustering configuration

Brian Mathis brian.mathis at
Thu May 12 16:04:46 EDT 2016

I've read the docs on the wiki about clustering and storage, but there
are some things I'm not clear on.  They go over different ways one
*might* set it up, but the specifics of what's recommended are not
spelled out.  I'd like to know the specific recommendations of the
developers, and of those with experience, and what is actually working
for them, rather than generalities about the many different options
one could use.

How I generally approach clustering for a web app is to have multiple
nodes running the application behind an HAProxy load balancer that
handles health checking and sticky sessions, so users go back to the
same node.  The database (LDAP in this case) lives elsewhere and has
its own clustering, so it's not a concern here.  This provides a
resilient setup that is still relatively simple.
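For reference, this is roughly the kind of HAProxy setup I mean (just
a sketch -- the IPs, cert path, and backend names are placeholders;
/idp/status is the IdP's built-in status page):

```
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/idp.pem
    default_backend idp_nodes

backend idp_nodes
    balance roundrobin
    # sticky sessions via an inserted cookie
    cookie SERVERID insert indirect nocache
    # health check against the IdP status page
    option httpchk GET /idp/status
    server idp1 10.0.0.11:8443 check cookie idp1 ssl verify none
    server idp2 10.0.0.12:8443 check cookie idp2 ssl verify none
```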

It seems that Shibboleth IdP would work in this kind of setup, but I'm
still unclear on some things.  What I already understand:
- Ensure that configs are the same on all nodes
- Ensure the encryption secret is the same on all nodes
- Client-side data storage (cookie/HTML5 local storage) removes the
need for shared server-side storage of some data
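By "the same encryption secret" I mean keeping the data sealer
material identical on every node -- if I've understood correctly,
something like the following in conf/idp.properties, with the same
sealer.jks/sealer.kver files copied to each node (passwords here are
obviously placeholders):

```
# conf/idp.properties -- must match on all nodes
idp.sealer.storeResource = %{idp.home}/credentials/sealer.jks
idp.sealer.versionResource = %{idp.home}/credentials/sealer.kver
idp.sealer.storePassword = changeit
idp.sealer.keyPassword = changeit
```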

What I still have questions about:
- What kind of storage on the node is required, if any?  The
StorageConfig page mentions databases and memcached, but it's not
clear whether these are actually needed for clustering.
- The clustering wiki page mentions that the IdP defaults are "easily
clusterable", but the nodes keep the replay cache and artifacts in
memory, so what is the impact if a node goes down?  Failed queries?
Cache misses resulting in longer response times for a request?  Is
this type of failure a big issue that could cause problems, or would a
user "just" need to log in again on the new node?

I'm not a Java developer, and I feel that the docs sometimes assume a
detailed level of Java framework knowledge, so any help in
understanding this at an administrator level is appreciated.

Thank you,
~ Brian
