looking for thoughts on IDP deploy architectures

David Gersic dgersic at niu.edu
Mon Feb 18 15:31:17 EST 2013

>>> On 2/15/2013 at 01:12 PM, Steven Carmody <steven_carmody at brown.edu> wrote: 

> high throughput -- traditionally, sites have run clustered IDPs, using 
> various approaches to clustering (terracotta, jboss, stateless IDPs, 
> etc). A newer approach is virtualization, allowing a site to dynamically 
> expand and contract the "size" of the machine running an IDP. Is anyone 
> doing that ?

I'm not. A single IdP here seems to have plenty of performance to handle the loads we're throwing at it, at least so far. Then again, we're relatively new to Shibboleth, so our loads are probably not as high as what some other places may see.

> high availability, failover -- traditionally, sites have run multiple 
> IDPs behind a load balancer. If an IDP encounters problems, or is 
> undergoing maintenance, it is removed from the pool. Are sites using 
> other approaches to this requirement ?

For HA, I'm using Linux with Heartbeat. I have a five node (physical) cluster, with various services running on it, one of which is the IdP. If the node fails or needs to be taken down for maintenance or whatever, the IdP moves to another node. If the IdP fails, it is restarted. If restarting the IdP doesn't help, it is moved to another node and started there. There is loss of in-memory data if the IdP restarts, but that can't be avoided with this design.
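As a rough illustration of that setup, a Heartbeat v1 resource definition looks something like the sketch below. The node name, virtual IP, and init script name here are hypothetical, not our actual config:

```
# /etc/ha.d/haresources -- hypothetical sketch of an IdP as a Heartbeat v1 resource
# Format: preferred-node  resource1  resource2 ...
# If idp-node1 fails (or is taken down for maintenance), Heartbeat brings
# the virtual IP and the Tomcat init script hosting the IdP war up on a
# surviving cluster node.
idp-node1  IPaddr::192.0.2.10/24/eth0  tomcat6
```

Restart-before-migrate behavior comes from the resource script's status/start handling; as noted above, anything held only in IdP memory is lost across a restart or a node move.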

> High availability could also extend to services that the IDP may rely 
> on. Authentication (perhaps kerberos) and attribute stores (perhaps 
> ldap) are obvious examples, and are easy to also run with the "multiple 
> server" approach.

Yep. The resources (LDAP servers) that the IdP needs are also clustered resources, on the same cluster.
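For the same reason, the IdP's LDAP configuration points at the clustered service rather than at any individual server. An illustrative sketch only (hostnames and options are hypothetical, and this assumes the IdP 2.x JAAS UsernamePassword login handler with the vt-ldap login module):

```
// /opt/shibboleth-idp/conf/login.config -- hypothetical sketch
// ldapUrl points at the cluster's virtual IP / service name, so a failed
// LDAP node is handled by the cluster, not by the IdP.
ShibUserPassAuth {
   edu.vt.middleware.ldap.jaas.LdapLoginModule required
      ldapUrl="ldap://ldap-cluster.example.edu"
      baseDn="ou=people,dc=example,dc=edu"
      ssl="true"
      userFilter="uid={0}";
};
```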

Next up, in the planning stages now, is making these clustered resources available across multiple physical sites, so that if we lose a machine room, or a campus, the services will continue to run from an alternate machine room.
