clustering with HazelcastStorageService

Paul B. Henson henson at cpp.edu
Wed Apr 27 17:08:03 EDT 2016


I was curious how many people are using Unicon's HazelcastStorageService. I finally finished configuring my standalone dev instance of IdP 3 and added clustering to it yesterday. The HazelcastStorageService was pretty simple to set up; after compiling it and adding it to the WAR (more on that below), I just updated global.xml with a basic config:

    <hz:hazelcast id="hazelcast">
        <hz:config>
            <hz:properties>
                <hz:property name="hazelcast.logging.type">slf4j</hz:property>
            </hz:properties>
            <hz:network port="5701" port-auto-increment="false">
                <hz:join>
                    <hz:multicast enabled="false"/>
                    <hz:tcp-ip enabled="true">
                        <hz:members>__IDP_NODE1__, __IDP_NODE2__, __IDP_NODE3__</hz:members>
                    </hz:tcp-ip>
                </hz:join>
            </hz:network>
        </hz:config>
    </hz:hazelcast>

    <bean id="shibboleth.HazelcastStorageService"
          class="net.unicon.iam.shibboleth.storage.HazelcastMapBackedStorageService">
        <constructor-arg name="hazelcastInstance" ref="hazelcast" />
    </bean>
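
For anyone else trying this, the standard IdP 3 way to get extra jars into the WAR is the edit-webapp overlay: drop them into edit-webapp/WEB-INF/lib and regenerate idp.war. Something like the following, assuming a default /opt/shibboleth-idp install (the jar names here are placeholders for whatever your build produces):

    # drop the storage service jar and its Hazelcast dependencies into the overlay
    cp hazelcast-storage-service.jar hazelcast-*.jar /opt/shibboleth-idp/edit-webapp/WEB-INF/lib/
    # rebuild idp.war with the overlay contents included
    /opt/shibboleth-idp/bin/build.sh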

I run a three-node cluster over TCP, with security provided by point-to-point IPsec tunnels at the underlying network level. I enabled it for idp.session.StorageService, idp.replayCache.StorageService, and idp.cas.StorageService (I have consent and artifacts disabled); the idp.properties side is sketched below.
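
Concretely, that just means pointing each of those properties at the bean id defined in global.xml:

    # all three services share the Hazelcast-backed storage bean
    idp.session.StorageService = shibboleth.HazelcastStorageService
    idp.replayCache.StorageService = shibboleth.HazelcastStorageService
    idp.cas.StorageService = shibboleth.HazelcastStorageService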

It seems to be working fine; I did some testing and verified that replication is occurring, and everything looks good. I'm not really sure what's going on under the hood, though. I also use Unicon's Hazelcast ticket registry on my current CAS servers, and for that I need to explicitly declare the map:

    <hz:map name="tickets"
            max-idle-seconds="${tgt.timeToKillInSeconds}"
            max-size-policy="USED_HEAP_PERCENTAGE"
            max-size="85"
            eviction-policy="LRU"
            eviction-percentage="10"/>

That lets me configure how much memory the map is going to use, and what it does when it starts to exceed that amount. This Shibboleth version doesn't seem to require explicitly configuring maps. From a quick look at the code, I'm not sure whether it creates one global map that is used by everything, or whether each instance creates its own map. Either way, I'm not sure what settings each map ends up with for how much memory it's allowed to use, how one might change them, or what happens under memory pressure.
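
If the maps do turn out to need tuning, one option I'd expect to work (untested on my part) is Hazelcast's "default" map configuration, which applies to any map that doesn't have an explicit config of its own; it would slot into the <hz:config> element above:

    <hz:map name="default"
            max-size-policy="USED_HEAP_PERCENTAGE"
            max-size="85"
            eviction-policy="LRU"
            eviction-percentage="10"/>

That would at least cap heap usage however the storage service names its maps, although evicting live sessions has its own consequences, so it's more of a safety valve than a tuning knob.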

I'd be interested in any reports of performance in production, particularly under high load or memory pressure.

Thanks much...

--
Paul B. Henson  |  (909) 979-6361  |  http://www.cpp.edu/~henson/
Operating Systems and Network Analyst  |  henson at cpp.edu
California State Polytechnic University  |  Pomona CA 91768



