SP metadata cache keeps growing

Spencer Thomas Spencer.Thomas at ithaka.org
Wed Feb 26 16:36:02 EST 2020


Those files are partially loaded.  (I saw this while debugging a new shibd deployment.)  That's a large metadata file, and it will take shibd a significant amount of time to validate.  If you shut down the process before validation finishes, the partial file remains on disk.
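
If those leftovers (the ones with the random hex suffix) are eating the disk, it should be safe to remove them while shibd is stopped.  A rough sketch -- the cache path is an assumption (it varies by distro and build), and the 4-character suffix pattern matches the listing quoted below:

    # stop shibd so it isn't writing a new copy mid-cleanup
    systemctl stop shibd
    # remove only the suffixed leftovers, keeping aconet-metadata.xml
    # and aconet-metadata.xml.tag
    find /var/cache/shibboleth -name 'aconet-metadata.xml.????' -delete
    systemctl start shibd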


--
Spencer Thomas
Technical Architect / JSTOR and Artstor
ITHAKA <https://www.ithaka.org/> / 301 E. Liberty St, Suite 250, Ann Arbor, MI 48104
Email: Spencer.Thomas at ithaka.org
Voicemail: 734-887-7004


On 2/26/20, 3:20 PM, "users on behalf of Peter Schober" <users-bounces at shibboleth.net on behalf of peter.schober at univie.ac.at> wrote:

    Hey,
    
    Someone from our community asked about an ever-growing cache directory
    filling the local disk with copies of remote metadata. Something like:
    
    -rw-r--r-- 1 shibd shibd 52M 26. Feb 08:57 aconet-metadata.xml
    -rw-r--r-- 1 shibd shibd 52M 25. Feb 11:09 aconet-metadata.xml.0194
    -rw-r--r-- 1 shibd shibd 52M 25. Feb 09:45 aconet-metadata.xml.0d83
    ... more of the same here ...
    -rw-r--r-- 1 shibd shibd 52M 25. Feb 10:43 aconet-metadata.xml.fcf0
    -rw-r--r-- 1 shibd shibd 52M 25. Feb 10:50 aconet-metadata.xml.fe78
    -rw-r--r-- 1 shibd shibd  44 26. Feb 10:22 aconet-metadata.xml.tag
    
    Amounting to 2 GB so far, and growing.
    
    (It might well be that they restarted shibd dozens of times while
    trying to get something else to work, so those timestamps are possibly
    not related to automatic refresh runs.)
    
    They also claimed that restarting shibd took too long to finish
    according to some service manager timeout (I'm thinking of the old
    SHIBD_WAIT thing from the SysV startup scripts, but that doesn't match
    the OS being CentOS 8, which means systemd as the service manager),
    but that stopping and starting shibd in separate steps only took a few
    seconds -- in both cases loading metadata of the size above
    (containing all of eduGAIN) from the same configuration[1].
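    
    (For reference, if it really is systemd killing a slow restart, the
    timeout could be raised with a drop-in.  A sketch -- the unit name
    shibd.service and the 300s value are assumptions on my part, not
    something from their report:
    
        # /etc/systemd/system/shibd.service.d/timeout.conf
        # apply with: systemctl daemon-reload && systemctl restart shibd
        [Service]
        TimeoutStartSec=300
    
    But that would only paper over whatever makes the restart slow.)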
    
    The configured MetadataProvider uses a Signature MetadataFilter, but
    with verifyBackup="false" set, so I can't explain the slow (re-)start
    other than by speculating that it's related to the cache issue, e.g.
    that with so many backup copies lying around the current one isn't
    found and hence gets (re-)created at start, which would explain the
    slow startup times.  But why that would only manifest at restart and
    not at separate stop && start, I have no idea.  (Bear in mind that
    the issue report to me may have been confused, and/or I may have
    added some loss in translation.)
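    
    For anyone reading along, that provider config looks roughly like
    this -- a sketch modeled on the SP 3 default configuration, with the
    URL and certificate name as placeholders rather than the real values:
    
        <MetadataProvider type="XML" validate="true"
                url="https://example.org/aconet-metadata.xml"
                backingFilePath="aconet-metadata.xml"
                maxRefreshDelay="3600">
            <MetadataFilter type="RequireValidUntil" maxValidityInterval="2419200"/>
            <!-- verifyBackup="false" skips re-verifying the on-disk backup at startup -->
            <MetadataFilter type="Signature" certificate="metadata-signing.crt"
                verifyBackup="false"/>
        </MetadataProvider>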
    
    I didn't find anything in the archives and nothing in Jira about
    amassing metadata backup copies, using a search like:
    project in (CPPOST, SSPCPP, CPPXT) AND text ~ "metadata" AND text ~ "cache"
    
    I'll try to get a look at the logs, but thought I'd ask in case this
    sounds familiar to anyone here.
    
    Cheers,
    -peter
    
    [1] The metadata provider used should be a verbatim copy of the second
        example from our docs at
        https://wiki.univie.ac.at/display/federation/Shibboleth+SP+3 
    -- 
    For Consortium Member technical support, see https://wiki.shibboleth.net/confluence/x/coFAAg
    To unsubscribe from this list send an email to users-unsubscribe at shibboleth.net
    


