SP metadata cache keeps growing

Nate Klingenstein ndk at signet.id
Wed Feb 26 16:33:30 EST 2020

FWIW, SAMLtest is at 15000+ providers, and while the disk is getting stuffed, it's with garbage like Tomcat logs:

[root at ip-172-31-28-15 home]# df
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/nvme0n1p2  20959212 10516560  10442652  51% /

[root at ip-172-31-28-15 tomcat]# du logs
1598396 logs

    <MetadataProvider id="SAMLtestFolder" xsi:type="LocalDynamicMetadataProvider" sourceDirectory="/home/mdupload" minCacheDuration="PT25S" maxCacheDuration="PT25S" cleanupTaskInterval="PT25S"/>

It loads metadata to disk once during the initial upload and then uses the LocalDynamic metadata provider, so I would definitely point to one of the other differences if this is occurring.
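As an aside, for anyone checking what a LocalDynamic provider expects on disk: assuming the default source key generator, it resolves each entityID to a file named after the lowercase SHA-1 hex digest of the entityID plus ".xml", which you can reproduce like this (the entityID below is just a placeholder):

```shell
# Sketch, assuming the default source key generator: hash the entityID
# with SHA-1 and append ".xml" to get the expected filename.
entityID='https://sp.example.org/shibboleth'   # placeholder entityID
printf '%s' "$entityID" | sha1sum | awk '{print $1 ".xml"}'
```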

Take care,

The Art of Access ®
Nate Klingenstein | Principal


-----Original message-----
From: Peter Schober
Sent: Wednesday, February 26 2020, 1:20 pm
To: users at shibboleth.net
Subject: SP metadata cache keeps growing

Someone from our community asked about an ever-growing cache directory
filling the local disk with copies of remote metadata. Something like:

-rw-r--r-- 1 shibd shibd 52M 26. Feb 08:57 aconet-metadata.xml
-rw-r--r-- 1 shibd shibd 52M 25. Feb 11:09 aconet-metadata.xml.0194
-rw-r--r-- 1 shibd shibd 52M 25. Feb 09:45 aconet-metadata.xml.0d83
... more of the same here ...
-rw-r--r-- 1 shibd shibd 52M 25. Feb 10:43 aconet-metadata.xml.fcf0
-rw-r--r-- 1 shibd shibd 52M 25. Feb 10:50 aconet-metadata.xml.fe78
-rw-r--r-- 1 shibd shibd  44 26. Feb 10:22 aconet-metadata.xml.tag

Amassing 2GB so far and growing.
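(In case it helps anyone hitting the same thing, here is a cleanup sketch -- assuming the suffixed files really are stale backup copies and safe to delete, which you should verify first. It keeps the live file and its .tag sidecar and removes suffixed copies older than a day:)

```shell
# Cleanup sketch, under the assumption that the randomly-suffixed
# files are stale backup copies. Keeps the live *-metadata.xml and
# its .tag sidecar; prints and removes suffixed copies older than
# one day.
prune_metadata_backups() {
  find "$1" -name '*-metadata.xml.*' ! -name '*.tag' -mtime +1 -print -delete
}

# Usage (the directory is an assumption -- pass your actual cache
# directory):
# prune_metadata_backups /var/cache/shibboleth
```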

(It might well be that they restarted shibd dozens of times while
trying to get something else to work, so those timestamps are possibly
not related to automatic refresh runs.)

They also claimed that restarting shibd took too long to finish
according to some service manager timeout (I'm thinking of the old
SHIBD_WAIT thing from the SysV startup scripts, but that doesn't match
the OS being CentOS 8, which means systemd as the service manager),
but that stopping and starting shibd in separate steps only took a few
seconds -- in both cases loading metadata of the size above
(containing all of eduGAIN) from the same configuration[1].

The configured MetadataProvider uses a Signature MetadataFilter, but
with verifyBackup="false" set, so I can't explain the slow (re-)start
other than by speculating that it's related to the cache issue, e.g.
that with so many backup copies around the current one isn't found and
hence gets (re-)created at start, explaining the slow startup times.
But why that would only manifest on restart and not on a separate
stop && start I have no idea. (Bear in mind that maybe the issue
report to me was confused and/or maybe I caused some additional loss
in translation.)

I didn't find anything in the archives and nothing in Jira about
amassing metadata backup copies, using a search like:
project in (CPPOST, SSPCPP, CPPXT) AND text ~ "metadata" AND text ~ "cache"

I'll try to get a look at the logs but thought I'd ask here in case
this sounds familiar to anyone here.


[1] The metadata provider used should be a verbatim copy of the second
    example from our docs at