SP metadata cache keeps growing
cantor.2 at osu.edu
Wed Feb 26 16:36:16 EST 2020
> Someone from our community asked about an ever-growing cache directory
> filling the local disk with copies of remote metadata. Something like:
It's not the metadata; those are the JSON feeds. The cleanup bug reappeared a while ago and was fixed again in 3.0.
> They also claimed that restarting shibd took too long to finish according to
> some service manager timeout (I'm thinking the old SHIBD_WAIT thing from
> the SysV startup scripts, but that doesn't match the OS being CentOS 8, which
> means systemd as service manager) but that stopping and starting shibd in
> separate steps only took a few seconds -- in both cases loading metadata the
> size of above (containing all of eduGAIN) from the same configuration.
Restarting is just a stop followed by a start, and even parsing metadata of that size, without the signature check, takes a long time. The systemd behavior is the same as before: you still have to raise the timeout with a custom unit file.
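For reference, under systemd the usual way to do that is a drop-in rather than replacing the whole unit. A minimal sketch -- the unit name (shibd.service) and the timeout values are examples, adjust to your installation:

```ini
# /etc/systemd/system/shibd.service.d/timeout.conf
# (values are illustrative; pick something comfortably above your
# observed metadata load time)
[Service]
TimeoutStartSec=300
TimeoutStopSec=300
```

Run `systemctl daemon-reload` afterwards so systemd picks up the drop-in.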
> I didn't find anything in the archives and nothing in Jira about amassing
> metadata backup copies, using a search like:
> project in (CPPOST, SSPCPP, CPPXT) AND text ~ "metadata" AND text ~ "cache"
They aren't metadata copies, they're the JSON feeds (just peek at them). This is done because the alternative is to tunnel all that JSON from shibd out to the caller, and that's probably unworkable at scale. The copies avoid race conditions with callers when the metadata is reloaded and the feed recreated.
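To illustrate the design choice: each reload writes a fresh, uniquely named snapshot file instead of overwriting the published one, so a caller that already opened an older snapshot keeps reading consistent data. A minimal sketch in Python -- the file naming, directory layout, and function names are hypothetical, not the SP's actual implementation:

```python
import glob
import itertools
import json
import os

_serial = itertools.count()

def publish_feed(directory, entities):
    # Each reload writes a new, uniquely named snapshot; callers still
    # reading an older snapshot see it unchanged (no overwrite race).
    path = os.path.join(directory, "feed-%06d.json" % next(_serial))
    with open(path, "w") as f:
        json.dump(entities, f)
    return path

def cleanup_old_feeds(directory, keep=1):
    # The housekeeping step the bug skipped: without it, snapshots
    # accumulate and fill the disk. Keep only the newest `keep` files.
    snapshots = sorted(glob.glob(os.path.join(directory, "feed-*.json")))
    for old in snapshots[:-keep]:
        os.unlink(old)
```

When the cleanup step is skipped (the bug fixed in 3.0), the snapshots simply pile up, which matches the ever-growing directory described above.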