Metadata Typo Causes Integration Headaches

Cantor, Scott cantor.2 at osu.edu
Tue Sep 18 09:36:19 EDT 2018


On 9/18/18, 9:21 AM, "users on behalf of Marvin Addison" <users-bounces at shibboleth.net on behalf of serac at vt.edu> wrote:

> Wholeheartedly agree that schema validation is a best practice for
> catching simple mistakes, but avoiding hand-editing XML is my goal for
> commonplace integrations.

Unfortunately, most of the time what you get from vendors is incorrect metadata (semantically, not syntactically) that has to be modified, and scripting the fixes is difficult because most of their systems don't follow any real conventions (ADFS, Shibboleth, and SSP are the ones I can script). I haven't come up with a better way to handle them, other than moving to LocalDynamic to isolate the new SPs from all the old ones.
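
For what it's worth, the LocalDynamic piece is just a MetadataProvider pointed at a directory of per-entity files; roughly something like this in metadata-providers.xml (the directory path is only an example), with each file named, if I recall the default correctly, using the lowercase hex SHA-1 hash of the entityID plus ".xml":

    <!-- Per-SP metadata files dropped into this directory are resolved on demand. -->
    <MetadataProvider id="LocalDynamic"
                      xsi:type="LocalDynamicMetadataProvider"
                      sourceDirectory="%{idp.home}/metadata/localDynamic"/>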
 
> Not surprised -- I was groping for words. Let me try again: just
> because you have what appears to be the right certificate defined in
> your metadata, there's some complex policy machinery that can
> effectively remove it from consideration: usage constraints, algorithm
> constraints, etc.

Usage, yes, but that's just a question of whether use="X" is present or not. It's generally easy to eyeball and important up front: you want to know whether the keys are there and what they're going to support.
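
In metadata terms that's just the use attribute on each KeyDescriptor, something like this (certificate contents elided):

    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate><!-- base64-encoded certificate --></ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <!-- A KeyDescriptor with no use attribute applies to both signing and encryption. -->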

Algorithms don't come into play much; that's mostly a question of EC vs. RSA, and you just don't see EC much, if at all. It didn't sound like that was your issue here, and in practice it rarely comes up. The one that does come up occasionally is name filtering, but I've seen that more with my SP when it's picking decryption keys.
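
The name filtering I mean keys off the ds:KeyName values that can appear inside KeyInfo; purely for illustration (the name itself is made up):

    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <ds:KeyName>sp-encryption-key-2018</ds:KeyName>  <!-- example name only -->
      <ds:X509Data>
        <ds:X509Certificate><!-- base64-encoded certificate --></ds:X509Certificate>
      </ds:X509Data>
    </ds:KeyInfo>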

I will say my best practice is always to avoid request signing and, where possible, to just omit any signing-only key from the metadata. I want to know if something is signing when I don't want it to (the lack of a key will cause it to fail), and it also means I don't have to care as much about strongly vetting the key. Encryption keys matter, but they don't authenticate the SP to me.
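
Concretely, the SP metadata I prefer ends up carrying only an encryption KeyDescriptor; a minimal sketch, with the entityID, certificate, and endpoint all placeholders:

    <md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                         entityID="https://sp.example.org/shibboleth">
      <md:SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
        <!-- Encryption key only; with no use="signing" key published, an
             unexpected signed request has nothing to validate against and fails. -->
        <md:KeyDescriptor use="encryption">
          <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
            <ds:X509Data>
              <ds:X509Certificate><!-- base64-encoded certificate --></ds:X509Certificate>
            </ds:X509Data>
          </ds:KeyInfo>
        </md:KeyDescriptor>
        <md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
            Location="https://sp.example.org/Shibboleth.sso/SAML2/POST" index="1"/>
      </md:SPSSODescriptor>
    </md:EntityDescriptor>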

-- Scott



