cantor.2 at osu.edu
Wed Apr 10 17:39:49 EDT 2019
On 4/10/19, 5:22 PM, "Paul B. Henson" <henson at cpp.edu> wrote:
> I guess I will look into that if I can't find a better resolution. Although given that the FailoverConnector would be connecting
> to the same systems as the primary connector, it seems it would also be vulnerable to the connection timing out and
> not being retried during a load balancer update, unless it didn't use a pool of connections and just opened a
> new one every time, which wouldn't be a performance issue if it was only ever used when the primary one failed.
Maybe the pooling needs to work like JDBC pooling. Most pools have an explicit "testOnBorrow" option that lets the pool (try to) never hand out a stale connection to the client, and that's the only way I've ever made JDBC client code work (and even then not really, because they seem unable to actually detect stale connections without hanging, but it was the closest I ever got).
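For illustration, the test-on-borrow idea above can be sketched as a minimal generic pool; this is not the JDBC or ldaptive API, just the shape of the behavior (the validator would be something cheap like a "SELECT 1" or an LDAP compare):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;
import java.util.function.Supplier;

// Hypothetical sketch of test-on-borrow pooling. The factory and
// validator are stand-ins for "open a connection" and "probe it".
final class ValidatingPool<C> {
    private final Deque<C> idle = new ArrayDeque<>();
    private final Supplier<C> factory;   // opens a fresh connection
    private final Predicate<C> validate; // cheap liveness probe

    ValidatingPool(Supplier<C> factory, Predicate<C> validate) {
        this.factory = factory;
        this.validate = validate;
    }

    // testOnBorrow: probe each idle connection before handing it out,
    // silently discarding stale ones; fall back to opening a new one.
    synchronized C borrow() {
        while (!idle.isEmpty()) {
            C c = idle.pop();
            if (validate.test(c)) {
                return c;
            }
            // stale: drop it and try the next idle connection
        }
        return factory.get();
    }

    synchronized void release(C c) {
        idle.push(c);
    }
}
```

The catch the paragraph above describes is exactly the `validate` step: if probing a dead connection blocks instead of failing fast, test-on-borrow just moves the hang from the query to the borrow.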
I think the problem with retries is that it's almost impossible, at a layer this high, to have a good answer on whether to retry or not. I think the *effect*, in the vast majority of cases, would be to multiply the overall timeout rather than correct any problems. That's what I meant, I guess. Retrying would handle some rare cases that really should be fixed in the infrastructure, at the cost of allowing very slow queries that would do much more harm to a production IdP. It takes the IdP totally down, in fact: you get LockTimeoutExceptions under load and a total meltdown.
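To make the "multiply the overall timeout" point concrete, here is a hypothetical naive retry wrapper (the names are illustrative, not any real IdP API). With N attempts and a per-attempt timeout T, a dependency that is slow rather than cleanly down can pin the caller for up to N * T:

```java
import java.util.concurrent.Callable;

// Sketch of retry-at-a-high-layer. Each attempt may block for the full
// per-attempt timeout, so failures stack rather than get fixed: the
// wrapper only changes when we give up, not whether we hang.
final class Retry {
    static <T> T withRetries(Callable<T> call, int attempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return call.call(); // may block until the attempt times out
            } catch (Exception e) {
                last = e; // swallow and try again
            }
        }
        throw last; // all attempts exhausted: total wait was up to attempts * T
    }
}
```

Under load, every request thread stuck in that loop holds its locks and pool slots for the multiplied duration, which is how the meltdown described above develops.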
I did not, FWIW, understand that the response timeout had the effect of not raising an "error" but just producing an empty result. That seems quite weird to me. It ends up the same in a lot of situations, including my own, due to surrounding settings, but I wouldn't have expected that to be the case. And it seems like that would make most of the client-side redundancy options only minimally useful, since a timeout of that sort would be a really common symptom of many problems.