The important thing to understand is that the issue is how long it takes ADR to recover when a gateway that had been near a node disappears, and to start using a spreading factor that can reach a gateway that's further away and still operating.
As such, the pattern of outages is much more important than the overall uptime: long outages are fine, reboots and upgrades of a few minutes are fine; it's frequent outages lasting more than a couple of packet intervals that are a problem.
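To put numbers on "very long": with the LoRaWAN 1.0.x regional-parameter defaults, a node sends ADR_ACK_LIMIT = 64 uplinks before it even sets the ADRACKReq bit, and then steps its data rate down only once per further ADR_ACK_DELAY = 32 unanswered uplinks. A rough back-of-envelope, assuming a hypothetical 10-minute uplink interval:

```python
# Back-of-envelope for LoRaWAN 1.0.x ADR backoff, using the regional-
# parameter defaults ADR_ACK_LIMIT = 64 and ADR_ACK_DELAY = 32, and
# ignoring the intermediate TX-power reset step. The 10-minute uplink
# interval is an assumed example, not anything from the post above.
ADR_ACK_LIMIT = 64   # uplinks without any downlink before ADRACKReq is set
ADR_ACK_DELAY = 32   # further unanswered uplinks per data-rate step down
STEPS = 5            # EU868 DR5 (SF7) down to DR0 (SF12)
INTERVAL_MIN = 10    # assumed uplink period, minutes

uplinks = ADR_ACK_LIMIT + STEPS * ADR_ACK_DELAY
print(f"{uplinks} uplinks, about {uplinks * INTERVAL_MIN / 60:.0f} hours, to reach SF12")
# -> 224 uplinks, about 37 hours, to reach SF12
```

That's well over a day of lost packets from one gateway going quiet, which is why the outage pattern matters so much more than the uptime percentage.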
A gateway that ran for 3 months, was offline for 3 months, was back for 3 months, and so on might be a bit annoying to someone trying to figure out whether installing another gateway in the area was worthwhile, but this is the sort of pattern ADR can deal with, because the need to heal connectivity arises only a few times a year.
In contrast, a gateway that was down for an hour or two every day would amount to a denial-of-service attack on the idea of ADR, and on TTN's current naively trusting implementation of it.
There are a number of things that a node owner who is free to develop their own firmware could do to work around a broken ADR situation, such as blind probing at higher spreading factors while retaining the nominal ADR spreading factor to return to (a sketch follows below). But these are non-standard and would require customizing node firmware, possibly firmware that's already field deployed. I'd guess they'd also cause the node to fail LoRaWAN certification: you're not required to use ADR, but to certify in ADR mode you should probably be using the server-configured data rate, or backing off on loss of connectivity, not randomly jumping around. And there's not really an appropriate way to "mark" these uplink packets, either. If you occasionally hop down to SF12 (or SF10 where that's the limit) to probe, what you'd like is for the server to respond only if it hasn't heard anything else from you in a while, and not to burn downlink airtime if it has been getting your packets. There's no bit that means that, only one that demands a reply, so you can't send one out of every 6 packets at a higher SF as a probe that the server could answer with an SF change; you can only send a link-check packet at a rarer interval where demanding a downlink is acceptable.
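As a concrete illustration, a minimal (and deliberately non-compliant) sketch of that probing schedule might look like the following; the names are hypothetical, not any particular stack's API:

```python
# A minimal sketch of the non-standard "blind probing" idea above: keep
# whatever SF ADR last commanded as the working rate, but send roughly
# one uplink in PROBE_EVERY at a higher SF so a more distant gateway has
# a chance to hear the node. All names here are hypothetical.

PROBE_EVERY = 20   # ~5% of uplinks spent probing; tune for airtime budget

def next_uplink_sf(counter: int, adr_sf: int, probe_sf: int = 12) -> int:
    """SF to use for uplink number `counter`, probing occasionally."""
    if counter % PROBE_EVERY == 0:
        return probe_sf   # blind probe; the server may or may not answer
    return adr_sf         # nominal ADR data rate we return to afterwards
```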
If I had to work around an unreliable nearby gateway with TTN's current implementation, what I'd probably do is implement my own ADR-like algorithm at the application level (while not being in ADR mode as far as LoRaWAN is concerned). I'd either manually tune the settings for a given node, sending downlinks to maintain a spreading factor at which it could reach a particular reliable gateway (probably one I maintained), or else automatically calculate a gateway reliability score and use downlinks to command a data rate that reached a gateway of sufficient reliability (a sketch follows below). Including the diversity of uplink frequencies reported by a gateway in its reliability score would also be a way to avoid being misled by single-channel abominations that are present for one frequency and spreading factor combination and missing for all others.
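A minimal sketch of such a reliability score, assuming each decoded uplink carries the usual per-gateway metadata; the field names below are assumptions for illustration, not TTN's actual schema:

```python
from collections import defaultdict

# Application-level gateway reliability score as described above:
# fraction of expected uplinks a gateway reported, penalized when it
# only ever reports one (frequency, SF) combination - the signature
# of a single-channel device. Field names are assumptions.

def gateway_scores(uplinks, expected_uplinks):
    heard = defaultdict(int)      # gateway id -> uplinks it reported
    combos = defaultdict(set)     # gateway id -> {(frequency, sf)} seen
    for up in uplinks:
        for gw in up["gateway_ids"]:
            heard[gw] += 1
            combos[gw].add((up["frequency"], up["sf"]))
    scores = {}
    for gw, n in heard.items():
        coverage = n / expected_uplinks            # fraction of traffic heard
        diversity = min(len(combos[gw]) / 8, 1.0)  # 8 channels nominal in EU868
        # A single-channel abomination sees one combination at best, so its
        # diversity stays near 1/8 and drags its overall score down.
        scores[gw] = coverage * diversity
    return scores
```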
But there are real problems with doing this outside the server. For one, gateway reliability could only be calculated from packets sent by nodes in your application, not from overall traffic flow or, better yet, the gateways' periodic status messages. For another, you remain at the mercy of the server's naive willingness to assume that gateways are equally trustworthy in sending downlink messages on request, rather than routing the occasional packet through a nominally less ideal (in the RSSI sense) gateway to see if that works better… or ceasing to send unacked MAC commands.
The place to implement solutions properly would be in the server. And because such algorithms would be experimental, that really points towards an architecture that makes it realistically possible to run your own server to operate your nodes in the strategically best fashion: something more like a community at the level of Packet Broker, with TTN's server instances merely the default for people who don't care to tune further.
However, it's hard for an individual or small user to access Packet Broker, and there may be practical problems with the size of the "routing table" that would be needed to let everyone do so at will. It may be practically simpler to just point one's own gateways at both TTN and one's own servers, which removes both the problem and the benefit of getting traffic for one's own nodes through others' gateways. Doing so leaves the problem of servers being unable to plan around downlink collisions. But ironically, Packet Broker (or at least TTN's interaction with it) doesn't solve that problem either: there's no looking around for a gateway that's actually free to send the requested downlink, and no shared accounting of downlink airtime availability, just assignment based on the server's private knowledge alone and subsequent success or failure, rather than trying another gateway if the first reports a NACK (a sketch of what that might look like follows below).
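For what it's worth, the missing behaviour isn't complicated. A sketch of score-ranked gateway selection with retry on NACK, where schedule_downlink() is a placeholder standing in for whatever scheduling API a server actually has (not a real Packet Broker or TTN call):

```python
# Rank candidate gateways by a reliability score rather than RSSI alone,
# offer the downlink to each in turn, and move on when one refuses
# (duty-cycle limit, scheduling clash). schedule_downlink() is a
# placeholder, not any real server or Packet Broker API.

def schedule_downlink(gateway_id: str, payload: bytes) -> bool:
    """Placeholder: ask a gateway to transmit; True if it accepts."""
    ...

def send_downlink(candidates, scores, payload):
    ranked = sorted(candidates, key=lambda g: scores.get(g, 0.0), reverse=True)
    for gw in ranked:
        if schedule_downlink(gw, payload):  # retry on NACK instead of giving up
            return gw                       # this gateway accepted the downlink
    return None                             # every candidate refused
```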