ADR mechanism not working properly

Dear community,
I have ADR enabled on my LoRa node. The initial SF factor is set to 12. When the node is near the gateway, the SF decreases to 7, but if I move the node away from the gateway, SF doesn’t increase again and the node gets disconnected after some distance. Kindly explain, is this the way it should be or I am doing something wrong. Shouldn’t SF increase if I move the node away from the gateway? Also, let me know how can we change BW in LMIC library (my BW is fixed at 125kHz). Shouldn’t BW also change in ADR mechanism along with SF and Tx Power?

Taking things pretty well in reverse order:

  1. Please read the documentation and use the forum search to better understand the way things are supposed to work.

  2. Channel bandwidth for LoRa, and LoRa devices generally, is variable and user selected depending on application and technical need. However, LoRaWAN is a standards-based framework, set of protocols and system architecture that uses LoRa as its physical layer (the device-to-GW air interface), and as a standard it limits how much of LoRa’s flexibility is adopted for real-world use. Essentially LoRaWAN, as defined by the LoRa Alliance, has pretty well settled on 125 kHz channel bandwidth as an optimum mix of data rate, range, in-channel noise etc. As such, and to ensure interop, devices and GWs both have to use the same settings. If, through some other ADR implementation, your node were to change bandwidth, the GWs would have no way of knowing and would not adapt accordingly. Therefore, with a fixed GW infrastructure, ADR must work by changing the two factors that can be varied at the node without impacting the GWs’ ability to receive…namely SF and Tx power.

  3. ADR works by the NS evaluating signal quality over a series of received transmissions and then commanding the node to make adjustments, with an appropriate safety overhead so that some reduction in received signal quality does not immediately risk lost packets. The specific safety margin is set on a network-wide basis for all users based on the operator’s policy, and the period over which link quality is judged is similarly determined by an operator-chosen algorithm. IIRC TTN uses approx. the last 20 messages to set values, and updates thereafter.

  4. ADR is recommended for static devices, which can then settle down to an ideal operating configuration under NS control. Devices that move, e.g. asset trackers, are not recommended to use ADR for the very reason you have discovered: once nominal operating conditions are set, moving the node risks taking it out of viable reception range. Movement is possible if it is very slow, allowing the NS to keep monitoring the evolving signal degradation and, if it falls below the network-set safe threshold, to tell the node to increase Tx power or increase SF to compensate.

  5. It would appear you do not yet have a full grasp of LoRaWAN fundamentals, so I’m sure most experienced LMIC users would advise NOT attempting to adjust the fundamental settings of the underlying firmware unless you are aware of the potential impact. More importantly, given the lower tolerance of TTS(CE), aka TTN V3, to non-compliant implementations, you may also end up causing issues for GWs or the NS.

Thank you sir for your reply.

I’m a bit frustrated with this poster, too. However, a couple of technical points.

The 125 kHz bandwidth comes mostly from the limitations of the available gateway silicon, which can handle 8 multi-SF 125 kHz channels but only one fixed-SF 500 kHz channel, and secondarily from EU radio regulations, which seem to make 500 kHz unusable in that region. In, say, US915, all downlinks are at 500 kHz bandwidth, and the single fixed-SF 500 kHz uplink is theoretically available (e.g. under ADR). In Europe you only have the FSK uplink option. Anyway, neither is frequently used - most traffic is at 125 kHz, more because that’s what the hardware and/or regulations support.

In terms of ADR, the network server can crank a node up to a faster rate, but it cannot slow down a node that is out of range with its current settings, because there’s no way to deliver a downlink containing such a command to the node - the network can only command a reduction in the case where some packets are still getting through in both directions. Thus all nodes that implement ADR have to have autonomous fall-off behavior. However, that may operate on the time scale of a substantial portion of a day to several days. It’s indeed true that ADR is not meant for nodes that move. It’s also not meant for networks where gateways vanish…


Part of the ADR algorithm defined in LoRaWAN is configurable via the ADR ACK Limit and Delay settings.

Modifying these settings can speed up the data-rate decay. However, reduced settings also elicit more downlinks from the network. Similar to confirmed-uplink usage, care must be taken to make proper trade-offs between how many missed packets your application can tolerate and the uplinks/downlinks allowed on a community-provided network.