Since ADR is one of the cooler features of LoRaWAN, I am trying to understand how it actually works and how it should be used.
When reading this page: https://www.thethingsnetwork.org/wiki/LoRaWAN/ADR I get quite confused by the sentence “Only static nodes should use ADR”. I would have guessed that ADR was most relevant for nodes that move around, e.g. a node installed on a bus. Could someone clarify why ADR is only relevant for nodes at fixed positions?
If the backend uses ADR to tell a node it can use some specific (minimum) transmission power or (best) spreading factor at some location, how would the node know which settings to use after it has moved to a new location?
Exactly. As I understand ADR, its real value lies in the ability to adjust to the network environment. If a node is at a fixed position, it would only need to be configured with bandwidth and spreading factor during the initial connection phase (in that case the backend doesn’t know the location of the node either; I guess that is inferred from e.g. signal strength during connection). But if the node is physically moving around, it could be 100 meters from a gateway at one time and 10 km at another. I would assume that ADR would be perfect for keeping an optimal relation between node and gateways in that dynamic case. The node would be updated with the new configuration whenever the backend sends new instructions, e.g. piggybacking on a downstream ACK message (or when it’s forced). The backend has an idea of how far away the node is, since upstream messages have an RSSI (and probably other characteristics as well).
There might be some basics I’m not aware of, or that I simply misinterpret.
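For what it’s worth, here is a much-simplified sketch of the decision a network server can make, loosely following the Semtech-recommended ADR algorithm that implementations like TTN’s are based on. The required-SNR table, the 10 dB installation margin and the 3 dB step size are typical values I’m assuming for illustration, not something from this thread:

```python
import math

# Required demodulation SNR per spreading factor (SX127x datasheet values);
# these numbers and the margins below are assumptions for illustration.
REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}
INSTALLATION_MARGIN_DB = 10   # safety margin for fading/interference
STEP_DB = 3                   # one ADR step is worth roughly 3 dB

def adr_decision(sf, tx_power_dbm, recent_snrs):
    """Return (new_sf, new_tx_power) from the best SNR of recent uplinks."""
    margin = max(recent_snrs) - REQUIRED_SNR[sf] - INSTALLATION_MARGIN_DB
    steps = math.floor(margin / STEP_DB)
    # Spend steps on raising the data rate (lowering SF) first ...
    while steps > 0 and sf > 7:
        sf -= 1
        steps -= 1
    # ... then on lowering TX power, down to some minimum.
    while steps > 0 and tx_power_dbm > 2:
        tx_power_dbm -= STEP_DB
        steps -= 1
    return sf, tx_power_dbm

# A node close to a gateway: strong SNR, so ADR sends it straight to SF7.
print(adr_decision(12, 14, [3.0, 5.5, 4.2]))   # -> (7, 14)
```

Real implementations are more careful (they look at the last ~20 frames, can raise power again via LinkADRReq, etc.), but the gist is: strong signal → more margin → faster data rate and/or less power. And note that this only ever acts on uplinks that were actually received.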
Yes, as in: adjust when gateways are added or removed.
(And for newly deployed nodes: to adjust to an existing environment after some initial uplinks.)
No: the network can change.
If there are many nodes, additional gateways can be added, and ADR will then tell the nodes to adjust to that denser network, lowering the chances of collisions by reducing their transmission power so they only reach the gateways that are near. And if a gateway fails, nodes will eventually detect that their transmissions are no longer received, and adjust their parameters to reach gateways that are further away.
…which assumes the uplink was received to start with. But ADR tries to minimise the power and maximise the data rate, which implies a node could easily end up out of reach. When moving out of reach of all gateways, it would take a node quite a long time to realise its uplinks are no longer being received, and to increase its power or lower its data rate.
(Note that moving nodes indeed refers to things on vehicles, not to nodes that are only moved a few times in their lifetime.)
And sure: ADR would also be quite relevant for moving nodes, but how to implement that? (Assuming nodes are not sending continuously, and while still limiting the number of downlinks.)
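To make the node side of that concrete: the spec’s answer is the ADRACKReq back-off. A minimal sketch, assuming the EU868 v1.0.2 defaults of ADR_ACK_LIMIT = 64 and ADR_ACK_DELAY = 32 (the class structure and names are mine, for illustration; see §4.3.1.1 of the specification for the real rules):

```python
ADR_ACK_LIMIT = 64   # EU868 defaults (LoRaWAN v1.0.2 Regional Parameters)
ADR_ACK_DELAY = 32

class AdrBackoff:
    """Node-side ADR fallback per LoRaWAN v1.0.2 §4.3.1.1 (simplified)."""

    def __init__(self, data_rate=5, tx_power_dbm=2, max_power_dbm=14):
        self.dr = data_rate            # DR5 = SF7BW125 in EU868
        self.power = tx_power_dbm
        self.max_power = max_power_dbm
        self.adr_ack_cnt = 0

    def on_downlink(self):
        self.adr_ack_cnt = 0           # any downlink proves connectivity

    def before_uplink(self):
        """Returns True when this uplink must carry the ADRACKReq bit."""
        self.adr_ack_cnt += 1
        if self.adr_ack_cnt < ADR_ACK_LIMIT:
            return False
        overdue = self.adr_ack_cnt - ADR_ACK_LIMIT
        if overdue > 0 and overdue % ADR_ACK_DELAY == 0:
            if self.power < self.max_power:
                self.power = self.max_power   # first: back to default power
            elif self.dr > 0:
                self.dr -= 1                  # then: step down the data rate
        return True                           # keep requesting a downlink
```

So a node that rode out of coverage at SF7 needs 64 uplinks before it even sets the bit, 32 more before it restores full power, and another 32 per data-rate step down. At a few minutes per uplink, that is many hours of lost messages.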
What would be the advised strategy for a moving node then?
ADR is not an option and defaulting to SF12 is not allowed.
In particular while on the move, there will be gateways that are reachable using SF12 but not SF7.
True… if you need 100% coverage, this is the wrong technology. But given that a moving node (even if battery is no issue) should not be transmitting all the time (and ACK messages should be limited), maximizing the chance of a message being delivered would be nice.
I could be wrong on this, but I’m sure I’ve read before about the challenges of, say, SF12 with moving nodes: issues with the object moving closer or further away during the long transmit period.
Okay, not arguing with you, just trying to learn: what reason would a mobile node have to change to anything other than SF12, if that was the starting setting?
When I did the test with TTN and KPN while traveling by train, the Marvin example code had defaulted to ADR (I didn’t know what ADR was back then and had just used the example code). That caused the node to end up using mostly SF12, with some good results: one gateway was 16 km (10 miles) away from the moving train/node.
If the node is close to a gateway it can transmit at SF7, so the same amount of data is transmitted in a shorter period, saving energy.
It also frees the channels and gateway(s) faster.
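The difference is dramatic. Here is the LoRa time-on-air formula from the Semtech SX1272/76 datasheets, applied to a 13-byte frame (the payload size, 8-symbol preamble, 4/5 coding rate and 125 kHz bandwidth are just assumptions for the example):

```python
import math

def time_on_air_ms(sf, payload_bytes, bw_hz=125_000, preamble_syms=8,
                   coding_rate=1, crc=1, implicit_header=0):
    """LoRa time on air per the Semtech SX1272/76 datasheet formula."""
    t_sym_ms = (2 ** sf) / bw_hz * 1000
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0  # low-data-rate optimize
    payload_syms = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * crc
                   - 20 * implicit_header) / (4 * (sf - 2 * de)))
        * (coding_rate + 4), 0)
    return (preamble_syms + 4.25 + payload_syms) * t_sym_ms

for sf in (7, 12):
    print(f"SF{sf}: {time_on_air_ms(sf, 13):.0f} ms")
# SF7:   46 ms
# SF12: 1155 ms  -- roughly 25x the airtime (and energy) for the same frame
```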
My TTN experience is that the TTN core sends downlinks with ADR commands in the Frame Header FOpts field when a device has set the FCtrl.ADRACKReq bit to 1 in an uplink Frame Header.
This is covered in section 4.3.1.1 of the LoRaWAN specification (v1.0.2). Section 2.1.9 of the LoRaWAN Regional Parameters (again v1.0.2) for the EU states that FCtrl.ADRACKReq should only be set once every 64 uplinks.
So… ADR will normally only operate once in every 64 uplinks. If you have a device that defaults to SF11 and is sending updates at the 1% duty-cycle limit, then ADR will only operate approximately once every 100 minutes.
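A quick back-of-the-envelope check of that figure, assuming a roughly 30-byte frame at SF11BW125 with ~0.9 s time on air (which the airtime formula above gives for that size):

```python
TOA_S = 0.9           # assumed time on air of one ~30-byte SF11BW125 uplink
DUTY_CYCLE = 0.01     # EU868 1% sub-band limit
ADR_ACK_LIMIT = 64    # uplinks between ADRACKReq flags

min_interval_s = TOA_S / DUTY_CYCLE                    # 90 s between uplinks
print(ADR_ACK_LIMIT * min_interval_s / 60, "minutes")  # 96.0 minutes
```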
This is simply way too slow for a “mobile” device and will result in a lot of lost uplinks if the FCtrl.ADRACKReq uplink is sent during good coverage, the device is switched to SF7 and reduced TX power, and it then moves into poor coverage.
So… the guidance is correct… ADR is very relevant to static devices but should not be used for mobile devices.