I think you should aspire to compliant stacks, as I don’t think me using LoRaMac-node on an STM32-based device (GNSE anyone, or Dragino LoRaST) makes it certified.
Whereas I believe using a SAMR34 is certified, try as I might to break into the closed-source library, whatever I may add on the outside, and whatever Semtech think about the lack of NVM making it incompatible with LoRaMac-node going forward.
As you can see above, we are aware of the state of the Arduino LMIC, as is Terry, and I’m sure it can only get better.
Once I change one line in the application code the stack isn’t certified anymore. As you are well aware, the costs of compliance testing (LoRa Alliance membership + test house fees for every revision of the code) are steep and the process is time-consuming. Even large vendors like Microchip don’t release new firmware often, with good reason…
So I read your message as we should use a ‘LoRaWAN compatible’, potentially certifiable stack.
Not to be nitpicking or starting a discussion, but IMO the term compatible is too weak (and prone to discussions). Compliant comes closer to what is actually meant here, especially for the maker community (even though true compliance requires that the compliance be verified). It is stronger than compatible and weaker than certified.
Technically, yes. Practically, not really. Your end device as a whole will get certified more easily by similarity. But I’m not asking everyone to only use LoRa Alliance certified end devices on The Things Network, but mostly to use LoRaWAN certified stacks.
To my understanding (assumptions included) this means the following:
1. current-parameters are the parameter values that were configured (either explicitly specified or default values) when the device was added (created) in the Console or via CLI.
2. The above values can be updated via the Console or CLI.
3. Current value means ‘as known by the NS’ (which in practice can differ from the actual value used by the node, e.g. when a non-compliant node does not handle MAC commands and the values defined when the device was added do not match what its firmware currently uses).
4. The value of (some of) the above current-parameters can be overridden by a response (from the device) to a DevStatusReq MAC command, with the actual values on the device.
5. If the DevStatusReq frequency is set to 0 (i.e. no status requests will be sent), the current-parameters will always be the values as described above at 1. and 2.
6. desired-parameters are the parameter values that the NS wants the node to use.
7. Those values are configured/determined independently at the NS level.
8. If the current value of a parameter matches the desired value then no MAC commands for setting/adjusting the parameter will be sent by the NS.
9. If the current and desired values do not match, the NS will send appropriate MAC command(s) to request the node to set/adjust the parameter value(s).
Questions
Are the above interpretations/assumptions correct?
Is there an overview of which parameters can be updated/modified (and made effective) after initial addition (creation) of the device, and which cannot?
When is the first DevStatusReq request sent by the NS, is this always directly after the first uplink? In case of OTAA, is it sent directly after the join response or after the first uplink thereafter?
Is all of the above identical for LoRaWAN versions v1.0.2, v1.0.3, v1.0.4 and v1.1.x? (Or could choosing a different LoRaWAN version when adding the device impact this behavior?)
Have you checked the specification to see the wealth of information that is returned in the DevStatusAns? Spoiler: battery status and link margin.
So there is not a lot that will be adjusted based on the response.
The current-parameters are based on the device settings; the desired-parameters are calculated based on the LoRaWAN and MAC version, the regional parameters and the TTN frequency plan, with some TTN-specific settings (RX1 timing etc.) mixed in.
Just checked, it is 4.0.0.
Do you have any experience with debugging the mentioned “?” problem? I suppose the MCU timing gets off and nothing works. Poor old friend the 328p is getting a bit old.
I make some nodes from time to time but took a break for a few years. What is meant by this? Should I only use ready-made devices with the stack integrated (use UART or something to send configuration and data to the module and not handle the RF stuff in the main MCU)? For example, would RAK3172, CMWX1ZZABZ or Seeed LoRa-E5 count in this category?
Update:
Turned on verbose output as @johan suggested; this is what I get for one uplink. It seems that MACCommand.LinkADRReq is what’s being sent to the node.
I have a theory but I need you guys to point me in the right direction:
The node turns itself off after everything is done (completely off, no power to the MCU), so even if the node received and processed the Adaptive Data Rate MAC command all changes would be lost. Might it be possible to save the ADR settings (is it one or multiple?) to EEPROM? I do the same thing with the counter: I use LMIC.seqnoUp = EEPROMReadCounter(0); to read from EEPROM (my function) and set it in the library.
I would like to fix it in a proper way, but this node should be deployed tomorrow in a potato field and it is a 2 h drive there. Is there any way to disable ADR for the node from the CLI (haven’t used it yet) to reduce the downlink count until I fix it on the device side?
That is not LoRaWAN compliant. Your node should retain settings provided by the network between transmissions. That is part of what Johan is referring to with a certified stack.
You can save the value; however, your next uplink should also include the correct MAC response code so the network knows the setting has been accepted and processed.
To make your LoRaWAN stack compliant you need to save a lot more than just the uplink frame counter. At least all values modified by MAC commands need to be saved and restored, as well as both the uplink and downlink frame counters.
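To make that concrete, here is a minimal Arduino-style sketch of the kind of thing that needs to survive a power cycle. The struct layout and helper names are my own illustration, not an exhaustive list; a real stack tracks more (channel masks, RX window parameters, keys, etc.):

```cpp
#include <EEPROM.h>

// Illustrative only: the minimum kind of state a node must not lose when it
// powers down completely between transmissions. A real stack tracks more.
struct PersistedSession {
  uint32_t fcntUp;       // uplink frame counter
  uint32_t fcntDown;     // downlink frame counter
  uint8_t  dataRate;     // e.g. changed by a LinkADRReq
  int8_t   txPower;      // e.g. changed by a LinkADRReq
  uint8_t  rx2DataRate;  // e.g. changed by an RXParamSetupReq
};

const int SESSION_ADDR = 0;  // EEPROM offset reserved for the session blob

void saveSession(const PersistedSession &s) {
  EEPROM.put(SESSION_ADDR, s);  // put() only rewrites bytes that changed (on AVR)
}

void restoreSession(PersistedSession &s) {
  EEPROM.get(SESSION_ADDR, s);
}
```

Save just before powering down (and after any downlink that carried MAC commands), restore at boot before the first uplink, and keep EEPROM wear in mind: don’t rewrite it when nothing changed.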
Turning ADR off might work around the current issue, but you are probably going to run into a new downlink MAC command with new issues.
Be aware that your current node will probably cause a lot of downlinks, resulting in too many downlinks for the gateway(s) near where it is deployed. That might work for one node but doesn’t scale, so if you deploy multiple nodes of this type, or multiple people deploy nodes with these issues, there will be something like a denial of service on the gateway(s).
Totally noted!
As I mentioned above, this is old hardware from 3 years ago and I am currently working on the new version; this should be a temporary solution as I don’t have any other nodes to deploy for now. I only need two of them for the time being, while I develop new sensor hardware that avoids the total shutdown and uses an easier-to-use LoRa module.
Does anyone have pointers to documentation about saving values modified by MAC commands?
A LoRaWAN compliant stack also means that the stack shall handle both uplinks and downlinks, and properly handle MAC commands.
A library like MCCI LMIC is regarded as a compliant stack, but a library like TinyLoRa is not LoRaWAN compliant because it only supports uplinks and does not support downlinks and MAC commands.
Retaining session state is not necessarily part of the stack and e.g. also depends on whether the microcontroller used (automatically) retains memory state during sleep cycles or not (but a LoRaWAN compliant end device shall support it).
For instance, in MCCI LMIC there is currently no (generic) support for saving and restoring LoRaWAN session state. It can be done, but the library itself does not currently provide an API for that.
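As an illustration of the “it can be done” part (not an official API; these field names are what I see in the MCCI LMIC headers and may change between versions, so verify them against the lmic.h you actually use): you can copy the session-related members of the global LMIC struct into your own structure and write that to EEPROM or flash, e.g.

```cpp
#include <lmic.h>
#include <string.h>

// Rough sketch only: snapshot of the session-related fields of the global
// LMIC struct. Verify the field names against your version of lmic.h.
struct LmicSnapshot {
  u4_t netid;
  devaddr_t devaddr;
  u1_t nwkKey[16];   // network session key
  u1_t artKey[16];   // application session key
  u4_t seqnoUp;      // uplink frame counter
  u4_t seqnoDn;      // downlink frame counter
  u1_t datarate;     // current data rate (modified by LinkADRReq)
  s1_t adrTxPow;     // current TX power (modified by LinkADRReq)
};

void snapshotLmic(LmicSnapshot &s) {
  s.netid   = LMIC.netid;
  s.devaddr = LMIC.devaddr;
  memcpy(s.nwkKey, LMIC.nwkKey, sizeof(s.nwkKey));
  memcpy(s.artKey, LMIC.artKey, sizeof(s.artKey));
  s.seqnoUp  = LMIC.seqnoUp;
  s.seqnoDn  = LMIC.seqnoDn;
  s.datarate = LMIC.datarate;
  s.adrTxPow = LMIC.adrTxPow;
}
```

Restoring goes the other way round after os_init()/LMIC_reset(): call LMIC_setSession(netid, devaddr, nwkKey, artKey), write the counters back into LMIC.seqnoUp/LMIC.seqnoDn, and set the data rate and TX power with LMIC_setDrTxpow().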
Of course another option is to use a compliant LoRaWAN module with a built-in MCU and built-in LoRaWAN stack. These are the types with a serial (instead of SPI) interface and can be used with a separate MCU that runs the application. Those modules handle the LoRaWAN protocol as an independent subsystem.
Implementation of maintaining (storing and restoring where needed) session state will depend on, and be implemented differently for, each LoRaWAN stack (e.g. the different implementations of LoRaMac-node, LMIC etc.), development framework and even MCU family. Unfortunately there is no unified solution that suits every LoRaWAN stack, MCU platform and end device hardware.
It is part of the stack. For controllers that retain RAM in sleep this comes naturally. For controllers that don’t retain RAM, the implementer of the stack for that controller will need to add the required logic.
That’s because a lot of (most?) microcontrollers retain RAM in low power sleep. ESP devices are a notable exception that causes many headaches and requires workarounds.
Those modules require (a small amount of) power to retain state as well, so a total power down combined with a timer module to restart won’t work. Saving to EEPROM may be an option for some modules, for a limited number of times.
(Some) state should be retained even when the end node is powered down (e.g. devnonce), a requirement for LoRaWAN v1.0.4 and v1.1.0.
In fact devnonces should already be stored (and not re-used) for 1.0.3 (and probably earlier), but who is currently storing (a theoretical max number of 64k - 1 ‘random’) devnonces on their end device? On average probably no-one. For v1.0.4 and up the devnonce luckily is an incrementing number and not a random one.
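For illustration only (my own sketch, not tied to any particular stack): with the v1.0.4-style incrementing DevNonce, the persistent part boils down to a 16-bit counter that must survive power cycles, e.g. kept in EEPROM:

```cpp
#include <EEPROM.h>

const int DEVNONCE_ADDR = 64;  // arbitrary EEPROM offset, kept clear of other data

// Return the next DevNonce to use in a join request and persist it,
// so the same value is never reused even after a power cycle.
// Note: a factory-fresh EEPROM reads 0xFFFF, which wraps to 0 on the first increment.
uint16_t nextDevNonce() {
  uint16_t nonce;
  EEPROM.get(DEVNONCE_ADDR, nonce);
  nonce++;                         // v1.0.4: strictly incrementing, never reused
  EEPROM.put(DEVNONCE_ADDR, nonce);
  return nonce;
}
```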
So apparently these modules are currently (still) not a good fit for ultra-low-power battery operated solutions.
The specification (1.0.3) explicitly states that random devnonces should be used and that the network server should keep track of a certain number of used ones to prevent replay attacks. The number is not specified, so it could well be less than the entire range as implemented by TTN, allowing reuse of them at some point in time.
Versions 1.0.4 and 1.1 demand that increasing numbers be used and require permanent storage that survives restarts/reflashing of the firmware.
So for standard versions < 1.0.4, RAM retention is well within spec.
That depends on the requirements. I’ve got nodes with modules that have a battery life of multiple years. Some of them use very little; I was just warning that it isn’t zero and the modules need continuous power.
On another thread, Johan pointed out that to hold a table of all nonces for a device would require 4GB …
Not sure I understand this - if you have a sleep current around the 5 µA mark, that’s a heck of a lot of battery life …
And in some configurations, given the power required to burn the info into NVM (with time most likely being the larger part of the consumption), it may be cheaper to just go to sleep.
This goes a bit off-topic, but I am currently looking for such a solution. I am thinking about a LoRa module (integrated stack) + external MCU. Can you elaborate on which modules you have used and the MCU combination? I know STM32 is the way to go in most cases, but the programming learning curve on those is steep.
RN2483 with PIC18F14K50. Because the LoRaWAN stack runs on the module there is no need for a 32-bit controller, and this 8-bit series is extremely low power.
The controller is set to sleep and wakes using a level-change interrupt when the RN module sends output. As the RN module can be put to sleep and it sends OK when waking, this works well to keep them synchronized.
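Not in the original post, but for anyone who wants to copy that pattern on an Arduino-class board, here is a rough sketch of the sleep/wake handshake. The "sys sleep" command and the "ok" reply are taken from the RN2483 command reference; the host-side low power mode and the RX-line wake-up interrupt are only hinted at in comments, and Serial1 being wired to the module is an assumption:

```cpp
// Put the RN2483 to sleep for <ms> milliseconds.
void moduleSleep(uint32_t ms) {
  Serial1.print("sys sleep ");
  Serial1.println(ms);            // println appends the CR+LF the module expects
  // The host MCU would now enter its own low power mode and arm a
  // level-change interrupt on the module's UART output line to wake up again.
}

// Wait for the module to wake: it prints "ok" on its UART when the sleep
// period has elapsed, which both wakes the host and resynchronises the two.
bool waitForWake() {
  String reply = Serial1.readStringUntil('\n');
  reply.trim();                   // strip the trailing carriage return
  return reply == "ok";
}
```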