We are testing an ABP device with ADR enabled (initially SF9, chosen so that the airtime matches the fair use policy given the payload size of 109 bytes and the transmission interval of 35 min).
The networking chain works well so far: gateways are receiving messages and forwarding some of them to the application, which decrypts and decodes them successfully.
However, as mentioned above, only approx. one third of the payloads are forwarded to the application, even though they are received correctly at the gateway (!). The packet counters are also incremented normally (checked in the GW dashboard).
What could be the reason for this?
The airtime is 656 ms and should therefore fit well within the fair use policy: 40 messages × 656 ms < 30 s airtime per day.
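For reference, the 656 ms figure can be reproduced with the standard LoRa time-on-air formula from the Semtech transceiver datasheets; a minimal sketch (the 13 bytes of LoRaWAN overhead for MHDR, FHDR, FPort and MIC are my assumption, and explicit-header/CRC-on defaults are assumed):

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, preamble_len=8,
                    coding_rate=1, explicit_header=True, crc=True,
                    low_dr_opt=False):
    """Approximate LoRa time-on-air (Semtech SX127x datasheet formula)."""
    t_sym = (2 ** sf) / bw_hz * 1000                     # symbol time in ms
    de = 1 if low_dr_opt else 0
    h = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + (16 if crc else 0) - 20 * h
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (coding_rate + 4), 0)
    t_preamble = (preamble_len + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

# 109-byte application payload + ~13 bytes LoRaWAN frame overhead
print(round(lora_airtime_ms(109 + 13, sf=9), 1))  # ≈ 656.4 ms
```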
During the device creation process, we entered all IDs/keys required for ABP manually. Device addresses start with 26 01 1 (legacy TTN ABP prefix).
What is the SNR of the node like?
What is the RSSI of the node like?
Is it your gateway?
What make and model gateway?
How many packets per minute is the gateway handling?
RSSI: -35 dBm
SNR: 10 dB
Yes, it’s my gateway (Ursalink / Milesight UG85), but public gateways also receive the messages.
There is almost no traffic, let’s say 2 packets/min.
What I’ve done: I implemented the message encryption/decryption part (spec 1.1, AES-CTR with the respective IV, application payloads only) separately in Python and tested:
Message that appears in GW + APP console → Frame Payload okay
Message that appears in GW only, but not the APP console → Frame Payload okay
I have to say the sample size is quite small (just four payloads tested), but it doesn’t look like bit errors.
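For reference, the keystream construction I reimplemented looks roughly like this (a minimal sketch of the LoRaWAN FRMPayload CTR scheme: AES-128 encrypts the A_i counter blocks and the result is XORed with the payload; the block cipher is passed in as a callable, e.g. an AES-ECB encrypt from a crypto library, since the standard library has no AES):

```python
import struct

def frmpayload_ctr_blocks(devaddr, fcnt, direction, n_blocks):
    """Build the 16-byte A_i blocks that LoRaWAN feeds to AES-128 to
    derive the CTR keystream for FRMPayload encryption/decryption.
    Layout: 0x01 | 4x 0x00 | Dir | DevAddr (LE) | FCnt (LE) | 0x00 | i"""
    return [struct.pack('<BIBIIBB', 0x01, 0, direction, devaddr, fcnt, 0x00, i)
            for i in range(1, n_blocks + 1)]

def crypt_frmpayload(data, devaddr, fcnt, direction, aes_encrypt):
    """Encrypt or decrypt (the operation is symmetric): XOR `data` with
    the keystream. `aes_encrypt(block16) -> 16 bytes` under the AppSKey."""
    n = (len(data) + 15) // 16
    stream = b''.join(aes_encrypt(a)
                      for a in frmpayload_ctr_blocks(devaddr, fcnt, direction, n))
    return bytes(c ^ s for c, s in zip(data, stream))
```

The identity function stands in for AES below just to show the round trip; with a real cipher the call shape is the same:

```python
fake_aes = lambda block: block  # placeholder only, NOT a real cipher
ct = crypt_frmpayload(b'sensor payload bytes', 0x26011234, 7, 0, fake_aes)
assert crypt_frmpayload(ct, 0x26011234, 7, 0, fake_aes) == b'sensor payload bytes'
```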
I have cheated here a little bit: with separate devices, I triggered ABP messages manually every 30 s. How is the Fair Use Policy implemented? Is it a sliding average that measures airtime ratios, for example in 5-minute bins, or is it more of a bucket that blocks all traffic once the summed airtime of the past 24 h reaches the 30 s limit?
Rather, try longer time periods and send only one byte from the testing node. You are only interested in whether the uplink is successful, not so much in the actual data.
Is there a specific need for ABP? Can you not use OTAA?
FUP will have no direct impact on your current testing or use case, so ignore that and look for other issues. What might be a problem under these circumstances is duty-cycle enforcement or, in some territories, dwell-time limits, but you should be OK for now unless you are in the US, in which case I would investigate dwell time. With LoRaWAN overhead, a 109-byte payload significantly exceeds the permitted dwell time.
Note that testing with 30 s intervals and 109 bytes at SF9 breaches the duty-cycle limit, so the device may enforce restrictions (you haven’t told us what you are using), hence supposed transmissions may not happen. And if the device isn’t self-regulating and you are using your own firmware, then you should add such a guard to protect yourself legally.
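For a device-side guard, the minimum send interval under a 1 % duty cycle follows directly from the time-on-air; a minimal sketch, assuming the 656 ms airtime figure quoted above:

```python
def min_interval_s(airtime_ms, duty_cycle=0.01):
    """Smallest allowed gap between uplink starts for a given duty cycle."""
    return airtime_ms / 1000 / duty_cycle

# 656 ms on air at 1 % duty cycle -> one uplink every ~65.6 s at most,
# so sending every 30 s breaches the limit.
print(min_interval_s(656))
```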
Yes, no worries: the 1 % DC limit will be met under normal circumstances, as the LoRaWAN stack is implemented through LMIC. Those were just a few manually triggered messages this afternoon; I hope nobody measured them, otherwise we need to move into a shielded private space.
Regular long-term tests are carried out with a 35-minute packet interval (1.7 messages per hour), as mentioned above.
The described error persists (gateways pick up the message, the application does not show it, and the packet counter is not incremented on the device page) and is reproducible (while sticking to DC and FUP regulations).
I can see the transmitted messages in the waterfall diagram of an SDR RX dongle and in two gateway live views in TTN (decrypting raw messages at the MAC layer locally also works), but sadly the messages do not appear in the TTN application.
The device and channel (PHY + MAC) seem basically fine; the problem appears to be message forwarding within the network. Shall I try a private TTS instance?
RSSIs of the DUTs listed here are between -70 and -90 dBm.
Okay - 1b is a good hint.
ABP vs. OTAA: mainly as a prototype with reduced complexity (proper implementation of the waiting times between Join Requests and the low-power modes of the MCU will be the next step).
Just so there is Absolute Clarity (my favourite vodka when debugging), what do you mean when you say:
“They are received in the gateway correctly”: is this the gateway hardware log, and does it explicitly say the CRC is OK?
If they are OK, are they appearing on the TTS CE gateway console page?
“Packet counters are incremented normally, checked in the gateway dashboard”: is this the gateway internal log or the TTS CE gateway console page?
As you are using LMIC, can you do a quick sanity check by provisioning a device for OTAA and leaving it alone, with no hot-wiring of send intervals or anything?
But mostly, can you look at your payload and see whether you can send some items less often and/or send just the delta values? Multiple ports make for simple decoding.
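As an illustration of the delta-plus-ports idea (the fields, scaling, and port numbers here are made up for the example, not taken from the device above):

```python
import struct

def encode(temp_dc, batt_mv, last_temp_dc=None):
    """Hypothetical uplink encoder: FPort 1 carries a full reading
    (int16 temperature in 0.1 degC + uint16 battery in mV, 4 bytes);
    FPort 2 carries only a signed 1-byte temperature delta, so the
    decoder can tell the two frame layouts apart by port alone."""
    if last_temp_dc is not None:
        delta = temp_dc - last_temp_dc
        if -128 <= delta <= 127:
            return 2, struct.pack('<b', delta)       # FPort 2: 1-byte delta
    return 1, struct.pack('<hH', temp_dc, batt_mv)   # FPort 1: full frame

port, payload = encode(231, 3600)                     # first uplink: full frame
assert (port, len(payload)) == (1, 4)
port, payload = encode(233, 3600, last_temp_dc=231)   # next uplink: delta only
assert (port, len(payload)) == (2, 1)
```

With this scheme the common case shrinks from 4 bytes to 1, which cuts the airtime accordingly.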
Yes, sorry for the imprecise wording: when talking about the GW in this post, I am always referring to the Gateway Console in TTS CE. The local gateway hardware log is not accessed.