I use LMIC_setClockError(MAX_CLOCK_ERROR * 10 / 100); but I am not sure where I can see the extended receive interval in the debug messages. Is there anything I can do to further debug the situation?
After some more tests, I was able to send some data. BUT: it requires
LMIC_setClockError(MAX_CLOCK_ERROR * 40 / 100);
and
#define LMIC_ENABLE_arbitrary_clock_error 1
in lmic_project_config.h.
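For reference, the relevant part of that file now looks roughly like this (a sketch; the region and radio defines are assumptions, shown here for an EU868 setup with an SX1276-based board):
#define CFG_eu868 1                          // region, matching the 868 MHz gateway
#define CFG_sx1276_radio 1                   // radio on the board (assumption)
#define LMIC_ENABLE_arbitrary_clock_error 1  // allow clock error values above the library's normal limit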
This is probably not the way it should be. Any idea?
Thanks for your reply. I am now not able to reproduce the OTAA case: the Join Accept is sent but the device blocks. Nevertheless, the uplinks with 40% error work.
From the numbers in the debug output,
rxtime-txend: 62126
rxtime-txend: 124376
these seem to be correct: at 62500 osticks per second, that is almost exactly one and two seconds. Since the uplink is sent in RX1 and it only works with a 40% clock error, the timing might be too early or too late. Could this be a problem of the TTN-GW-868 gateway?
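For reference, this is roughly how those offsets can be converted and logged (a sketch, assuming the default 62500 osticks per second and the LMIC.rxtime and LMIC.txend fields):
// e.g. in onEvent() after EV_TXCOMPLETE, when a downlink was received:
ostime_t delta = LMIC.rxtime - LMIC.txend;
Serial.print(F("rxtime-txend: "));
Serial.print(delta);
Serial.print(F(" osticks = "));
Serial.print(osticks2ms(delta));
Serial.println(F(" ms"));  // 62126 -> ~994 ms (RX1), 124376 -> ~1990 ms (RX2)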
Downlinks, I assume. The uplinks should work regardless of any setting of LMIC_setClockError.
Well, assuming that value is correct, and assuming an extremely accurate clock: yes.
I doubt the gateway has timing issues. Maybe it suffers from the ERROR: Packet REJECTED, unsupported RF power for TX - 24 issue, but that would not be fixed by LMIC_setClockError, so it's unlikely that's your problem either. Or maybe network latency makes the downlink arrive at the gateway too late. Make sure the gateway's router and the application's handler use the same region.
(Aside: “TTN-GW-868” is too vague. TTN Kickstarter “The Things Gateway”, “The Things Indoor Gateway” or “The Things Outdoor Gateway”? And if it’s the outdoor one: ethernet or 3G/4G?)
I’m afraid there is not much else to investigate. You’ll need an extra gateway or an extra device for more testing. Or peek into the gateway’s log. (See the FAQs in the documentation.)
I trigger the downlink through the TTN Console. I see that it is first shown as scheduled in the device's data, and with the next uplink the message is sent.
I tried with two Adafruit Feather boards; it is exactly the same.
Well, we seem to be having very similar problems! Feather M0s, dodgy downlinks, can’t easily test with another gateway until covid goes away…
Is your gateway's backhaul a mobile network, WiFi, or Ethernet?
I'm glad you pointed out the #define that is required for the clock error setting to be applied. I was wondering why setting the clock error did not seem to have any effect. Having said that, I'm still not sure it's making a difference.
I deleted all the versions of LMIC I had lying around (and the Arduino IDE, to be sure), then installed the latest version of the MCCI variant using the Library Manager. This time I did not make any changes except setting the region to AS923 and adding these lines after LMIC_reset:
LMIC_setClockError((MAX_CLOCK_ERROR * 10) / 100);
LMIC_setLinkCheckMode(0); // Is this necessary?
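For completeness, in context setup() now looks roughly like this (a sketch following the library's ttn-otaa example; the pin mapping, do_send and sendjob are the example's names and are assumed to be defined as usual):
void setup() {
    Serial.begin(115200);
    os_init();     // LMIC init, as in the library examples
    LMIC_reset();  // reset MAC state; session and pending data are discarded
    // allow for some clock inaccuracy when opening the receive windows
    LMIC_setClockError((MAX_CLOCK_ERROR * 10) / 100);
    LMIC_setLinkCheckMode(0);  // still unsure whether this is needed
    // start the send job; this also kicks off the OTAA join
    do_send(&sendjob);
}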
Joining and downlink performance improved noticeably after moving to the latest version of MCCI LMIC. Joining could take 40 to 60 minutes before; now it usually takes less than 10.
I'm asking for downlinks to be acknowledged so I don't have to keep resending them myself, and I'm usually getting them within 1 to 4 uplinks, occasionally 5 or 6.
I’m checking for changes in LMIC.radio.rxlate_count, and have not logged one yet.
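Concretely, I log it like this (a sketch; the rxlate_count field name is taken from the LMIC version I'm using and may not exist in older releases):
// inside onEvent(ev_t ev):
if (ev == EV_TXCOMPLETE) {
    Serial.print(F("rxlate_count: "));
    Serial.println(LMIC.radio.rxlate_count);
}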
My gateway is backed by a mobile network, and I'm hoping that moving to a wired or WiFi gateway will sort it out.
The point about the gateway and TTN application having the same region is interesting too, but not something I can check because I have no access to the gateway config.
For other readers: note that for many regions, LMIC's join procedure uses a scheme that decreases its data rate after a join has failed on all channels for a given data rate. So joins taking a long time might also indicate that the Join Request uplink was not received at all, in which case no Join Accept downlink would be sent either.
So, in any case, make sure you see the orange Activation icon in the device’s Data page in TTN Console (or the timestamp after the device’s Status label in TTN Console), to confirm that the Join Request uplink was received.
I see a bunch of messages with I think little lightning flash icons. I have been assuming those are join attempts from the node.
I’m wondering if I should just go to ABP instead. I can’t see why pre-shared keys would be a problem. If you’re only supposed to join as little as possible, what is the difference between saving and using the OTAA negotiated keys for as long as possible, even over reboots, and ABP? Sorry if I’m messing up some concepts, you can tell I’m new to LoRaWAN.
Given your other forum posts, you want the downlinks to work properly anyway. Also, one may want ADR, which needs downlinks too. So, no, using ABP is just ignoring the real problem. (And it would introduce different problems, like if the device does not properly persist state between restarts.)
For the sake of completeness:
…and: accepted. TTN might reject it if the configuration is wrong (in which case it would never accept it), or if its random LoRaWAN 1.0.x DevNonce was used before (which gets more likely after a lot of testing).
In the TTN Console Data page, those are OTAA Activations, so: the Join Request (which is only shown in the gateway’s Traffic page) was accepted by TTN, and a new DevAddr and secrets have been assigned. Hover your mouse over the icon to see “activation”.