Receive Delay LoRaWAN

Hi,

I am currently writing my own node firmware (RFM95) and have major problems with the downlink delays. I have already set the receive timeout to one second (the maximum adjustable value at SF7), but I still only catch the downlink windows every few attempts and have to shift the actually “agreed” values considerably; for example, I have to open the window for the join accept a whole second earlier than specified.

I realize that I have to subtract the delays caused by my own code and by the reaction time of the transceiver, but these should not vary between retries. Then there is the time on air and the latency of the gateway/server connection.
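To make the arithmetic concrete, this is roughly what I am doing at the moment (a minimal sketch; the function name and the margin value are my own, not from any spec):

```c
#include <stdint.h>

/* Sketch of my RX1 scheduling. t_txdone_ms is captured when the
 * radio raises TxDone on DIO0. Names and margin are my own guesses. */
#define RX1_DELAY_MS  1000U  /* RECEIVE_DELAY1, LoRaWAN default     */
#define MARGIN_MS       50U  /* guess for MCU + transceiver latency */

uint32_t rx1_window_start(uint32_t t_txdone_ms)
{
    /* Open the receiver slightly early so the preamble is not missed. */
    return t_txdone_ms + RX1_DELAY_MS - MARGIN_MS;
}
```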

To what extent are these compensated for on the server side? I’m not quite sure what effective delay to expect at the end of the day, since all these factors come into play. Does anyone have any advice?

Creating your own firmware is quite an undertaking!

The best way for you to resolve this would be to get a known good device with plenty of logging and a gateway with plenty of logging, and see how accurate you need to be (or not). Many devices will start the receiver a few ms beforehand to pick up the preamble.

What is the MCU you are using?

The received data is timestamped upon reception by the hardware (SX1301/2/8). That exact timestamp is used as the base for the start of the downlink transmission. Any delays introduced by the gateway OS, the network link or the backend servers do not change that value and are not relevant for the timing of the reception window.
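With the Semtech UDP packet forwarder, for instance, the uplink metadata carries the concentrator’s internal microsecond counter (tmst), and the downlink the server sends back simply asks the gateway to transmit at that counter value plus the RX delay. A rough sketch of the idea:

```c
#include <stdint.h>

/* Server-side timing, Semtech UDP forwarder style: uplink_tmst is
 * the SX130x microsecond counter captured at reception. */
#define RECEIVE_DELAY1_US  1000000UL  /* 1 s, LoRaWAN RECEIVE_DELAY1 */

uint32_t downlink_tmst(uint32_t uplink_tmst)
{
    /* The 32-bit counter wraps roughly every 72 minutes; unsigned
     * arithmetic handles the wrap correctly. */
    return uplink_tmst + RECEIVE_DELAY1_US;
}
```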

I use an ATmega328P @ 16 MHz (custom firmware instead of LMIC, for full documentation and configurability for special applications, as part of a thesis).

So are the timestamps shown to me in the console only those of the gateway’s RX/TX? Because according to them, the RX1 delay would be only about 300 ms instead of one second for some transmissions, for example.

That would be the time the gateway was told about the transmission.

The gateway is then responsible for figuring out precisely when to transmit - as it knows precisely when it heard the uplink. This eliminates any variability in network latency.

Trying to figure this out by looking at the console is like trying to ice a cake whilst blindfolded; you need to see what is going on on the device.

If you have an ATmega328 you can squeeze in MCCI LMIC if you disable ping and beacons. Alternatively, use Matthijs Kooijman’s LMiC, which he has just deprecated but which will give you more than enough functionality to see the timings. You can insert some GPIO waggling so your oscilloscope can show when the code did something and what the results were on the SPI bus.
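Something along these lines (pin choice and the radio function name are only illustrative):

```c
#include <avr/io.h>

/* Waggle PB1 around interesting moments so the scope can correlate
 * firmware events with the SPI traffic. Pin choice is arbitrary. */
static inline void debug_pin_init(void) { DDRB  |= _BV(DDB1);    }
static inline void debug_pin_high(void) { PORTB |= _BV(PORTB1);  }
static inline void debug_pin_low(void)  { PORTB &= ~_BV(PORTB1); }

/* Usage, e.g. around opening the RX window:
 *   debug_pin_high();        // edge marks "window opened"
 *   rfm95_start_rx_single(); // illustrative name for your SPI call
 *   debug_pin_low();
 */
```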

Which bit of the LoRaWAN specifications have you read so far?

Nope. The timestamps regarding reception and transmission are numbers buried in the contents of the packets. And if you are looking at the join (accept), the downlink will come 5 or 6 seconds after the uplink.
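For reference, the Class A defaults from the regional parameters (RECEIVE_DELAY1 can be changed by the network via MAC command, the join delays cannot):

```c
/* LoRaWAN Class A default timings (regional parameters defaults). */
#define RECEIVE_DELAY1_S      1  /* RX1 after a normal uplink        */
#define RECEIVE_DELAY2_S      2  /* RX2, always RECEIVE_DELAY1 + 1 s */
#define JOIN_ACCEPT_DELAY1_S  5  /* RX1 after a join request         */
#define JOIN_ACCEPT_DELAY2_S  6  /* RX2 after a join request         */
```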

Although it seems that you are having greater success, novag also put some effort into this.

In my setup, SlimLoRa sometimes works at SF10. I hope to re-examine the code in the coming days.

In the meantime, MCCI LMIC seems fine even for a 32 KB MCU if you have a simple application.

First of all, thank you for all the answers. If the server and gateway ensure that the downlink follows the uplink after exactly the agreed delay, the problem must come from delays in my own code or hardware. What still puzzles me is that the behavior differs between attempts with the same frame, data rate and channel. I will have a look at the other LMIC implementations, but since the problem lies in the runtime of my own code, I will probably have to resort to GPIOs and measuring the timing with an oscilloscope.

Since it was asked: from the datasheet of the RFM95, the LoRaWAN specification and the regional parameters, I think I have read everything that concerns LoRa/LoRaWAN Class A. But there I only found the default delays; nothing about additional factors to consider, only that the RX window must be long enough for the preamble to be detected.
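For completeness, the window-length arithmetic I will use to sanity-check the scope measurements (my own helper; the five-symbol figure is an assumption about how many preamble symbols the SX1276 needs to lock):

```c
#include <stdint.h>

/* LoRa symbol duration in microseconds: T_sym = 2^SF / BW.
 * At SF7 / 125 kHz this gives 1024 us per symbol. */
static uint32_t symbol_time_us(uint8_t sf, uint32_t bw_hz)
{
    return ((uint32_t)1 << sf) * 1000000UL / bw_hz;
}

/* Minimum RX window: enough symbols to detect the preamble, plus
 * the remaining timing uncertainty on both sides. */
static uint32_t min_rx_window_us(uint8_t sf, uint32_t bw_hz,
                                 uint32_t timing_error_us)
{
    const uint32_t preamble_symbols = 5; /* assumed detection need */
    return preamble_symbols * symbol_time_us(sf, bw_hz)
           + 2 * timing_error_us;
}
```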