We have our Kerlink installed on our rooftop. Sending messages from our devices to the server works perfectly. However, when sending downlinks, it seems fewer than 10% of the downlink messages reach the node.
We also have an IMST iC880A + RasPi gateway, which seems to handle the downlink perfectly.
The difference is that the RasPi is connected via Wi-Fi, while the Kerlink is connected via 4G. Could this be a latency issue, where the connection time required is longer than the RX windows of 1 and 2 seconds?
Does anyone have experience with working downlinks on 3G-connected gateways?
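If it is latency, it should be easy to measure. Below is a minimal Python sketch, assuming you can reach your backend host over TCP (hostname and port are placeholders, not a real endpoint): it times a TCP handshake as a rough stand-in for one network round trip and compares it against the RX1/RX2 delays.

```python
import socket
import time

# Placeholders -- substitute your own backend host; port 80 is used only
# because a plain TCP handshake gives a rough round-trip latency estimate.
SERVER_HOST = "backend.example.com"
SERVER_PORT = 80

RX1_DELAY = 1.0  # seconds after the uplink ends (RECEIVE_DELAY1)
RX2_DELAY = 2.0  # seconds after the uplink ends (RECEIVE_DELAY2)

start = time.monotonic()
socket.create_connection((SERVER_HOST, SERVER_PORT), timeout=5).close()
rtt = time.monotonic() - start
print(f"TCP handshake RTT: {rtt * 1000:.0f} ms")

# One round trip roughly covers the uplink leg (gateway -> server) plus
# the downlink leg (server -> gateway); whatever is left of the receive
# window is the margin for server processing.
for name, delay in (("RX1", RX1_DELAY), ("RX2", RX2_DELAY)):
    margin = delay - rtt
    print(f"{name}: {margin * 1000:+.0f} ms of processing margin")
```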
We have the same issue over here. It happens at the margins of the full range, and unfortunately we haven’t had a chance to compare it with any other gateway. We have a Multitech Conduit as well, which would make a good comparison.
It’s strange that a signal transmitted at 11 dBm (with a max of 20 dBm) can be heard by the gateway on the uplink, yet a signal transmitted at 27 dBm cannot be heard by the end-node on the downlink. 16 dB is a lot of dB.
There are some known factors, such as the wider bandwidth on the downlink (500 kHz) compared to the uplink (125 kHz), which would cause around 7 dB of loss on the downlink. Imperfect RF parts on the end-node compared to a well-made gateway are another hunch. But none of these can explain this much performance loss given that large a difference in transmit power.
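For what it’s worth, the bandwidth part of that loss can be checked on the back of an envelope: thermal noise power scales with receiver bandwidth, so a 4x wider downlink costs about 6 dB of sensitivity. A small sketch (textbook figures, nothing measured here):

```python
import math

BW_UPLINK = 125e3    # Hz
BW_DOWNLINK = 500e3  # Hz

# Noise power grows with bandwidth: penalty = 10 * log10(BW2 / BW1)
penalty_db = 10 * math.log10(BW_DOWNLINK / BW_UPLINK)
print(f"bandwidth penalty: {penalty_db:.1f} dB")  # ~6.0 dB

# Thermal noise floor at room temperature: -174 dBm/Hz + 10*log10(BW)
for bw in (BW_UPLINK, BW_DOWNLINK):
    print(f"noise floor @ {bw / 1e3:.0f} kHz: "
          f"{-174 + 10 * math.log10(bw):.1f} dBm")
```

That lands close to the ~7 dB quoted above once you add a dB or so for the end-node’s less-than-perfect receiver.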
Unlike the uplink, timing also matters on the downlink, and that is our biggest guess as to how things can go wrong in this direction.
I’ll keep you updated if we get lucky and find the bug. Let us know how you get on.
The issue indeed seems to be caused by latency. We are running our own TTN backend on a server in Oregon. As our gateway is in NL, the latency was pretty high. After moving the TTN server to Frankfurt, the downlink seems to work fine so far.
I’m having these issues as well; can anyone confirm catuin’s finding that it is a latency issue?
We use a Kerlink on Ethernet and Brocaar’s loraserver network server. Our uplink success rate is above 90%, but downlink is below 50%. This is especially problematic in the join procedure, where we get a ‘no_free_channel’ reply from the RN2483 after 3 retries.
That topic explained a lot, but also assured me that we have no timing issue: tcpdump shows that downlinks are queued in the gateway almost immediately (0.1-0.5 s), both for join requests and uplinks. It also showed that our downlink power is 14 dBm.
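In case it helps anyone reproduce that measurement: the packet forwarder’s UDP protocol (documented in Semtech’s PROTOCOL.TXT) has a fixed 4-byte header, so it is easy to timestamp when PULL_RESP downlinks reach the gateway. A sketch; the demo datagram at the bottom is fabricated for illustration:

```python
import json
import time

# Identifiers from the Semtech packet-forwarder UDP protocol (byte 3).
IDENTIFIERS = {
    0x00: "PUSH_DATA",  # gateway -> server: received uplinks
    0x01: "PUSH_ACK",
    0x02: "PULL_DATA",  # gateway -> server: keepalive / downlink poll
    0x03: "PULL_RESP",  # server -> gateway: a downlink to transmit
    0x04: "PULL_ACK",
    0x05: "TX_ACK",
}

def decode(datagram: bytes) -> dict:
    """Decode the fixed header of a Semtech UDP datagram."""
    ident = IDENTIFIERS.get(datagram[3], "UNKNOWN")
    out = {"version": datagram[0], "token": datagram[1:3].hex(), "type": ident}
    if ident == "PULL_RESP":
        # A JSON body with the "txpk" downlink descriptor follows byte 4
        out["txpk"] = json.loads(datagram[4:])["txpk"]
    return out

# Fabricated example: version 2, token 0xBEEF, PULL_RESP with a tiny txpk
demo = bytes([2, 0xBE, 0xEF, 0x03]) + json.dumps(
    {"txpk": {"freq": 869.525, "powe": 14, "datr": "SF9BW125"}}).encode()
print(time.strftime("%H:%M:%S"), decode(demo))
```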
From here I have two questions:
How does Kerlink queue messages? If it gets a message that needs to be sent in 7 seconds, and then one second later another message that needs to be sent after 2 seconds, do both messages get sent? Also, how big is the queue buffer?
I also read somewhere that the gateway can send only one message at a time. If this is true, what happens if there are two messages that need to be transmitted at (nearly) the same time?
Could this be just a power issue? The nodes transmit at 14 dBm as well, so it seems unlikely, but I can’t think of a third possibility. Also, it is the maximum allowed in the EU…
That might be true for @brocaar’s server you’re using, but that indeed raises the first new question you have…
(For TTN, downlinks are only received at the gateway very shortly before they should be transmitted, which for a Join Accept is either 5 or 6 seconds after the Join Request. Probably for several good reasons.)
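To make that concrete: with the LoRaWAN defaults JOIN_ACCEPT_DELAY1 = 5 s and JOIN_ACCEPT_DELAY2 = 6 s, the budget looks roughly like this (the latency figures below are invented for illustration):

```python
# Rough timing budget for a Join Accept, assuming the LoRaWAN defaults.
JOIN_ACCEPT_DELAY1 = 5.0  # seconds, RX1 window for join
JOIN_ACCEPT_DELAY2 = 6.0  # seconds, RX2 window for join

uplink_latency = 0.15    # gateway -> server, hypothetical
downlink_latency = 0.15  # server -> gateway, hypothetical
gateway_margin = 0.10    # time the gateway needs to schedule the TX

for name, delay in (("RX1", JOIN_ACCEPT_DELAY1), ("RX2", JOIN_ACCEPT_DELAY2)):
    budget = delay - uplink_latency - downlink_latency - gateway_margin
    print(f"{name}: server may hold the Join Accept up to {budget:.2f} s "
          f"after the Join Request arrives")
```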
It just occurred to me that my last post may have an answer to my question…
If the gateway receives a few messages at approximately the same time, it will be able to handle them all and forward them to the NS, but the replies would then also need to be sent at approximately the same time. If the packet forwarder receives a second downlink before the gateway has sent the first one, that second downlink will be lost. This is true both for @brocaar’s NS and for TTN’s, or for any NS that does not send a downlink at the exact moment it needs to be transmitted by the gateway (and none can guarantee that), because the packet forwarder isn’t using its own queue (yet).
And the fact is, we did set up 10 nodes to send at approximately the same time, to see how many messages we would lose because of overlapping uplinks.
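A toy model of that failure mode (all timings invented): a forwarder that can hold only one pending downlink drops the second one, while a simple queue ordered by transmit time keeps both.

```python
import heapq

arrivals = [  # (time the downlink reaches the forwarder, time it must TX)
    (0.0, 1.0),
    (0.2, 1.2),  # arrives while the first is still pending -> dropped
]

def no_queue(events):
    """Forwarder with a single pending-downlink slot."""
    sent, pending_until = [], -1.0
    for arrive, tx_at in events:
        if arrive < pending_until:
            continue              # forwarder busy: downlink lost
        pending_until = tx_at
        sent.append(tx_at)
    return sent

def with_queue(events):
    """Forwarder keeping every downlink in a min-heap on TX time."""
    heap = []
    for arrive, tx_at in events:
        heapq.heappush(heap, tx_at)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print("no queue  :", no_queue(arrivals))    # only the first survives
print("with queue:", with_queue(arrivals))  # both are transmitted
```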
Then your problem might also be the duty cycle of the gateway. TTN Console will nicely tell you when it cannot find a gateway that can handle the downlink. (Which, for example, also happens when many nodes try to do OTAA at about the same time.) I don’t know what your server does in that case. (This is one of the reasons TTN’s fair access policy allows for only a few downlinks per day.)
Also beware that current gateways are half-duplex. When transmitting a downlink they cannot receive any uplinks. (Yet another reason for TTN’s fair access policy to limit the downlinks.)
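If you want to put numbers on the duty-cycle point: the standard LoRa time-on-air formula (Semtech AN1200.13) gives a rough ceiling on downlinks per hour. A sketch, assuming the EU868 869.4-869.65 MHz sub-band with its 10% duty cycle (other sub-bands allow only 1%, a tenth of this):

```python
import math

def time_on_air(payload_len, sf, bw=125e3, cr=1, preamble=8,
                explicit_header=True, crc=True):
    """LoRa time-on-air in seconds, per Semtech AN1200.13."""
    t_sym = (2 ** sf) / bw
    de = 1 if (bw == 125e3 and sf >= 11) else 0      # low-data-rate optimization
    num = 8 * payload_len - 4 * sf + 28 + 16 * crc - 20 * (not explicit_header)
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

toa = time_on_air(33, sf=9)   # 33-byte Join Accept at SF9/125 kHz, assumed
per_hour = 0.10 * 3600 / toa  # 10% duty-cycle sub-band
print(f"time on air: {toa * 1000:.0f} ms -> about {per_hour:.0f} downlinks/hour")
```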
…but TTN’s network server is waiting until the very last moment, allowing for a little network delay.
So we are almost positive that our problem is the lack of a Packet Forwarder queue; the math says the duty cycle should be no issue. When we test one node, our ACK rate is above 98%. With 3 nodes transmitting at (approximately) the same time it is around 85%, and it was around 50% when we had 10 nodes. However, we will not be sure until we have a gateway with a queue.
According to Kerlink, they will release a firmware upgrade with a queuing Packet Forwarder within 2 weeks! Hoping for some great results then.
Perhaps off topic, but I’m not aware of any other manufacturer working on this, so if anyone has any info I’d like to hear about it, especially for Multitech.
I’m also aware that TTN is planning to replace its Packet Forwarder with a different handler, but I don’t know how it would work: would it need a queue to avoid these situations, and would it have one?
But again, @MAlen, the problem you’re facing is not an issue for TTN. TTN nicely waits until the very last moment (for some even too long, when network latency is high).
So, TTN basically implements the queue server-side and does not need the packet forwarder to be smart; apparently things are different in the non-TTN server you’re using. (And maybe for Loriot as well.)
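A minimal sketch of that server-side idea (the names and the safety margin are invented): hold each downlink in a queue keyed by its transmit time, and release it to the gateway only just before it is due.

```python
import heapq
import time

SAFETY_MARGIN = 0.3  # seconds before TX time to hand off to the gateway

queue = []  # min-heap of (tx_time, payload)

def schedule(tx_time, payload):
    """Queue a downlink for a given absolute transmit time."""
    heapq.heappush(queue, (tx_time, payload))

def run(send_to_gateway):
    """Release each downlink SAFETY_MARGIN seconds before its TX time."""
    while queue:
        tx_time, payload = queue[0]
        wait = tx_time - SAFETY_MARGIN - time.monotonic()
        if wait > 0:
            time.sleep(min(wait, 0.05))  # poll; a real NS would use a timer
            continue
        heapq.heappop(queue)
        send_to_gateway(payload)

# Demo: two downlinks queued out of order are released in TX-time order.
now = time.monotonic()
schedule(now + 2.0, "downlink A")
schedule(now + 1.0, "downlink B")
run(lambda p: print(f"{time.monotonic() - now:.2f}s: sent {p}"))
```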
You can get a queuing packet forwarder for Kerlink at this site. You will need to put the files in the correct location on the Kerlink yourself. It supports both the new TTN protocol and the Semtech protocol.