LoRa <> LoRa. With a few changed parameters, a 70 ms airtime quickly becomes 142 ms. The picture below shows my question in detail. How are the parameters set for LoRaWAN? The 13-byte header is clear. I want 22 bytes of user data, 35 bytes in total. What about coding rate, low data rate optimization, header mode and CRC? The sensitivity does not change. Why so many parameters, then?
Thanks in advance for advice.
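For reference, the time-on-air can be computed from the packet-duration formula in Semtech's SX1276 datasheet. Here is a minimal Python sketch; the defaults (125 kHz bandwidth, 8-symbol preamble, explicit header, CRC on) are my assumptions, not values read from the picture:

```python
import math

def lora_airtime_ms(payload_len, sf, bw_hz=125_000, cr=1, preamble=8,
                    crc=True, implicit_header=False, low_dr_opt=False):
    """LoRa time-on-air per the SX1276 datasheet formula.
    cr = 1..4 selects coding rate 4/5 .. 4/8."""
    t_sym = (2 ** sf) / bw_hz * 1000.0   # symbol duration in ms
    de = 1 if low_dr_opt else 0          # low data rate optimization flag
    ih = 1 if implicit_header else 0     # implicit header flag
    num = 8 * payload_len - 4 * sf + 28 + (16 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

# 35-byte packet (13-byte header + 22 bytes of user data) at SF7 / 125 kHz:
print(lora_airtime_ms(35, sf=7, cr=1))  # coding rate 4/5
print(lora_airtime_ms(35, sf=7, cr=4))  # coding rate 4/8 -> noticeably longer
```

With this 35-byte packet at SF7, moving the coding rate alone from 4/5 to 4/8 already adds roughly 40% airtime, which illustrates why small parameter changes produce jumps like the one in the question.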
Nice to see you here, Harald.
Header mode, CRC and also the preamble length setting are described in the LoRaWAN spec.
Semtech’s AN recommends enabling Low DR optimization when using SF11 and SF12.
Do not use CR 4/8 with SF12 and longer payloads, as the airtime becomes too long and on some stacks you may get a timeout error instead of your data. It is also unclear whether the SX1301 handles packets with a time-on-air above 3 seconds correctly.
HoBo, thx for jumping in. SF11 and SF12 are a no-go. SF11 or SF12 with Low DR optimization is a nightmare: the time on air and the energy consumption are much too high.
Are the header mode, CRC and preamble length settings fixed by the LoRaWAN spec and the same in all networks? In France we have two public LoRaWAN networks, TTN on LoRaWAN, and maybe some local operators.
SF7 is okay, maybe SF8. I have to handle 400 nodes transmitting each minute. I am thinking about what you would call a single gateway with a LoRa-plus-TDMA stack. 60,000 ms / 400 = 150 ms per slot. And 70 ms x 60 messages = 4200 ms = 4.2 seconds per hour. On a 1% duty cycle, 36 seconds per hour are allowed. With SF6 I will achieve the 0.1% duty cycle.
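The slot and duty-cycle arithmetic above can be sketched in a few lines (the 70 ms per-packet airtime is the figure used earlier in the thread, here taken as an assumption):

```python
nodes = 400
slot_ms = 60_000 / nodes             # one-minute cycle split into per-node TDMA slots
airtime_ms = 70                      # assumed per-packet time-on-air
hourly_airtime_ms = airtime_ms * 60  # one packet per minute, per node

duty_cycle = hourly_airtime_ms / 3_600_000
print(slot_ms)                       # 150.0 -> 150 ms slot per node
print(hourly_airtime_ms / 1000)      # 4.2   -> 4.2 s airtime per hour
print(f"{duty_cycle:.3%}")           # well under the 1 % limit (36 s/hour)
```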
And on the SX126X you can also use SF5. Btw, SX127X SF6 is not compatible with SX126X SF6.
Yes, at least in all networks in the same region these settings should be the same (although you can actually use a somewhat longer preamble and nothing bad will happen).
So, you’re building your own Link Labs Symphony, but for EU?
Link Labs ran into trouble with the 10% duty cycle at the gateway. I do not need a very accurate beacon all the time; a synchronization once a day, or every 6 to 12 hours, is fine. The drift does not matter. It will be mainly upload and less download data. Link Labs planned continuous upload and download.
However, the same hardware should run on LoRaWAN as well. LoRaWAN gives us 82% packet loss at maximum throughput. Slotted Aloha will cut that loss by 50%. A simple TDMA will avoid collisions with friends on air.
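The 82% and 50% figures line up with the classic Aloha throughput formulas: pure Aloha peaks at G·e^(−2G) ≈ 18.4% channel utilization (so about 82% of capacity is lost to collisions at saturation), while slotted Aloha peaks at G·e^(−G) ≈ 36.8%, exactly double. A quick back-of-envelope check:

```python
import math

def pure_aloha_throughput(g):
    """Expected successful transmissions per packet time at offered load g."""
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g):
    return g * math.exp(-g)

# Pure Aloha peaks at G = 0.5; slotted Aloha peaks at G = 1.
print(round(pure_aloha_throughput(0.5), 3))     # 0.184
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368
```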
How much re-design of the TTN would that require ?
It seems that Mr. Naumann doesn’t worry too much about the server side, so I don’t think any redesign will be required.
Just to let you know that I have been in IoT and M2M for 25+ years. We did M2M at a time when the word M2M had not yet been coined. We have run services for big German utility companies for ages. Our latest NB-IoT projects are on Microsoft Azure and Huawei Connect Blue.
The server is not a big problem. My problem is packet collisions. I have to transmit a 35-byte message each minute from 400 nodes. With slotted Aloha the number of collisions will be 50% less. With a simple TDMA, the friendly Aloha fire will be zero.
I suspect most of the forum readers will be aware of what TDMA is and how it works, and I know of examples where it’s been used with LoRa.
But how is this connected with TTN ?
The same application can run on any LoRaWAN network like TTN, with a LoRaWAN stack on the LoRa module, and on the TDMA stack on LoRa as well. 60 messages per hour of 35 bytes from 400 nodes is something that LoRaWAN cannot handle. 60 messages per hour of 35 bytes is light-years away from the fair use policy of TTN. I will calculate it for both stacks.
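A rough check against TTN's fair-access policy, commonly stated as about 30 s of uplink airtime per node per day (the 70 ms per-packet airtime is the figure used earlier in the thread, taken here as an assumption):

```python
ttn_fair_use_s_per_day = 30   # TTN fair-access guideline: ~30 s uplink airtime/day/node
airtime_ms = 70               # assumed per-packet time-on-air
msgs_per_day = 60 * 24        # one 35-byte message per minute

used_s = msgs_per_day * airtime_ms / 1000
print(f"{used_s:.1f} s/day used, "
      f"{used_s / ttn_fair_use_s_per_day:.1f}x the fair-use budget")
```

So one message per minute uses over 100 s of airtime per day, more than three times the fair-use budget, before even multiplying by 400 nodes.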
What’s wrong if someone is going to develop a TTN-compatible solution, or is evaluating the possibilities?
I gave my customers the freedom to run LoRaWAN or a LoRa stack on TDMA. When it comes to throughput, TTN and LoRaWAN are not the best option. However, this will be decided by the customers.