This is likely to fit within airtime allowances only if the range is close enough to use faster spreading factors.
You need to consider not only the packet duration, but also that splitting the payload incurs protocol overhead in each resulting packet.
It’s not permitted to ack every uplink packet; doing so would quickly exhaust your downlink airtime allowance. As a result, if you want to ack things, you’re going to have to do a much less frequent ack covering a whole lot of uplinks, showing which were received and which are missing - e.g. a bitmap covering 128 or 256 uplinks.
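A minimal sketch of that bitmap idea (Python, all names hypothetical): the backend packs one bit per uplink into a small downlink payload, and the node decodes it to find what needs resending.

```python
# Hypothetical sketch: pack the receipt status of a window of uplinks
# into a compact downlink payload, one bit per frame counter.

def build_ack_bitmap(received_fcnts, base_fcnt, window=128):
    """Backend side: bit i is set if frame (base_fcnt + i) was received."""
    bitmap = bytearray(window // 8)          # 128 frames -> 16 bytes
    for fcnt in received_fcnts:
        offset = fcnt - base_fcnt
        if 0 <= offset < window:
            bitmap[offset // 8] |= 1 << (offset % 8)
    return bytes(bitmap)

def missing_fcnts(bitmap, base_fcnt):
    """Node side: list the frames the backend says it never got."""
    missing = []
    for i in range(len(bitmap) * 8):
        if not bitmap[i // 8] & (1 << (i % 8)):
            missing.append(base_fcnt + i)
    return missing
```

A 128-uplink window costs only 16 bytes of downlink payload, which is why one such downlink can stand in for a great many individual acks.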
In the end, there’s a fair probability that your use case is not a fit for TTN.
Is there a specific protocol you are referring to?
We plan to send the ACK only for the first message, where the first message contains the details of the following messages.
Once the entire data is received, we send details of the missing messages. This loop continues until all the messages are received.
Please suggest if there is any pre-existing protocol which can be used for this purpose.
This is exactly what we plan to do. “You could write the code to split the payload in to smaller manageable chunks. And then write the code to handle resends” Can you guide us further on this? Is there a pre-existing protocol or library to do this?
After compression and data optimization our data is around 1KB.
Splitting data should be done by your firmware guy.
It’s pretty easy: SF9/125 kHz is 1760 bits/sec with a maximum payload size of 53 bytes.
So divide 1024 bytes into 53-byte packages, send each package, after the last package ask the server which ones it got and didn’t get, then you can re-send the missing pieces.
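Assuming a 53-byte payload budget with one byte reserved for a sequence number, the splitting and resend bookkeeping might look like this (illustrative Python, not firmware code; names are made up):

```python
# Illustrative sketch: split a payload into fixed-size chunks with a
# one-byte sequence header, and pick out chunks to resend from a set of
# missing sequence numbers. 52 data bytes + 1 header byte = 53 bytes.

CHUNK_SIZE = 52

def split_payload(data):
    """Return a list of chunks, each prefixed with its sequence number."""
    chunks = []
    for seq, start in enumerate(range(0, len(data), CHUNK_SIZE)):
        chunks.append(bytes([seq]) + data[start:start + CHUNK_SIZE])
    return chunks

def resend_list(chunks, missing_seqs):
    """Select only the chunks the server reported as missing."""
    return [c for c in chunks if c[0] in missing_seqs]
```

For a 1024-byte payload this yields 20 uplinks (19 full chunks plus a 36-byte tail).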
@descartes If stack space is at a premium you could always just re-parse your data set to find the missing package(s). No need to store all the packages as separate objects in RAM, assuming the data set is being stored locally in memory.
Also, 8192 bits (plus overhead) transmitted at 1760 bits/sec with a 1% airtime usage is going to take 780+ seconds. Not that bad for a once-daily transmission, but not allowed within TTN. If it’s a fixed station with good reception you can bump up the bandwidth or use auto bandwidth.
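A back-of-envelope check of those numbers (assumptions: 1760 bit/s raw rate, roughly 13 bytes of LoRaWAN framing per uplink, preambles ignored, which is why it comes out somewhat under the 780 s figure):

```python
# Rough arithmetic only - preamble and inter-packet gaps not modelled.
PAYLOAD_BYTES = 1024
CHUNK = 53
OVERHEAD = 13          # approx. LoRaWAN MHDR + FHDR + FPort + MIC per uplink
BITRATE = 1760         # SF9 / 125 kHz, bits per second

n_packets = -(-PAYLOAD_BYTES // CHUNK)              # ceiling division
total_bits = (PAYLOAD_BYTES + n_packets * OVERHEAD) * 8
airtime_s = total_bits / BITRATE                    # seconds actually on air
elapsed_at_1pct = airtime_s * 100                   # wall-clock at 1% duty cycle
print(n_packets, round(airtime_s, 1), round(elapsed_at_1pct))
```

About 20 packets and under 6 s of actual airtime; the hundreds of seconds come entirely from spreading that airtime out at a 1% duty cycle.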
TTN fair use policy is 30 s per day uplink airtime with max 10 downlinks… so several weeks if not breaching FUP. Also, just ’cause the regs say you can run up to a 1% duty cycle does not mean you should… if everyone went to the full 1% the airwaves would be jammed. You also need to allow for collisions & interference driving retries, so much longer may be needed! For a more robust and socially responsible scenario, design your application for say 0.1%, with only rare and exceptional bursts above…
The proposal was actually for a hair under 8 seconds of airtime a day, the 780 seconds being the period of time such a transfer would take at 1% duty cycle. Doing it yet more spaced out throughout the day would indeed be wiser.
Also, with regard to that proposal, there’s no reason to “ask” the data backend which packets were received. Since every LoRaWAN uplink is an implicit “do you happen to have anything for me?” question, the data backend should very infrequently respond with a downlink carrying a bitmap listing those present/still needed, especially when that tally has notably changed. A good design would let the bitmap stretch a few days into history, such that only maybe 4 downlinks a day are needed to re-mention packets which are still missing, maybe even fewer.
It’s the sort of thing that with real care to get right might fit, but with the slightest mistake or adverse conditions it won’t.
Really one should start by evaluating the unstated application purpose against the idea of LoRa - in particular the tolerable latency has to be considered.
As long as such critical information is missing, best guess is that the application need is not a fit for TTN or even LoRa at all.
Actually hang on a mo - just realised what triggered my FUP reaction. This is assuming SF9.
Which is fine if the SF is constrained in such a way (SF7, 8 or 9 only), or if the OP is, say, in the US, where max SF is limited to SF10 due to dwell-time limits. However, in most other regions the use of SF11 or SF12 is also possible in a true LoRaWAN system - and indeed likely if the node is a significant distance from the GW, or if the local GW goes offline or is masked such that the node falls back to using a more distant one with a higher SF in play. Given the approx 2x ToA for each SF increase, at SF11 or SF12 the FUP would indeed be breached! If some messages are lost and retries are needed, then even SF9 or SF10 might have a problem depending on conditions.
Breaking into smaller payload sizes would not help, as the overhead on each smaller message would then make matters worse. The fact is you would need to see what could be packed into the longest payload when assuming SF12, and then potentially apply some compression techniques.
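To put numbers on the “2x ToA per SF step” point, here is the standard Semtech time-on-air formula (sketch assuming 125 kHz bandwidth, CR 4/5, explicit header, CRC on, 8-symbol preamble, and low-data-rate optimisation at SF11/12):

```python
import math

def time_on_air(payload_bytes, sf, bw=125_000, cr=1, preamble=8):
    """Seconds on air for one LoRa packet (Semtech ToA formula)."""
    t_sym = (2 ** sf) / bw                       # symbol duration
    de = 1 if sf >= 11 else 0                    # low-data-rate optimisation
    num = 8 * payload_bytes - 4 * sf + 28 + 16   # explicit header, CRC on
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

for sf in (9, 10, 11, 12):
    print(f"SF{sf}: {time_on_air(51, sf) * 1000:.0f} ms")
```

A 51-byte payload costs roughly 0.3 s at SF9 but about 2.5 s at SF12, so twenty-odd packets at SF12 alone would blow well past the 30 s/day fair use allowance.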
That’s exactly what I meant by the whole idea possibly being workable in some situations but failing with the slightest implementation mistake or adverse conditions.
My guess is that the asker also has a latency requirement which is going to preclude the whole plan.
Application brief description: We have a prototype board which is connected to multiple sensors and currently sends the data over 4G to the cloud. This board is to be kept in rural areas where we have connectivity issues, so we are trying to use LoRa for this purpose.
Our prototype has enough memory on board to store data for multiple days.
We are in India and believe SF11 and SF12 are allowed.