After some testing I noticed that ADR is setting the spreading factor on some nodes to SF11 and SF12. Is there a way to cap LMIC's ADR at SF10? Otherwise there is always a chance of exceeding the fair use policy.
Sure, you can override it, but then you run the risk that the device can't be heard by a gateway. ADR didn't set it "just because".
So the best thing to do is put in an infill gateway where it will help keep the SF down - and dramatically increase battery life.
And the LoRa-Alliance requires operators to limit the use of SF11 & 12, so a good move overall.
Sure, I agree with you. But with the constraints of the fair use policy, SF11 and SF12 make absolutely no sense. Correct me if I'm wrong, but 13 bytes of header and 1 byte of payload is 14 bytes. Sending at SF11 forces roughly 30-minute intervals, SF12 roughly 60. So building a device with 15-minute intervals and ADR activated is risky.
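As a sanity check on those interval figures, here is a minimal sketch using the standard Semtech SX127x airtime formula, assuming 125 kHz bandwidth, coding rate 4/5, explicit header, 8-symbol preamble, and a 14-byte PHY payload (the function names are mine, not from any library):

```cpp
#include <cassert>
#include <cmath>

// Estimate LoRa airtime in ms (SX127x datasheet formula): 125 kHz BW,
// coding rate 4/5, explicit header, 8-symbol preamble. phy_len is the
// full PHY payload (LoRaWAN overhead + application payload).
double airtimeMs(int sf, int phy_len) {
    const double bw = 125000.0;
    const int de = (sf >= 11) ? 1 : 0;  // low-data-rate optimisation on
    const double tSym = std::pow(2.0, sf) / bw * 1000.0;
    double n = std::ceil((8.0 * phy_len - 4.0 * sf + 28 + 16)
                         / (4.0 * (sf - 2 * de))) * 5.0;  // CR 4/5
    if (n < 0) n = 0;
    return (12.25 + 8 + n) * tSym;  // preamble + payload symbols
}

// Minimum uplink interval (seconds) to stay inside TTN's 30 s/day
// fair use allowance: interval >= airtime * (86400 / 30).
double minIntervalS(int sf, int phy_len) {
    return airtimeMs(sf, phy_len) / 1000.0 * (86400.0 / 30.0);
}
```

For a 14-byte PHY payload this gives roughly 660 ms per uplink at SF11 (a minimum interval of about 32 minutes) and roughly 1155 ms at SF12 (about 55 minutes), which matches the 30/60-minute figures above.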
I did write some code for LMIC that adjusted the TX interval to keep within the FUP, but there was little (as in zero) interest in it.
Can you re-read this and give it some thought / investigation because you seem to have totally missed the point:
To expand on it just in case it’s not obvious, there is no point setting the SF to a value that stops the device being heard - you can make the change in firmware easily enough in the main loop - but it will then be a small lump of plastic that transmits but won’t be heard.
So you can solve the problem of airtime at the expense of ultimate functionality. The remedy is about reducing SF by getting the devices closer to a gateway - which is most likely done by adding gateway(s).
Or reduce the uplink interval. Any reason why that can’t be done?
Far be it from me to suggest that anyone other than you, me, @andonoc and a select few keep to the FUP, so not entirely surprising.
Care to share?
Don’t recall this being called out before - may have missed it. A nice-to-have (even a must-have?) for TTN-connected, LMIC-based designs, if easily integrated into general firmware without breaking changes. Which library version/implementation had you developed/tested with?
The FUP control came about as a result of dealing with the duty cycle issue in the LMIC low power library;
Once the duty cycle problem had a work around it was not difficult to estimate the air time at each transmission using the current SF and payload size and adjust the sleep interval accordingly.
Whilst not difficult to do, you had to want to implement the code, and most people are not really interested in keeping to the FUP.
The FUP adjustment did not kick in until, by default, 5 transmissions after join.
But both scenarios require the same action. Using SF12 is not sustainable and missing packets are not sustainable; the remedy is always adding another gateway, and that takes time. So I think the “miss some packets” scenario is better than causing trouble by exceeding the FUP, because it needs no urgent action.
It’s always hard to agree with a strategy without any context, which is why I asked what the downside of uplinking less frequently would be.
But there still seems to be a conceptual problem that you’ve sort of acknowledged. If a device has ended up on SF12 as a result of ADR and you force it to SF10, it seems quite likely that you will experience more than the occasional missed packet - quite likely all of them, in fact.
@LoRaTracker is the man to know what the reduction in range is likely to be.
Implementing a change to LMIC and testing seems likely to take as long as buying a TTIG and finding somewhere to host it.
How is your own gateway situated? What antenna does it have? Can it have its antenna in a better position?
Understanding that you may well stop your device stone dead, the simple answer is to put LMIC_setDrTxpow(DR_SF10,14);
just before LMIC_setTxData2(1, payload, sizeof(payload), 0);
to hot wire the SF. AFAIK the power value isn’t implemented but is required.
With LMIC, the changes to cope with FUP, are hardly difficult.
Why?
Also @andonoc, SF isn’t ‘just’ about range - it impacts sensitivity, which can be a factor in range, but also penetration. Another factor often overlooked with LoRa modulation is the impact of stepping the SF on noise immunity. IIRC, where legacy modulations need positive SNRs, LoRa can operate below the local noise floor, with a capability that varies with the chosen SF. ADR can therefore adjust SF not only to reflect a suitable RSSI plus a network-determined safety headroom; it can also adjust to compensate for impinging noise and a poor noise floor.

Again IIRC, where SF7 is good down to approx -7 dB SNR, the improved sensitivity of higher SFs (longer symbols allowing more time to discriminate and correlate) means SF12 is good down to approx -20 dB. So even with a good RSSI and some headroom, if the local RF environment is troublesome a higher SF may still be recommended to compensate. Forcing SF10 where the system is flagging SF11 or SF12 may mean you have another variable that will kill reception.

Best advice is therefore to adjust the timing to suit the higher SF under the FUP: fewer packets but a much greater chance of getting through, and, depending on how the application is constructed and controlled, less need for recovery or retries (which also limits spectrum use by minimising overall on-air time).
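Those SNR figures follow a simple pattern: each SF step buys roughly 2.5 dB of demodulation margin. A minimal sketch of the typical SX127x datasheet values (approximate, not guaranteed limits):

```cpp
#include <cassert>

// Approximate demodulation SNR limit (dB) per spreading factor,
// per typical Semtech SX127x datasheet figures: SF7 ~ -7.5 dB down
// to SF12 ~ -20 dB, i.e. about 2.5 dB more noise immunity per step.
double snrFloorDb(int sf) {
    return -7.5 - 2.5 * (sf - 7);
}
```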
The datasheet sensitivity difference for a LoRa device is that SF12 has 5 dB more sensitivity than SF10. 6 dB of extra sensitivity would be double the range.
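The "double the range" figure follows from free-space path loss, where received power falls with the square of distance, so every 6 dB of extra link budget roughly doubles range. A one-line check:

```cpp
#include <cassert>
#include <cmath>

// Under free-space path loss, received power falls as 1/d^2, so a
// link-budget gain of g dB multiplies the achievable range by 10^(g/20).
double rangeRatio(double gainDb) {
    return std::pow(10.0, gainDb / 20.0);
}
```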
In the code I published for the LMIC low power library, which worked around the duty cycle problem and allowed the node (a SEEED XIAO SAMD21 in this case) to go into a very low current deep sleep (5uA), the FUP adjustment was tied up with the deep sleep code.
However in a normal LMIC setup there is a line like;
const unsigned TX_INTERVAL = 30;
Now you might question why the code examples for the libraries are such a major breach of the FUP, but if you make TX_INTERVAL a global variable, you can adjust it at every transmission to take account of the SF in use (LMIC.dndr) and the payload length in use (sizeof(payload)).
An exact calculation of airtime is not so simple, since the airtime can be the same for a 20-byte packet as for a 22-byte packet, due to the vagaries of how LoRa works. However, if you take the airtime of the minimum packet and the maximum packet, you can derive an average airtime per payload byte, and with these constants it's not difficult to estimate airtime; then it's easy enough to adjust TX_INTERVAL to keep close to the FUP.
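A sketch of that per-byte approximation. The constants below are illustrative values I pre-computed offline from the airtime of a 1-byte and a 51-byte PHY payload at 125 kHz, CR 4/5; they are not from the published code:

```cpp
#include <cassert>
#include <cmath>

// Illustrative per-SF linear-fit constants (ms), pre-computed from the
// airtime of a minimum and a maximum sized packet at 125 kHz, CR 4/5.
// Index 0 = SF7 ... index 5 = SF12. Values are approximate.
const double BASE_MS[6]     = {28.9, 48.2, 97.7, 197.0, 395.7, 794.6};
const double PER_BYTE_MS[6] = {2.05, 3.48, 5.73,  9.83, 18.02, 32.77};

// Estimate airtime from SF and PHY payload length, then derive the TX
// interval (seconds) that keeps the node inside a 30 s/day allowance.
unsigned fupIntervalS(int sf, int phyLen) {
    double airtimeMs = BASE_MS[sf - 7] + PER_BYTE_MS[sf - 7] * phyLen;
    return (unsigned)std::ceil(airtimeMs / 1000.0 * (86400.0 / 30.0));
}
```

With these constants a 14-byte packet at SF12 works out to an interval of roughly an hour, and the same packet at SF10 to roughly 16 minutes, so the adjustment tracks the SF automatically at every transmission.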
A temperature sensor sending 2 bytes every 10 minutes is limited to SF9 under the FUP. So, from what I understand, I should NOT allow the device to go to SF10. If I lose connectivity completely, I know to add another gateway or reposition the node. What is strange about my approach? How did you handle those situations?
If the node is in a location where it can end up using SF12, set the send interval to 60 minutes or so.
Do you really need to know temp that often? What is your application?
spring frost detection for orchards
I found the ADR margin setting in the device settings! I'll go and play around with these numbers.
8 minutes. But many temperatures don’t change that fast or need updating that fast …
OK, so send the temperature every hour, then more frequently as it gets closer to the frost level, and only when there is a significant change (relative to the accuracy & precision of the temperature sensor). The FUP is measured over 24 hours, so more frequent uplinks as there are significant changes in the wrong direction can be accommodated.
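One hedged sketch of that send-on-change policy - the watch threshold, change threshold, and minimum spacing here are made-up example values, not figures from this thread:

```cpp
#include <cassert>
#include <cmath>

// Decide whether to uplink now: an hourly heartbeat regardless, plus
// faster reports when near the frost level AND the reading has changed
// significantly. All numeric values are illustrative only.
bool shouldSend(double tempC, double lastSentC, unsigned minsSinceSend) {
    const double FROST_WATCH_C = 3.0;  // start watching below this
    const double SIG_CHANGE_C  = 0.5;  // vs. sensor accuracy/precision
    if (minsSinceSend >= 60) return true;                // hourly heartbeat
    bool nearFrost = tempC <= FROST_WATCH_C;
    bool changed   = std::fabs(tempC - lastSentC) >= SIG_CHANGE_C;
    return nearFrost && changed && minsSinceSend >= 10;  // FUP-friendly floor
}
```

Averaged over 24 hours this stays well inside the FUP in mild weather, while still giving near-real-time updates as the temperature falls toward frost.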
Like many threads, you've got access to some people that have solved problems like this more times than they've had hot dinners, but getting enough information to advise is like pulling teeth. Be more open to providing information without us asking, and less quick to feel that the proposed solution won't work.
I send ‘normal’ weather temperatures as one byte with 1 decimal place - which, given the accuracy of the average sensor, is more than sufficient. I can alter the range using the port numbers. But TBH, providing overlapping coverage for anything that’s being monitored that can cost money is a no-brainer.
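The exact encoding isn't given, but one way to read "one byte with 1 decimal place, range shifted by port number" is tenths of a degree over a 25.5-degree window, with the port selecting the window's base. The base values below are my guesses for illustration, not the actual scheme:

```cpp
#include <cassert>

// Hypothetical scheme: each port covers a 25.5 C window in 0.1 C steps;
// the port number selects the window's base temperature.
double portBaseC(int port) { return -30.0 + (port - 1) * 25.5; }

unsigned char encodeTempC(double tempC, int port) {
    double ticks = (tempC - portBaseC(port)) * 10.0 + 0.5;  // round
    if (ticks < 0)   ticks = 0;    // clamp to the port's window
    if (ticks > 255) ticks = 255;
    return (unsigned char)ticks;
}

double decodeTempC(unsigned char b, int port) {
    return portBaseC(port) + b / 10.0;
}
```

A decoder on the application server would pick the base from the uplink's port the same way, so one payload byte still round-trips to 0.1 C precision.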
So I’d ask what the temperature sensor is, hoping against hope that you just tell us what it is on your next post.
And if this is a commercial orchard, or one you get a reasonable crop from, installing a gateway on cellular is a next-day-delivery away - you can then figure out a longer term plan but you will have some breathing room.
If your install is for commercial use, shhhh, I didn’t tell you this, but you could use your one gateway and your temperature sensor off FUP by registering for a Discovery instance on TTI.