Dragino LSN50-v2 spreading factor

I want SF12 not SF7…

Where should I start to explain the dilemma or situation?

We place 2 or 3 gateways in an area, like @the_muck does, to have redundancy for our nodes, hoping that if we have loadshedding we will still catch our node TX every 15-20 min.

Loadshedding is divided up into areas - it could be a suburb, a town, or an entire rural area.

Now the issue.

  1. Even though I can put my gateway on a UPS, the ISP only has backup power for a limited time. So now I add a GSM backup to the gateway, but the GSM provider also has only limited backup power, or the batteries are stolen. So hopefully this will fail over to another GSM site.

  2. Loadshedding can happen once a day for 2-2.5 hours, or even more hours per day. We are currently on stage 4, so it is 2-3 times per day.

  3. After power returns, the batteries need time to recharge. Most systems take 4+ hours to be fully charged again.

  4. Your failover GSM site is up, but its upstream node is on loadshedding.

And so the complexity of where and how you get your gateway to be reliable continues.

Your only hope is that your 2nd gateway is not affected and that your node very soon realizes there is a 2nd gateway capable of carrying this small packet to the application server.

Or the next option is to hire someone to walk around and make sure the fridge temperature you want to monitor is within limits, as its power is off as well.

Then TTN or the community must define what is good and perfect! And there must be guidelines for the gateway operator ;). We are talking about 24/7 gateway service, spare parts for the gateway & antenna, electrical & data backup infrastructure + spare parts, something like a cold standby or hot spare situation? … feel free! If I am right, 365*6 = 2190 d, and we are talking about ~4 d of downtime… then we have 99.81% uptime on first-world infrastructure… and → https://www.youtube.com/watch?v=JWat4nqI2Ik (Germany 2021): data & electricity infrastructure 100% down for up to 4 weeks :D, feel free to calculate that downtime. Starlink backup? Sure, a solution for private gateway operators :wink: to put data into the TTN network under “perfect” conditions, with solar and wind energy; diesel was limited.

But that is another discussion. I think my gateways are in the downtime range of < 1%, and in my opinion the other private TTN gateways in Europe are at the same level.

In my opinion that is not important for a solution ;)… when someone wants to check their jacuzzi temp every 20 min, why not? And as a gateway operator I don't know what the other users do with their nodes, and as a node operator I don't know how stable the gateways or the data link are.

What can I do now, when I do both and need a backup LoRa uplink?

  1. use ADR mode and put the device / gateway in another location
  2. set the device to a fixed DR within the rules of the Fair Use Policy
  3. use another way…

We know the limitations of LoRa & TTN, and that was the starting question - or better, the limitation of the ADR mode!

So I check my gateway status with different servers, and the number of gateways that received the node's signal. With this setup I get a feeling for how stable my setup will be in the future.

Haven't tried it myself - it's s/w, Nick! :wink: - but I have seen clients, collaborators and others use a mix-and-match approach on the data uplinks. Typically, with nodes in close enough proximity and good enough densification of the local GW network, nodes monitoring say T&H on a 15 or 20 min cycle - perhaps gathering data / alarming if a walk-in fridge door has been left open too long - send normally at SF7 or SF8. Then, as a precaution in case a local GW or its associated network backhaul goes offline for a short period (hours, maybe a day), and to avoid having to use confirmed messages, they arrange to send a key-stats message at SF10 once every 2 or 3 hours, and an 'I'm still alive' message at SF12 every 6-8 hrs (so that's 3 or 4x per day), checking that the use case doesn't break the FUP depending on the scenario used. The higher SFs should be picked up by further-away GWs not affected by the GW outage or local power or connectivity issues. The key-stats message, as I understand it, includes things like min, max and average + the number of 'alarm' events since the last key-stats message. They don't bother sending e.g. battery level in it, as that goes with the regular updates. That way the user at least gets 'some' useful data to add to the normal Txs before and after any data path hiccups…
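
A minimal sketch of that tiered cadence, assuming hypothetical `send_uplink()`, `read_sensor()` and `encode_*()` helpers (none of these names come from this thread, and a real node would drive this from its firmware scheduler rather than a sleep loop):

```python
import time

# Hypothetical tiered-uplink scheduler: regular readings at SF7,
# key stats at SF10 every 3 h, "I'm still alive" at SF12 every 8 h.
REGULAR_PERIOD = 20 * 60        # 20 min
KEY_STATS_PERIOD = 3 * 3600     # every 3 h
ALIVE_PERIOD = 8 * 3600         # every 8 h

def run(send_uplink, read_sensor, encode_regular, encode_key_stats):
    last_key_stats = last_alive = time.monotonic()
    history = []                                        # readings since last key-stats uplink
    while True:
        reading = read_sensor()
        history.append(reading)
        send_uplink(encode_regular(reading), sf=7)      # normal path, nearby gateway

        now = time.monotonic()
        if now - last_key_stats >= KEY_STATS_PERIOD:
            stats = (min(history), max(history), sum(history) / len(history))
            send_uplink(encode_key_stats(stats), sf=10)  # should reach farther gateways
            history.clear()
            last_key_stats = now
        if now - last_alive >= ALIVE_PERIOD:
            send_uplink(b"\x01", sf=12)                  # tiny "still alive" beacon
            last_alive = now

        time.sleep(REGULAR_PERIOD)
```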

If you're going to use SF12, you need to cut the transmissions to about once per hour…
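
To put a number on that: the standard LoRa time-on-air formula (Semtech AN1200.13) gives roughly 1.5 s per uplink at SF12BW125 for a small payload, so TTN's 30 s/day Fair Use airtime allowance is used up by about 20 such uplinks. A quick check, assuming a 12-byte application payload plus the usual 13 bytes of LoRaWAN overhead:

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                    preamble_syms=8, header=True, crc=True):
    """Time on air per Semtech AN1200.13 (LoRa modem designer's guide)."""
    t_sym = (2 ** sf) / bw_hz * 1000.0                      # symbol time, ms
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0        # low data rate optimisation
    ih = 0 if header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    payload_syms = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + payload_syms) * t_sym

# 12-byte application payload + 13 bytes LoRaWAN overhead = 25-byte PHY payload
toa = lora_airtime_ms(25, sf=12)
print(f"SF12BW125, 25 B: {toa:.0f} ms per uplink")          # ~1483 ms
print(f"uplinks/day within 30 s FUP: {30_000 // toa:.0f}")  # ~20, i.e. roughly 1 per 72 min
```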

Perhaps reviewing the tips and tricks shared by the inventor of LoRa at the 2019 conference might be useful: Extending LoRaWAN's reach - Nicolas Sornin (Semtech) - The Things Conference 2019 - YouTube

It is if it speaks to the motivation of the community. If this orphaned device is monitoring something socially useful / important, then the community will be more inclined to dig into the code base of both gateway & device to work to ameliorate the problem.


The tone of this discussion appears to have gone somewhat off-piste, amplified by some of the rebuttals.

I for one am happy to see someone fix the DR as an interim solution, but as I said above, it is much more community-minded to investigate and look at pragmatic solutions. I do not consider it appropriate to bring systems-critical solutions into the discussion, as I think that is way out of scope, and I do not believe I suggested anything like that anywhere.

So, at the risk of sounding like a stuck record: what is this device monitoring? That will provide a context that may motivate someone to review the firmware and speed up the link check.


Dynamic ADR as @Jeff-UK suggests is hugely practical - particularly if the backend schedules a downlink to tell the device that its nearest gateway is offline and to switch to a higher DR for all subsequent uplinks - which will be automagically resolved by the network server to the correct level for the gateways that are available and then, when the original gateway comes back online, back down to the original level.
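
A rough sketch of what that backend nudge could look like. The MQTT topic follows The Things Stack v3's downlink-push convention, but `mqtt_publish()` is a hypothetical helper, and the FPort-10 "pin to this DR" command is an invented application convention that the device firmware would have to implement:

```python
import base64
import json

DOWNLINK_TOPIC = "v3/{app_id}@ttn/devices/{dev_id}/down/push"  # The Things Stack v3 MQTT

def request_dr_override(mqtt_publish, app_id, dev_id, data_rate_index):
    """Queue a downlink telling the device to pin itself to a given DR.

    mqtt_publish(topic, payload) is a hypothetical wrapper around whatever
    MQTT client you use; FPort 10 and the 1-byte command are an invented,
    application-specific convention, not part of LoRaWAN or TTN.
    """
    command = bytes([data_rate_index])                 # e.g. 0 = DR0 (SF12) in EU868
    message = {
        "downlinks": [{
            "f_port": 10,
            "frm_payload": base64.b64encode(command).decode(),
            "priority": "NORMAL",
        }]
    }
    mqtt_publish(DOWNLINK_TOPIC.format(app_id=app_id, dev_id=dev_id),
                 json.dumps(message))

# Example: the backend notices the nearest gateway has been silent for a while
# and asks the device to fall back to DR0 until that gateway returns.
# request_dr_override(my_publish_fn, "fridge-app", "lsn50-kitchen", 0)
```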

As for the whole SA power issue, I have a client in the heart of SA so I am fully aware of the power management scheme that's going on - which is why I've been experimenting with store & forward possibilities at the gateway, plus some on-device compression of uplinks so that a fair history can be kept and then recalled when the network is back.
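
Purely as an illustration of the on-device compression idea (nothing here comes from the thread), a history of temperature readings can be delta-encoded so that hours of samples still fit into a single uplink:

```python
import struct

def pack_history(readings_c, step=0.1):
    """Delta-encode a temperature history so hours of samples fit one uplink.

    The first sample is a full 16-bit value in 0.1 degC steps; every later
    sample is an 8-bit signed delta (changes of up to +/-12.7 degC per step).
    An illustrative scheme only, not anything from this thread.
    """
    quantised = [round(t / step) for t in readings_c]
    payload = struct.pack(">h", quantised[0])
    for prev, cur in zip(quantised, quantised[1:]):
        delta = max(-128, min(127, cur - prev))
        payload += struct.pack("b", delta)
    return payload

def unpack_history(payload, step=0.1):
    """Inverse of pack_history, for the application server side."""
    (first,) = struct.unpack_from(">h", payload, 0)
    values = [first]
    for (delta,) in struct.iter_unpack("b", payload[2:]):
        values.append(values[-1] + delta)
    return [v * step for v in values]

# 12 readings -> 13 bytes instead of 24+ bytes of raw 16-bit samples.
history = [4.0, 4.1, 4.1, 4.3, 4.6, 5.0, 5.4, 5.2, 4.9, 4.6, 4.3, 4.1]
packed = pack_history(history)
assert [round(v, 1) for v in unpack_history(packed)] == history
```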

For ten years I lived in a remote location in England that was subject to frequent power instability - brown-outs long enough to take out a workstation, occasional cuts for a few minutes, and over winter the occasional total blackout for hours. Consequently I own ~10 UPSs and a petrol generator. When the three exit points to the valley were snowed in (every other winter, for about 36 hours), I was the ambulance service. So I do know what remote is like.

Feel free to paint my comments as me being the bad guy, in the meanwhile I’ll crack on finding solutions.

The LoRa Alliance requires network operators to do something about devices using SF11 or 12 as standard - not sure if that means block, disconnect or ignore.

IIRC the general requirement is to not allow devices to 'join' at SF11 or 12 - obviously only applicable for OTAA, as many ABP devices will just 'fire and forget', throwing datagrams over the wall and hoping they get caught (subject to any follow-on issues around correct processing of associated MAC command downlinks). But devices can then move to higher SFs where local conditions (RF/coverage, and the associated network-determined 'headroom' in the case of ADR) dictate, e.g. through ADR. Given the timelines that nodes can potentially be deployed for - 5-10 years+ - it is not realistic to expect a node that starts at, say, SF9 or 10 to stay at that level as the environment changes: new buildings/screening arise, trees grow, seasonal density changes (wet leaves in summer, heavy snow/ice cover in winter, etc.). So a good, well-behaved deployment would allow gradual ramping up of power and SF as needed (within legal and FUP limits), but should also then allow gradual ramping down as those conditions improve. I think very few deployed nodes or applications follow that ethos! :man_shrugging: :wink:

This was actually meant a bit more tongue-in-cheek. I am opposed to SF12 as much as anyone can be, but if you don't RX for an hour, then you wish it were SF12.

SA in 2020: 368 hours of loadshedding - ((8766-368)/8766)*100 = 95.8%
SA in 2021, as of last week: 1200 hours, excluding however many hours we are experiencing this week - ((8766-1200)/8766)*100 = 86.3%

So this excludes other network failures. For example, we get 10-15 lightning strikes per square kilometre, so the probability of that causing an outage is high, plus equipment failures, etc.
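
For reference, those percentages are just the outage hours set against the 8766 hours in an average year:

```python
HOURS_PER_YEAR = 8766  # 365.25 days * 24 h

def availability(outage_hours):
    """Percentage of the year with grid power, given total outage hours."""
    return (HOURS_PER_YEAR - outage_hours) / HOURS_PER_YEAR * 100

print(f"2020: {availability(368):.1f}%")         # ~95.8%
print(f"2021 so far: {availability(1200):.1f}%")  # ~86.3%
```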

But now
:speak_no_evil:

So if I am right, you know exactly what kind of parameters I have to change in the firmware :wink:.

Maybe I have to change this value…

What I mean is, why is it important to know the context? Does it make a difference whether I want to transfer temperature data for my Jacuzzi or for a blood bank refrigerator?

But thanks for your help and this discussion :slightly_smiling_face:

Because my time is very limited, as is most people's - we are all volunteers - and if you are working on your Jacuzzi monitor, I'm personally far less motivated.

My involvement in TTN as a community is for community benefit, reflecting the other community activities I engage in. If I want to get my Geek On, I build robots & high altitude payloads.

The important thing to understand is that the issue is the very long amount of time it takes ADR to recover from the disappearance of a gateway that had been near a node, and start instead to use a spreading factor that can reach one that’s further away and still operating.

As such, the pattern of outages is much more important than the overall uptime - long outages are fine, reboots and upgrades of a few minutes are fine; the problem is more-than-rare outages that last more than a couple of packet intervals.

A gateway that ran for 3 months, was offline for 3 months, was back for 3 months, etc. might be a bit annoying to someone trying to figure out whether installing another gateway in the area was worthwhile, but this is the sort of pattern that ADR can deal with, because the need to heal connectivity happens only a few times a year.

In contrast, a gateway that was down for an hour or two every day would be a problematic denial of service attack on the idea of ADR and TTN’s current naively trusting implementation thereof.

There are a number of things that a node owner free to develop their own firmware could do to work around a broken ADR situation, such as blind probing at higher spreading factors even while retaining the nominal ADR spreading factor to return to. But these are non-standard and would require customising node firmware, possibly firmware that's already field-deployed. I'd guess they'd also cause the node to fail LoRaWAN certification - you're not required to use ADR, but to certify in ADR mode you should probably be using the server-configured data rate or backing off on loss of connectivity, not randomly jumping around. And there's not really an appropriate way to "mark" these uplink packets, either - if you occasionally hop down to SF12 (or SF10 where that's the limit) to probe, what you'd like is for the server to respond only if it hasn't heard anything else from you in a while, but not to feel the need to burn downlink airtime if it has been getting your packets. There's no bit that means that, only one that demands a reply, so you can't do something like send one out of every 6 packets at a higher SF as a probe which the server could use to tell you to change SF; you can only send a link check packet at a rarer interval where demanding a downlink is acceptable.
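
As a rough illustration of that blind-probing idea - `send_at_sf()` and `downlink_received_recently()` are invented placeholders, and as noted this is non-standard and would likely sit badly with ADR-mode certification:

```python
# Non-standard "blind probe" sketch: keep the nominal ADR data rate for normal
# uplinks, but occasionally transmit one uplink at a much higher SF so a more
# distant gateway has a chance to hear the node if the nearby one is down.
# send_at_sf() and downlink_received_recently() are invented placeholders.

PROBE_EVERY_N = 12      # e.g. one probe every 12 uplinks (~every 4 h at 20 min)
PROBE_SF = 12           # or 10 where that is the sensible regional/FUP limit

def uplink(frame_counter, payload, nominal_sf,
           send_at_sf, downlink_received_recently):
    if frame_counter % PROBE_EVERY_N == 0 and not downlink_received_recently():
        # Probe: same payload, higher SF; burns much more airtime, so keep it
        # rare and budget it against the Fair Use Policy.
        send_at_sf(payload, PROBE_SF)
    else:
        # Normal path: stick with whatever ADR last commanded.
        send_at_sf(payload, nominal_sf)
```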

If I had to work around an unreliable nearby gateway with TTN's current implementation, what I'd probably do is implement my own ADR-like algorithm at the application level (while not being in ADR mode as far as LoRaWAN is concerned), where I'd either manually tune the settings for a given node such that downlinks are sent to it to maintain a spreading factor at which it can reach a particular reliable gateway (probably one I maintain), or else automatically calculate a gateway reliability score and use downlinks to command a data rate that reaches a gateway of sufficient reliability. Including the diversity of uplink frequencies reported from a gateway in its reliability score would also be a way to avoid being misled by single-channel abominations that are only present for one frequency and spreading factor combination and missing for all others.
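
A sketch of such an application-level reliability score, using per-uplink metadata (which gateways heard each uplink, and on which frequency); the data shape and the weighting here are invented for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class GatewayStats:
    uplinks_seen: int = 0
    frequencies: set = field(default_factory=set)

def score_gateways(uplinks, window_expected):
    """Score gateways from application-side uplink metadata.

    `uplinks` is an assumed, pre-flattened structure: one dict per uplink with
    the uplink frequency and the IDs of the gateways that reported it, e.g.
    {"frequency": 868100000, "gateway_ids": ["gw-roof", "gw-hill"]}.
    `window_expected` is how many uplinks the node should have sent in the
    observation window.
    """
    stats = defaultdict(GatewayStats)
    for uplink in uplinks:
        for gw_id in uplink["gateway_ids"]:
            stats[gw_id].uplinks_seen += 1
            stats[gw_id].frequencies.add(uplink["frequency"])

    scores = {}
    for gw_id, gw in stats.items():
        coverage = gw.uplinks_seen / max(window_expected, 1)
        # Penalise single-channel "gateways": a real 8-channel gateway should,
        # over time, report uplinks on many different frequencies.
        channel_diversity = min(len(gw.frequencies) / 8, 1.0)
        scores[gw_id] = coverage * channel_diversity
    return scores
```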

But there are real problems with doing this outside the server. For one, gateway reliability could only be calculated based on packets from nodes in your application, not based on overall traffic flow or, better yet, the periodic gateway status messages. For another, you remain at the mercy of the server's naive willingness to assume that gateways are equally trustworthy in sending downlink messages on request, rather than routing the occasional packet through a nominally less ideal (in the RSSI sense) gateway to see if that works better… or ceasing to send unacked MAC commands.

The place to implement solutions properly would be in the server. And because such algorithms would be experimental, that really points towards an architecture that makes it more realistically possible to run your own server to operate your nodes in the strategically best fashion - something more like community at the level of Packet Broker, with TTN's server instances merely the default for people who don't care to tune further.

However, it’s hard for an individual or limited user to access Packet Broker, and there may be practical problems with the size of the “routing table” that would be needed to let everyone do so at will. It may be practically simpler to just point one’s own gateways at both TTN and one’s own servers - which removes both the problem and benefit of getting traffic for ones own nodes through other’s gateways. Doing so leaves the problem of servers not being able to plan around downlink collisions. But ironically, packet broker (or at least TTN’s interaction with it) doesn’t solve that problem either - there’s no looking around for a gateway that’s actually free to send the requested downlink, and no shared accounting of donwlink airtime availability, just assignment based on the server’s private knowledge alone, and subsequent success or failure, rather than trying another if the first reports a NACK.