Dragino LSN50-v2 spreading factor

Hello,
I'm playing around with two Dragino devices, but I don't understand how the device handles the spreading factor. I have two gateways at different locations; when one gateway is down, only the device with line of sight (LOS) still gets an uplink through. The second device does not, apart from the occasional SF7 frame. In my opinion the device should switch to a higher spreading factor, or not?

If I understand correctly I use OTAA, so I registered the device with the credentials of the Dragino device…

It has no way of knowing that its transmissions aren’t being heard - after a while the link check will cause an uplink to be confirmed, at which point it will start changing settings to try to re-establish contact.

A common regional setting is that a link check is performed after 64 uplinks - so it may happen straight away, or it may take up to 64 more uplinks before it starts.
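For illustration, here is a minimal Python sketch of the ADR backoff a class-A node performs when its uplinks go unanswered. It assumes the common defaults of ADR_ACK_LIMIT = 64 and ADR_ACK_DELAY = 32 and the EU868 mapping DR5…DR0 = SF7…SF12; the exact step-down cadence can differ per region and stack, so treat this as a sketch rather than the Dragino firmware's actual behaviour:

```python
ADR_ACK_LIMIT = 64   # uplinks before the node starts requesting a downlink (ADRACKReq)
ADR_ACK_DELAY = 32   # further unanswered uplinks before each data-rate step-down

def simulate_backoff(start_dr=5, uplinks=260):
    """Yield (uplink_number, data_rate, adr_ack_req) for each uplink sent
    while no downlink is ever received (DR5..DR0 = SF7..SF12 in EU868)."""
    dr = start_dr
    adr_ack_cnt = 0
    for n in range(1, uplinks + 1):
        adr_ack_req = adr_ack_cnt >= ADR_ACK_LIMIT   # ask the network to confirm
        yield n, dr, adr_ack_req
        adr_ack_cnt += 1                             # still no downlink heard
        # After ADR_ACK_LIMIT + ADR_ACK_DELAY unanswered uplinks, step the
        # data rate down every further ADR_ACK_DELAY uplinks until DR0 (SF12).
        if (adr_ack_cnt >= ADR_ACK_LIMIT + ADR_ACK_DELAY and dr > 0
                and (adr_ack_cnt - ADR_ACK_LIMIT) % ADR_ACK_DELAY == 0):
            dr -= 1

for n, dr, req in simulate_backoff():
    if n in (1, 64, 65, 96, 97, 225):
        print(f"uplink {n:3d}: DR{dr} (SF{12 - dr}), ADRACKReq={req}")
```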

Okay thanks,
so when my gateway is down and I use a 20 min interval, I have to wait… 20 min × 64 = 21.3 h before the Dragino changes its settings?
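A quick worked check of that arithmetic, using the same default ADR_ACK_LIMIT = 64 / ADR_ACK_DELAY = 32 assumed in the sketch above - the first actual SF change comes even later than the 21.3 h:

```python
interval_min = 20
to_link_check = 64 * interval_min / 60            # uplinks until ADRACKReq is first set
to_first_step = (64 + 32) * interval_min / 60     # + ADR_ACK_DELAY more until SF7 -> SF8
to_sf12 = (64 + 5 * 32) * interval_min / 60       # five steps from SF7 down to SF12
print(f"link check starts after ~{to_link_check:.1f} h")   # ~21.3 h
print(f"first SF increase after ~{to_first_step:.1f} h")   # ~32.0 h
print(f"SF12 reached after      ~{to_sf12:.1f} h")         # ~74.7 h
```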

https://wiki.dragino.com/index.php?title=End_Device_Downlink_Command#Adaptive_Data_Rate

so what does this setting mean?

If ADR is enabled on the node then I believe TTN uses approximately the last 20-22 messages to average out signal quality and determine the optimum setting to send to the node. There may be a faster update if the signal level is very good, but I have seen no specific documentation or forum messages on that (haven’t really searched for it, mind you!) - only anecdotal stories.
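For context, a rough sketch of how a server-side ADR decision of this kind is commonly described: take the best SNR over roughly the last 20 uplinks, subtract the demodulation floor for the current SF and a safety margin, and raise the data rate one step per ~3 dB of headroom. The 15 dB margin, the SNR floor table and the 3 dB-per-step rule below are assumptions drawn from public descriptions of such algorithms, not values confirmed in this thread:

```python
REQUIRED_SNR_DB = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}
DEVICE_MARGIN_DB = 15.0   # assumed installation/safety margin
HISTORY_LEN = 20          # ~20 recent uplinks considered, as mentioned above

def suggest_sf(current_sf, recent_snrs_db):
    """Return the SF the server might command based on recent uplink SNRs."""
    best_snr = max(recent_snrs_db[-HISTORY_LEN:])
    headroom = best_snr - REQUIRED_SNR_DB[current_sf] - DEVICE_MARGIN_DB
    steps = max(0, int(headroom // 3))   # one data-rate step per ~3 dB of headroom;
                                         # negative margin would adjust TX power
                                         # instead of the SF (not modelled here)
    return max(7, current_sf - steps)

# Example: a node stuck on SF9 whose recent uplinks arrive at around +9 dB SNR
print(suggest_sf(9, [8.5, 9.0, 7.8]))   # -> 7
```

The key point for this thread: the server can only run this calculation on uplinks it actually receives, which is why it cannot help a node whose nearest gateway has gone away.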

Mmm, okay.
My idea was to use the second gateway as a backup, so when my first gateway is down I have a spare one. But this gateway is 2 km away, and one of these nodes would have to switch its spreading factor :/… and not in days ;), but within 20 min. So what I could do is set a fixed data rate, but that is not the best approach with regard to the fair use policy, in my opinion…


Yes, that’s the LoRaWAN specification

But not when the node is out of range of gateways because there are no uplinks to process.

Rather than trying to make the device do magic, I’d concentrate on why the gateway goes offline. Particularly as it’s on TTN, which means the same problem will occur for other users. No gateway is better than a flaky one.

One other alternative is to change the firmware on the device to act sooner, but that’s a rubbish plan as you will end up with a whole pile of nodes requesting a confirmation every few hours because a gateway isn’t behaving.

I am the only one within a range of 20 km that has gateways. In the last 4 years we have had 5 power failures on the side of the electricity network operator (whole city & internet over cable / LTE). In that case I want to transfer the data over another gateway that is on a separate electricity network, with less than one failure in recent years ;). But that one is ~2 km away. It is not in my hands that the gateway is flaky :wink:

We don’t work on this basis at a community level - we have to assume that someone may be running a device - or choose to.

It is: don’t plug it into something that has power cuts, or use a UPS, add solar or something.

All that said, you’re averaging a power cut every 10 months - maybe you could live with the other node being down for a short while until it recovers connectivity. As is symptomatic of these threads, you’ve not mentioned how long the power is off. Hopefully your gateway comes back online when power is restored.

And what about the internet connection? I can tell you the LTE side & the cable side go down too ;)… and I cannot see into the future: the last failure lasted 3 h, the one before that 2 days.

Or I use a node with a fixed SF, which works if I understand correctly, as long as the airtime stays within the fair use policy? The main question was how the Dragino handles the SF, and I think I have the answer :wink:
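For what it’s worth, a fixed SF can be sanity-checked against the fair use policy (roughly 30 s of uplink airtime per device per day) with the standard LoRa time-on-air formula. The ~11-byte sensor payload and 13 bytes of LoRaWAN framing below are assumptions for illustration, not values read from this device:

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, cr=1, preamble_syms=8,
                    explicit_header=True, low_dr_opt=None):
    """Time-on-air of one LoRa uplink in ms (standard Semtech formula)."""
    if low_dr_opt is None:                       # mandatory for SF11/SF12 @ 125 kHz
        low_dr_opt = sf >= 11 and bw_hz == 125_000
    t_sym_ms = (2 ** sf) / bw_hz * 1000
    h = 0 if explicit_header else 1
    de = 1 if low_dr_opt else 0
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 - 20 * h)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + n_payload) * t_sym_ms

payload = 11 + 13   # ~11 sensor bytes + 13 bytes of LoRaWAN framing (assumed)
for sf in (7, 9, 10, 12):
    per_msg_ms = lora_airtime_ms(payload, sf)
    per_day_s = per_msg_ms * (24 * 60 / 20) / 1000   # 20-minute uplink interval
    print(f"SF{sf:2d}: {per_msg_ms:7.1f} ms/uplink, {per_day_s:5.1f} s/day "
          f"(fair-use budget ~30 s/day)")
```

Under those assumptions SF7-SF9 at a 20-minute interval stay well inside the budget, SF10 is borderline, and SF12 would exceed it - one reason why simply pinning the node to the highest SF is discouraged.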

That box is basically conducting a denial of service attack on TTN.

You can make your node use a fixed higher SF (though, per the specs, not the highest), but that only addresses the issue for your node.

It leaves anyone else’s node that follows the recommended practice - ADR enabled, adapted to the presence of the nearby gateway - “broken” the minute that gateway vanishes.

TTN can’t work with unreliable gateways.

That is a rather controversial and extreme way of characterising its behaviour. Not all community members around the globe live in nice, stable locations with regard to power. Indeed, outside of well-regulated first-world power environments some level of interruption is the norm, with many having to deal with rolling blackouts/brownouts. Yes, best practice may be to facilitate a UPS or solar backup to help mitigate, but short of deploying your own telecoms network to back up the backhaul path, you will always be at the mercy of someone else’s service provision.

There are some of my sites in the UK & EMEA that struggle to meet that stat! And no, it’s not always economic to provision mitigation measures; we just have to suck it up :wink:

@the_muck it may be more economic to deploy redundant gateways with alternate backhaul service offerings than to try to mitigate… giving overlapping double or treble coverage redundancy…

Explaining why the problem exists doesn’t make the problem go away.

TTN isn’t designed in a way that’s able to accommodate unreliable gateways, therefore unreliable gateways are a problem.

No, no they don’t, but as a community we should encourage people to set up resources as best they can - not appear to shrug their shoulders & leave a system that is detrimental to the network.

Didn’t say it would…

Indeed, but that doesn’t mean we should berate people if their installs are imperfect and not bulletproof and fully mitigated. It’s a community network, for goodness’ sake, and one that has grown up and expanded on a reasonable-efforts, ‘contribute where you can’ basis over the years. If you want a fully redundant ‘professional’ network there are alternatives.

I would bet >>90% of existing gateways are deployed on simple network connections and with standard locally available power solutions. Few will have triple-redundant backhauls on 99.9999% reliability SLAs or power solutions that ‘never’ go out… Most folk these days are deploying $70-250 gateways on consumer-level networks and power grids, with only a few of us going the extra mile for protection or arranging alternate off-grid operation (and even then only for a few sites). In that situation folk are not going to commit another $350-1k for redundancy and backups… just a fact of life… but yes, encourage and suggest best practices, while recognising that most will never achieve them in such a community deployment.

We have come a long way from the first 50 to 200 UK gateways where, when I monitored over a full month, >50% went offline for at least one day! In many ways it is better to have 500 gateways in a small area that are up 99% of the time, giving treble or even quadruple redundancy so that any one gateway going offline, or a percentage being taken out by a comms network outage or local power grid issue, doesn’t matter, than to have 5 perfectly installed, fully redundant sites offering poorer coverage and still sitting on a single point of failure!


It being a community network and not a private network is exactly why flaky gateways are a problem for the community and not just their owners.

A flaky gateway being around causes problems even when there’s also a good one.

As a community we should help people, not chase them away because they live somewhere where utilities are unreliable. Helping them means providing suggestions on how to improve the situation within the limits of what is viable, not demanding outrageous efforts.

Living in countries where utilities are extremely reliable makes us spoiled and demanding. Maybe we should look at this as a challenge and try to convince the LoRa Alliance to take these use cases into consideration. I know that in South Africa people are starting to look at other technologies because LoRaWAN can’t handle gateways that aren’t connected continuously. Do we really want to chase those users away, or adapt the protocol to allow for this?

It’s not the LoRa Alliance that would need to do so, but rather TTN, which would need to implement some sort of mechanism for blacklisting gateways from ADR and ceasing to route downlinks through those that have a poor record of returning a positive ACK. The infamous single-channel problem could be substantially solved at the same time.

Understood - but you are missing the point here - don’t let perfection get in the way of the good, or even the adequate, please!

We have a first-world problem, one that comes from having access to great power, good stable comms, and money to spend… not everyone on TTN is so privileged. Let’s build the network, and then work on weeding out potential problem gateways (a classic example being how SC/DCPFs are being identified and removed over time) as we see them… if there are resources available to spend.

What you are missing is that LoRaWAN, in the form implemented by TTN, has a truly horrific response to a vanished or misbehaving gateway - with recommended settings it takes hours to days to recover from the loss of what was the closest gateway, even when there are others in range that have remained operating the entire time.

The current design can’t make use of contributions from gateways that are merely “good” - it’s designed in such a way that it demands gateways which are effectively “perfect”, in that they have only momentary outages where an occasional packet might not get through, but the next one or the one after that will.

Nothing in the LoRaWAN spec requires that a server place absolute or equal reliance on the inputs of all gateways or have absolute or equal confidence in their ability to send downlinks on request.


Not quite sure how we ended up here - I see no chasing away of anyone, and I’m happy to explore solutions; if I get a chance, my caching gateway mods may get tested beyond a hacky PoC.

I did challenge the “it is not in my hands that the gateway is flaky” comment, because that is just rolling over before anyone had got started. And it came with suggestions & a request for more information to see what could be achieved.

I’m not so keen on the hit & run nature of these issues: when there isn’t an instant fix, people give up before they have tried, even though there are things to be explored, and, as with many of these threads, a drip feed of data eventually provides much better context that totally alters the perspective of previous replies. Particularly when the issue has a real impact on the quality of the community network. Whilst many solutions may not work in this instance, exploring them fully may assist someone else in a similar situation at a later date.

Or we can just give up each time it gets hard or inconvenient.

Perhaps to move things along: we now know that the outages are in fact quite rare. What we don’t know is why a day’s wait for a device to adjust is too long - what data is being collected? If there is some merit in the data (i.e., not 20-minute check-ins on the water temperature of the jacuzzi) then investigating, proposing, implementing & testing a change to the LSN50 firmware is very, very feasible.
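As a starting point for that discussion, here is a quick comparison of how much sooner the node would react if the firmware’s link-check thresholds were lowered. The reduced values below are purely illustrative, not anything Dragino ships, and the trade-off raised earlier - a pile of nodes requesting confirmations whenever a gateway misbehaves - still applies:

```python
def hours_until_sf_change(adr_ack_limit, adr_ack_delay, interval_min=20):
    """Hours of gateway silence before the node first lowers its data rate."""
    return (adr_ack_limit + adr_ack_delay) * interval_min / 60

for limit, delay in ((64, 32), (16, 8), (8, 4)):
    print(f"ADR_ACK_LIMIT={limit:2d}, ADR_ACK_DELAY={delay:2d}: "
          f"~{hours_until_sf_change(limit, delay):4.1f} h before the SF changes")
```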