Need best method to receive data from 1000 nodes on a gateway

What is the topology like… will you have good LOS for all nodes/GWs?

This is exactly the sort of situation where you need to make sure both that your nodes properly maintain all LoRaWAN state across power loss, and that their firmware adds a random delay before the first transmission after power is restored.
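A minimal sketch of that power-up jitter, assuming an Arduino-style node; the two helper names are placeholders rather than any specific library's calls:

```cpp
// Minimal sketch of a power-up jitter (Arduino-style C++). restoreLoRaWANState()
// and sendFirstUplink() are placeholders, not any particular stack's API.
#include <Arduino.h>

const uint32_t MAX_STARTUP_JITTER_MS = 60UL * 1000UL; // spread first uplinks over ~1 minute

void restoreLoRaWANState() { /* placeholder: reload DevAddr, keys, frame counters from NVM */ }
void sendFirstUplink()     { /* placeholder: hand the first payload to the LoRaWAN stack  */ }

void setup() {
  randomSeed(analogRead(A0));               // crude seed; a hardware RNG or chip ID is better
  restoreLoRaWANState();                    // never rejoin from scratch on every power-up
  delay(random(0, MAX_STARTUP_JITTER_MS));  // so 1000 nodes don't all transmit in the same second
  sendFirstUplink();
}

void loop() {
  // normal once-a-minute schedule continues here
}
```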

However, have you considered battery power? Most LoRaWAN nodes are entirely battery powered; yours would only need a battery that could retain state and make infrequent monitoring transmissions for 22 hours and then be recharged in two - compared to most LoRaWAN nodes, which must run for months to years on battery.

Though having the nodes operating may not help much if the gateway is not - battery / solar gateways are harder but not impossible, especially with a couple of hours of mains power to top things off each day. Gateways that appear and disappear regularly should really only be deployed on private networks though, as they deeply confuse the TTN servers’ attempt to apply ADR to other people’s nodes.


What is your end goal?

Meter reading?

Switching the power on / off?

Verifying whether the power was turned on for 2 hours?


If you use a pseudo-random delay as opposed to a random delay, it is easier to debug other issues. You can achieve this randomisation pretty easily. 1000 nodes means you need 10 bits (1024 discrete points). If you are using an MCU such as a SAMD21, you can use its serial number as part of the key. If it has a LoRa radio with an EUI, that gives you another source.
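A hedged sketch of that idea, hashing the DevEUI into a repeatable 10-bit slot; the hash choice and window handling are illustrative, not a particular stack's API:

```cpp
// Sketch: derive a repeatable 10-bit slot (0-1023) from an 8-byte DevEUI and turn it
// into a fixed transmit offset inside each one-minute window.
#include <stdint.h>
#include <stddef.h>

// FNV-1a: tiny, deterministic, good enough to spread 1000 IDs over 1024 slots.
static uint32_t fnv1a(const uint8_t *data, size_t len) {
  uint32_t h = 2166136261u;
  for (size_t i = 0; i < len; i++) {
    h ^= data[i];
    h *= 16777619u;
  }
  return h;
}

// The same device always gets the same offset, which makes collisions reproducible to debug.
uint32_t txOffsetMs(const uint8_t devEui[8], uint32_t windowMs) {
  uint16_t slot = fnv1a(devEui, 8) & 0x3FF;   // keep 10 bits -> 1024 discrete points
  return (uint32_t)slot * (windowMs / 1024);  // e.g. a 60000 ms window -> ~58 ms per slot
}
```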

I’d build in some updatable parameters so that I can adjust timings as the results come in.

The algorithm would be based partly on which gateway(s) it is in range of and the SF it typically uses.

But most of all I'd do as suggested above: have a backup battery in place that can run devices for a decent number of weeks, charge from the mains when it is on, and because it is charging, it knows it is time to send information.

This would have the added bonus of being able to collect data all the time and aggregate it so you can get a spread of readings from outside normal hours.

@dovov, if it can go 22 hours off, what needs sending every minute for these two hours?

And what is the format of the 37 bytes, can it be optimised?

All of the above, as mentioned: there is some control over billing, with power switched on/off by a relay and contactor once it is paid.
The device sends logs every minute so that critical system parameters can be monitored; the system also has some sensors, actuators, etc.

Battery backup can't be done as the device requires nearly 800 mA to operate, and it would also add cost.
For timekeeping it has an RTC, so battery backup is not needed for that.

32 bytes is the most I can optimise it down to; going further does not seem possible.

So, just to be clear, they can do what they like for 22 hours but for those two hours you need to hammer the airwaves. I don't think that will fit without a significant number of dropped uplinks, but I don't have time to run the numbers through the advanced stochastic guess-o-matic to be sure.

As a total guess, at an average of 300 ms per uplink, you've got room for around 1,600 devices that can be heard by a gateway if they are spread totally uniformly in time & channel, which they won't be. If devices are in range of more than one gateway, it goes downhill from there. Any devices at the edge of range will obliterate a huge amount of available airtime.
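For anyone checking the ballpark: one 300 ms uplink per device per minute gives roughly 60 s ÷ 0.3 s = 200 uplinks per channel per minute, and with the usual 8 uplink channels that's about 1,600 devices per minute - and that still assumes perfect spreading, which ALOHA-style random access never achieves.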

When it’s asleep???

It is standard policy on this forum to under-share; well done on sticking with policy.

But if you told us what the actual information in the bytes was, maybe someone here could make suggestions about making the payload more compact without losing data, which would increase the likelihood of more uplinks getting through.

Sort of exactly the same as getting $1,000's of free consultancy. No, wait. Actually exactly the same. :wink:
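To make the compaction point concrete, here is a rough sketch of the usual trick: send scaled integers packed at bit level rather than text or floats. The field names, ranges and scale factors below are invented for illustration, since the real 32-byte format hasn't been shared:

```cpp
// Illustrative bit-packing of a status report; fields and scales are made up.
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct Packer {
  uint8_t *buf;
  size_t   bitPos;
  void put(uint32_t value, uint8_t bits) {        // append the low `bits` bits of value
    for (uint8_t i = 0; i < bits; i++, bitPos++) {
      if (value & (1UL << i))
        buf[bitPos / 8] |= (uint8_t)(1u << (bitPos % 8));
    }
  }
};

// Caller's buffer must hold at least 8 bytes; returns the number of bytes actually used.
size_t encodeStatus(uint8_t *out, float voltage, float current, bool relayOn, uint8_t errFlags) {
  memset(out, 0, 8);
  Packer p{out, 0};
  p.put((uint32_t)(voltage * 10.0f), 12);   // 0-409.5 V in 0.1 V steps  -> 12 bits
  p.put((uint32_t)(current * 100.0f), 14);  // 0-163.83 A in 10 mA steps -> 14 bits
  p.put(relayOn ? 1u : 0u, 1);              // relay state               ->  1 bit
  p.put(errFlags & 0x1Fu, 5);               // five status flags         ->  5 bits
  return (p.bitPos + 7) / 8;                // 32 bits total -> 4 bytes on air
}
```

Four bytes instead of 32 is an extreme example, but every byte saved noticeably cuts the airtime of each uplink.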

This is a state you can set on the node; it only needs to be sent once (downlink) when the account becomes paid or unpaid.

You can also set the parameters with a single message (downlink), minimum and maximum, and then the node monitors the parameter and only reports once it is out of range. (You can do the same monitoring with fewer messages.)

You can also set a limit, for example for excessive usage, with a message (downlink) and then the node monitors it.

This will all contribute to fewer messages per hour, and you can then drastically drop that 800 mA consumption.

Just thinking out loud.
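A rough sketch of that report-by-exception idea, assuming a downlink carries new min/max limits and the node only uplinks when a reading leaves the configured band; the hook names and the 8-byte downlink format are invented for illustration:

```cpp
// Sketch: limits arrive via downlink, the node monitors locally and only uplinks on a
// change of state. Names and the downlink format are illustrative, not a real stack's API.
#include <stdint.h>

struct Limits {
  int32_t min;
  int32_t max;
};

static Limits limits = { 0, 1000 };   // defaults until a configuration downlink overrides them

// Assumed downlink format: two little-endian int32 values, min then max.
void onConfigDownlink(const uint8_t *payload, uint8_t len) {
  if (len < 8) return;
  limits.min = (int32_t)(payload[0] | (payload[1] << 8) | (payload[2] << 16) | ((uint32_t)payload[3] << 24));
  limits.max = (int32_t)(payload[4] | (payload[5] << 8) | (payload[6] << 16) | ((uint32_t)payload[7] << 24));
}

// Called by the existing once-a-minute measurement loop; queueUplink() stands in for the
// stack's transmit call. Uplinks go out only when the reading crosses in or out of the band.
void checkAndReport(int32_t reading, void (*queueUplink)(int32_t)) {
  static bool wasOutOfRange = false;
  bool outOfRange = (reading < limits.min) || (reading > limits.max);
  if (outOfRange != wasOutOfRange) {
    queueUplink(reading);
    wasOutOfRange = outOfRange;
  }
}
```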

The problem there is that whilst you might stay within the fair access limit if the nodes can communicate at SF7 or SF8, any nodes far enough away to fall back to SF12 would significantly exceed it.
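Rough numbers: a ~32-byte application payload plus roughly 13 bytes of LoRaWAN overhead is on the order of 90 ms on air at SF7/125 kHz but around 2 s at SF12. Sixty uplinks in the two-hour window is therefore only about 5-6 s of airtime at SF7, comfortably under TTN's fair access guideline of roughly 30 s of uplink airtime per device per 24 hours, but well over 100 s at SF12.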

The problem is that for those two hours the airwaves are monopolized by a single use case. Any other use will a) disrupt this one and b) be almost impossible because of collisions.
Having 1000 nodes send updates every minute is completely anti-social.
What if someone wants to use LoRaWAN for another purpose, for instance soil condition monitoring? Should they forgo their data during the two hours this application is active?

We still don’t have enough information to advise properly or, as it would appear from some of the answers, the design has already been set, at least in the head of the OP, so it appears non-negotiable and we are being called upon to fix the symptoms and not the cause.

If the devices normally are turned off, they don’t need 800mA whilst asleep, just enough to keep the RAM going.

The payload format isn’t known, nor is the reason for the sudden move from zero to an uplink every minute - if it’s to send logs of usage, where did the power come from for all those logs - or is it logging every minute of usage whilst the power is on, in which case the question is: what on earth is being used that needs 1-minute logging resolution?

It hints at an interesting use case as well as being socially useful. Hopefully some actual meat will be put on the bone soon …