Perhaps some more detail of the question ?
How would 1000 nodes be connecting to a gateway ‘at the same time’ ?
I mean to say: "I have 1000 end nodes and a single gateway", and each node will try to send data to the gateway every hour. What are the chances that all 1000 end nodes transmit to the gateway successfully?
I am going to schedule the transmissions at almost the same time.
0%
You will suffer severe loss if you set all nodes to send at the same time.
You must use some algorithm to stagger transmissions over a period of time.
BTW, a node does not connect or send to a gateway, it just transmits and if you are lucky one or more gateways receive the data.
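One simple way to stagger the transmissions, as a sketch: derive a fixed, deterministic offset within the reporting period from each device's identifier, so a fleet spreads itself over the hour without any coordination. The `dev_eui` parameter and the SHA-256 choice here are illustrative assumptions, not anything mandated by LoRaWAN:

```python
import hashlib

def slot_offset(dev_eui: str, period_s: int = 3600) -> int:
    """Deterministic per-device transmit offset within the reporting
    period, derived by hashing the device identifier.  Each node then
    transmits at (top of period + offset) instead of all at once."""
    digest = hashlib.sha256(dev_eui.encode()).digest()
    return int.from_bytes(digest[:4], "big") % period_s
```

With 1000 nodes and a 3600 s period, offsets end up roughly uniformly spread, so most nodes get the channel to themselves.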
Also, don’t forget that you’re using a shared radio spectrum. So, usage of the very same radio frequencies by others will affect your success too. Gateways that are deployed by you, will also receive LoRa traffic transmitted by others. (That’s why a single, large, public network is to be preferred over many private networks. But that’s a different topic.)
Say, for example, "I have 8 end nodes and a single gateway", all transmitting at almost the same time with a transmission interval of 1 hour. What are the chances of successful transmission? If they are poor, how should I transmit instead? And what about 1000 nodes?
Please stop asking the same question with different numbers. 8 nodes, 1,000 nodes, 1 hour, 6 hours, your question has been answered: you simply must make sure the devices do not transmit simultaneously. That’s all we can say. See also How many gateways are needed to support 1,000 nodes sending 10 bytes per 5 minutes? And for those calculations too: take into account that you’re not the only one using the shared radio spectrum, now or in the future.
When you send at almost the same time, the intervals of your nodes will disperse. In the end, any transmission among your 1000 nodes will be practically random and there will be no correlation between transmissions. As a result, you will suffer practically no packet loss due to your transmission scheme or strategy. To reduce the effect of correlation even further, you can add a little randomness to your hourly schedule.
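That "little randomness" can be as small as a uniform jitter around the nominal period. A minimal sketch; the function name and the 60 s jitter are illustrative choices, not from any spec:

```python
import random

def next_send_delay(base_period_s: float = 3600.0,
                    jitter_s: float = 60.0) -> float:
    """Nominal hourly send with a uniform random jitter, so that
    transmissions from many nodes decorrelate over successive periods
    even if they happened to start in sync."""
    return base_period_s + random.uniform(-jitter_s, jitter_s)
```

Because each node draws its jitter independently every period, two nodes that collide this hour are unlikely to collide again next hour.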
Are you implying that the devices’ clocks are running slow or fast? Assuming that the clocks are okay, and 1,000 devices transmit on the hour (say one o’clock, two o’clock, and so on), then I’d say they will keep transmitting simultaneously forever?
Of course, when transmitting at a fixed interval (which might start at the time of installation), the devices will not all transmit simultaneously.
(@vinayakmp, even when measuring on the hour or at other fixed times, a device does not necessarily need to transmit that measurement right away. Like when used for billing, a device has almost a full hour to get its reading to the server and still be in time to take the next hourly measurement. For measurements every 6 hours things are even better of course. And, as this is radio, there’s never a guarantee that you will get all measurements.)
Yes I do. The inaccuracy of average clocks and clock implementations ensures drift in practice.
When you equip all 1000 nodes with atomic clocks, you will suffer packet loss for a long time, because they will all keep sending at the same time.
Btw, did you consider the effect of differences in RSSI between nodes? I presume all 1000 nodes are received with exactly the same RSSI?
Hey folks,
while implementing a duty cycle limit on the Network Server side, some question marks popped up. I'm mainly referencing European regulations, as I'm dealing with the EU868 band.
I often read something like: when a device has transmitted for xxx milliseconds, the channel is not available for another xxx milliseconds. I wonder where this rule can be found? I can't derive this interpretation from either the ETSI standard or the LoRaWAN specs.
Wouldn't sending 20 packets of 1560 ms airtime each (SF11 at 125 kHz) every 2 seconds (thus less than 0.5 s between transmissions), followed by silence for the rest of the hour, still comply with a duty cycle limit of 1%?
This point makes a huge difference in the implementation. Thanks in advance for hints.
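As a quick sanity check on the arithmetic in that example (a sketch; the 1560 ms figure is the airtime stated above, not computed from the LoRa modem equations):

```python
def duty_cycle(airtime_ms: float, n_packets: int,
               window_s: float = 3600.0) -> float:
    """Fraction of the observation window spent transmitting."""
    return airtime_ms / 1000.0 * n_packets / window_s

# 20 packets of 1560 ms airtime each within a 1-hour window:
dc = duty_cycle(1560, 20)   # 31.2 s of airtime in 3600 s
print(f"{dc:.4%}")          # stays below the 1% (36 s/hour) budget
```

So, taken purely as a ratio over the hour, 20 such bursts use 31.2 s of the 36 s budget.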
In general the precise detail you are after is in the regulations of your particular country.
For the UK, if I want to know the duty cycle allowed at a particular frequency, I would check the IR-2030 document produced by Ofcom, the UK regulator. Those regulations in turn reference the relevant European standards.
I am not aware of a specific German regulation, so I consider ETSI EN 300 220 (parts 1 and 2) the "bible". But again, it states:
The Duty Cycle at the operating frequency shall not be greater than values in annex B or any NRI for the chosen operational frequency band(s).
And this annex just contains percentage values. I cannot find something about dwell times or “minimum silence” between two transmissions.
But maybe I'm just missing something; that's why I'm asking the experts here.
Legal considerations aside, LoRaWAN compliance means the Network Server should attempt to limit transmissions at SF11 and SF12. The spec specifically states that devices should not be set up to use these SFs as a normal working mode.
And 20 uplinks of 1.5 seconds each is a gift-wrapped opportunity for other uplinks to collide with yours and cause reasonably frequent packet loss.
What is it you have planned that needs 1020 bytes uploading all at once?
Can you not have more gateways so they can be closer to the device?
This is exactly the sort of thing that no member state of the EU gets to legislate for their own country. So there won’t be a German regulation.
If it can't be found in the standard, what conclusion can you draw from that?
But overall, if you want legal advice that lets you hold your head up in a court, this is not the place for it. We are not lawyers!
Thanks for your responses. And you are right, I’m not seeking legal advice.
My "problem" stems from the fact that the Chirpstack network server does not take duty-cycle limitations into account (and the project I'm part of is bound to the Chirpstack stack at the moment). My question is not about certain amounts of data in a certain time, but rather how to implement some kind of "DC planner" inside Chirpstack, so that frames are not dropped by gateways due to DC limitations (which, according to Abdelfadeel et al., 2020, seems to be a major cause of performance problems in LoRaWAN networks).
And to implement the DC limit "the right way", I'm trying to understand it fully; as a software developer, I'm used to looking up specs, standards and regulations.
Is that a TTN server ?
Maybe this paper can help you to understand the Duty Cycle:
Great, many thanks, Wolfgang.
The authors seem to conclude similar things:
The duty cycle does not have any restrictions how the transmissions should be spread out in time. It makes no distinction if transmission times are evenly spaced out or if the transmission time is used up at the beginning of the observation period and the rest of the interval waited out. The only thing that must be respected is the maximum duty cycle ratio itself. As such, devices are allowed to transmit using bursty traffic, e.g., transmitting 36 s and then waiting for 3564 s for a duty cycle of 1%.
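Under that sliding-window reading, one way to sketch a DC planner is to track recent airtime per (sub-)band and refuse any transmission that would push the window over budget. The class and method names below are illustrative assumptions, not part of Chirpstack or the LoRaWAN spec:

```python
from collections import deque

class DutyCyclePlanner:
    """Sliding-window duty-cycle budget for a single (sub-)band."""

    def __init__(self, limit: float = 0.01, window_s: float = 3600.0):
        self.limit = limit          # e.g. 1% for most EU868 sub-bands
        self.window_s = window_s    # observation period in seconds
        self._sent = deque()        # (timestamp_s, airtime_s) pairs

    def _airtime_used(self, now_s: float) -> float:
        # Drop transmissions that have aged out of the window.
        while self._sent and self._sent[0][0] <= now_s - self.window_s:
            self._sent.popleft()
        return sum(a for _, a in self._sent)

    def may_send(self, now_s: float, airtime_s: float) -> bool:
        """Would this transmission keep us within the budget?"""
        return (self._airtime_used(now_s) + airtime_s
                <= self.limit * self.window_s)

    def record(self, now_s: float, airtime_s: float) -> None:
        """Account for a transmission that was actually made."""
        self._sent.append((now_s, airtime_s))
```

A server-side scheduler could call `may_send` before queuing a downlink and delay the frame until the window frees up, instead of letting the gateway drop it.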
I think I got a picture of how to implement it. Thanks all and have a nice evening!
Points arising:
Duty cycle is a device problem, not a gateway or network server problem: they process what they receive. The only thing a network server can do is ask a device to adjust its data rate; it can't stop a device from transmitting.
The academics love to write papers.
Your 20 uplinks of 51 bytes in short order need a total rethink, unless you are writing an academic paper; if so, I rest my case on point 2, as no one would do this in reality.
This forum is for LoRaWAN on TTN discussions only. Chirpstack is off topic. But as a general discussion it’s OK if we aren’t dragged in to implementation details specific to a non-TTN setup.