Hi everyone. I am wondering whether a rural private LoRaWAN application with no downlink requirement can maximize its number of end nodes so as to utilize 100% of the gateway's duty cycle for uplinks (assuming devices use ABP). Ideally this would be done with time-slotted transmissions from time-synced nodes; less ideally, with random transmissions (respecting end node duty cycle limitations) and an acceptable level of collisions.
So each node is respecting regulations and fair usage suggestions, but there are 1000s of end nodes occupying the available spectrum almost all of the time.
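To put rough numbers on what I mean by "respecting end node duty cycle limitations", here is a minimal back-of-envelope sketch (Python) using the standard Semtech time-on-air formula. The EU868 band, 125 kHz bandwidth, CR 4/5, a 20-byte application payload plus ~13 bytes of LoRaWAN overhead, and a 1% duty-cycle sub-band (36 s of airtime per hour) are all assumptions for illustration:

```python
import math

def lora_airtime_s(payload_bytes, sf, bw_hz=125_000, cr=1,
                   preamble_syms=8, explicit_header=True, crc=True):
    """Approximate LoRa time-on-air (Semtech SX127x datasheet formula)."""
    t_sym = (2 ** sf) / bw_hz                          # symbol duration in seconds
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0   # low data rate optimisation
    ih = 0 if explicit_header else 1
    payload_syms = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_syms + 4.25) * t_sym
    return t_preamble + payload_syms * t_sym

# 20-byte application payload + ~13 bytes LoRaWAN overhead
for sf in (7, 12):
    toa = lora_airtime_s(payload_bytes=33, sf=sf)
    # 1% duty cycle -> at most 36 s of airtime per hour per node in that sub-band
    max_uplinks_per_hour = int(36.0 / toa)
    print(f"SF{sf}: {toa * 1000:.0f} ms on air, "
          f"<= {max_uplinks_per_hour} uplinks/hour under a 1% duty cycle")
```

Under these assumptions a single node is limited to roughly 500 uplinks/hour at SF7 or about 20 at SF12; my question is about how far the fleet as a whole can go before the gateway side becomes the bottleneck.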
Short answer - for your own dedicated network - is no.
Short answer in practical terms - in the sense of fully saturating a GW's ability to handle uplinks, if you count that as 100% even though it isn't 'actually' 100% of theoretical performance - is yes.
Key word here is 'theoretical'. There have been lots of academic and statistical studies over the years (GIYF), but they have all had limitations and made assumptions - some realistic, some with no grounding in reality but making for interesting thought experiments and mathematical treatises. In the early days Semtech had some basic simulation and analysis tools that got you part way there based on possible deployment scenarios - the ideal mix and spread (if you excuse the pun wrt SF!) of configurations was in part then influenced by the use case and how the fleet of nodes might be configured and operate in real deployments.

The problem is there is no such thing as an 'ideal' or theoretical model of deployment that is achievable, since you have no control over whether or not someone else will also come in and deploy the technology in the same or an overlapping footprint; even a limited overlap at the fringe of your coverage will ultimately perturb you away from what is possible 'in theory'.

There are both spectrum and RF capacity limits and also in-chip/in-system limits that can come into play, as well as issues like a valid preamble starting to be detected and demod resources being scheduled and allocated to handle it, then the signal fading effectively out of range or being interfered with such that the task cannot be completed and the resource allocation is aborted and rescheduled elsewhere… it's a similar issue to the way modern CPUs work with pipelines, pre-emptive instruction decode/execution, caching etc., then finding that an op is aborted or cancelled or branches in a statistical manner such that execution inefficiencies creep in and pull you away from 100% theoretical compute throughput.
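To make the 'theory vs practice' point a bit more concrete: many of those studies model uncoordinated LoRaWAN uplinks as pure ALOHA on each channel/SF combination. A minimal sketch of that model (the ~72 ms SF7 airtime and one uplink per node per hour on a single channel/SF are illustrative assumptions) shows why the useful fraction of channel time tops out well below 100% and then collapses as load grows:

```python
import math

def aloha_channel(n_nodes, msgs_per_hour, airtime_s):
    """Pure-ALOHA approximation for one channel/SF combination:
    returns offered load G (in packet airtimes), per-packet success
    probability, and the useful fraction of channel time."""
    g = n_nodes * msgs_per_hour * airtime_s / 3600.0
    p_success = math.exp(-2 * g)      # packet must not overlap any other packet
    return g, p_success, g * p_success

for n in (1_000, 10_000, 25_000, 100_000):
    g, p, s = aloha_channel(n, msgs_per_hour=1, airtime_s=0.072)
    print(f"{n:7d} nodes: load G={g:.2f}, P(no collision)={p:.2f}, "
          f"useful channel fraction={s:.2f}")
```

In this toy model the useful fraction peaks at roughly 18% of channel time and then falls away, which is one illustration of why random access never reaches the theoretical 100%.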
As they say, "theories are nice in theory"… but it's the practical real world that really matters. If in doubt, spec for over-capacity, survey ahead of deployment, allow for the fact that others will come and you will need to evolve your network and capacity/density to adapt, and plan for packet loss - RF isn't a guaranteed delivery mechanism (which makes a mockery of SLAs, really!), especially in open/ISM bands with many potential types of interferer. Typical guidance is to plan for up to (some say at least!) 10% packet loss, though many commercial networks I know advise allowing for 25%, even 50%, and making sure the application is resilient under such circumstances. Personally, over many years I have seen typical long-term figures around 1-3% depending on where deployed (not always your ideal isolated rural site!), but have noted a slow but inexorable statistical decline over the last decade or so as the technology has become more widely deployed…
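A quick way to sanity-check application resilience against those loss figures: if you (optimistically) assume independent losses at rate p and repeat each reading in k consecutive uplinks, the chance of losing all copies is p^k. A tiny sketch:

```python
def p_all_lost(loss_rate, attempts):
    """Probability that every one of `attempts` independent uplinks is lost."""
    return loss_rate ** attempts

for loss in (0.03, 0.10, 0.25, 0.50):
    # e.g. the latest meter value repeated in 3 consecutive uplinks
    print(f"loss {loss:.0%}: P(all 3 copies lost) = {p_all_lost(loss, 3):.4%}")
```

In the real world losses tend to be correlated (interference bursts, fading, a node drifting out of range), so treat numbers like these as a floor on what the application needs to tolerate, not a guarantee.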
Key question is: are they all yours and just yours, or other people's as well? Then, how will that situation evolve over time? Are you looking at frequent traffic per node, or perhaps a small meter reading once per day, per week or even per month, with no retries and assumed reception but resilience to missed traffic? I have seen people plan on 100s/GW, 1,000s/GW, 10s of k/GW, and some hoping for 30-60k/GW, even looking (in the case of one utility co) at whether 100k+/GW is a possibility, but real networks have never behaved exactly as a theoretical model might suggest.
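For a very rough upper bound on those nodes-per-GW figures (ignoring collisions entirely, so this is a ceiling rather than a prediction), assume the 8 standard 125 kHz uplink channels and pick a target utilisation - the 10% figure, the airtimes and the message rates below are all illustrative assumptions:

```python
def max_nodes(airtime_s, uplinks_per_day, n_channels=8, target_utilisation=0.10):
    """Upper-bound node count for a single gateway, ignoring collisions:
    total offered airtime per day must stay under the utilisation target."""
    capacity_s_per_day = n_channels * 24 * 3600 * target_utilisation
    per_node_s_per_day = airtime_s * uplinks_per_day
    return int(capacity_s_per_day / per_node_s_per_day)

print(max_nodes(0.072, 24))   # ~72 ms (SF7-ish) uplink every hour  -> ~40,000 nodes
print(max_nodes(1.8, 1))      # ~1.8 s (SF12-ish) uplink once a day -> ~38,000 nodes
```

Both of those land in the same region as the more optimistic planning figures above, which is exactly why they should be treated as ceilings: collision behaviour and the SF mix in the field pull real capacity well below them.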
Consider what happens if a neighbor (with overlapping RF range, which with LoRa modulation might be quite some distance away) does the same. Both use cases will fail and result in frustration for the owners/users.
Also, consider you are using a shared resource: radio spectrum. While your setup might be legal because all individual devices work within legal limits, it isn't socially acceptable to monopolize this scarce resource (imho), and if there are too many occurrences regulators might consider changing the rules. Several researchers have already suggested that further regulation is required in the bands used by LoRaWAN, so let's not prove them right…
One reason folk ask such a question is to see how they might minimise the GW deployment/investment… only comment then is: cheapskates! I recommend higher density and redundancy of deployment; also, if on TTN, it helps build coverage and resilience for all…
Thank you for all the replies. I understand all the valid concerns with such a theoretical, selfish deployment.
In this case, let's change the question into a more practical one, if you do not mind. To what extent would it be acceptable for my application to be selfish in exhausting the gateway: 20%, 30%, 50%? Let's say I have N nodes and I am willing to install new TTN gateways to keep the share of each gateway's time spent receiving packets from my application below that percentage. This way, I am extending the TTN coverage while providing availability to other possible users. Thank you in advance.
I think the gateways that currently carry the most messages per hour are somewhere in the region of 20,000-25,000 (I have not checked this lately). What application, or number of nodes in one application, would need this? (The amount of dropped packets is unknown.)
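For a rough feel of what that rate means in airtime terms (the ~65 ms average airtime and the assumption that traffic is spread evenly over 8 channels are guesses, not measurements):

```python
msgs_per_hour = 25_000
avg_airtime_s = 0.065      # assumed: short payloads, mostly SF7
n_channels = 8

total_airtime_s = msgs_per_hour * avg_airtime_s           # seconds of RF per hour
per_channel_fraction = total_airtime_s / (n_channels * 3600)
print(f"{total_airtime_s:.0f} s of airtime per hour, "
      f"~{per_channel_fraction:.1%} average occupancy per channel")
```

That averages out to only a few percent occupancy per channel, though the average hides peak-hour bursts and the much longer airtime of any higher-SF traffic.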
@herdem09 If N is anything other than a reasonably small number (think 10s, not 1,000s!), or if your application is commercial vs a community project, then you would not be on TTN anyhow - you would go to a TTI-provided (hosted or self-hosted) instance of TTS. But your problem remains that, as a broadcast technology, all nodes in range of all GWs that hear them cause traffic activity on those GWs… so you can never get to 'own' an inexhaustible capacity.

Also, as an open, anyone-can-use, unlicensed-spectrum deployment in the ISM band, you still have to play nicely with other users, based not just on regulatory limits but also the practicalities and social engagement of using this scarce resource… whatever capacity you think you are 'entitled' to use is irrelevant, as there could always be someone who comes into your area, starts Txing and occupying the spectrum and, if LoRa modulation/LoRaWAN based, taking your GW resources. And where the regs are concerned, as I often post on the Forum, just 'cause you 'can' does not mean you 'should'. We all have to share the radio waves, and as Jac pointed out, in many cases if people abuse them the regulators will take it as an excuse to jump in and make the rules even more restrictive. Therefore your question is moot!
BTW, installing more GWs doesn't get you more 'node allocation', even in theory. Though I have some ~50 in my personal fleet (for community use) and have given away or otherwise enabled probably another 150-200 that others now own or run, I don't get any more nodes than someone in the local community deploying just 1 GW, even though after 6+ years here I have slowly added to my deployed node base… no doubt someone will shout "Oi Jeff - enough!" at some point!