About my question… I already know part of the answer…
If gateway coverage areas overlap, the SF12 regions will be minimized, so far fewer nodes are forced onto the highest spreading factors. This only partly solves our problem; however, we can now use ADR to our advantage, since it can be used to make sure nodes use the lower spreading factors as well, and to distribute the collisions uniformly across spreading factors.
Extrapolating from 500 devices sending within one minute, what would be the maximum number of nodes according to fair air usage?
I think it would be far from the theoretical 8k-20k nodes supported by a single transceiver gateway. However, this is an interesting point, allowing us to figure out some big numbers.
One point not taken into account is the spectrum usage already present, and the collisions between the various technologies sharing the same air space (Sigfox/LoRa/remote controls/ISM devices/weather stations…)
Thanks for posting this here, and adding your own thoughts and simulation!
You need to be very careful in comparing these two technologies. For example, at first glance it seems that the full bandwidth (200 kHz) of a Sigfox gateway is compared to a single channel of a LoRa gateway (but correct me if I’m wrong). Using 8 standard channels and 2 high-speed channels for LoRa, the numbers get much closer to each other. Maybe the main conclusion from the analysis is that Sigfox is more spectrum-efficient. Although I don’t think that assuming 2000 independent 100 Hz channels in a 200 kHz bandwidth is realistic: at full power, adjacent channels will interfere with each other. Another thing to take into account is that if there is a certain difference in signal strength, LoRa can still decode the strongest signal in case of a collision.
My point is not to claim the analysis is wrong, but just to indicate that there are many factors to take into account, also non-capacity-related ones.
What is important, and clear from the blog post, is that with too many messages, a LoRa channel will be congested, resulting in unacceptable packet loss. This is the basis for our ‘30 seconds’ fair access policy.
We defined a maximum load per frequency of 5% (duty-cycle) to prevent too many collisions, and a minimum number of 1000 nodes we want to support per gateway. Over 8 frequencies that gives you per node: 8×24×60×60×0.05/1000 = approx. 30 seconds. We don’t make any assumption on the SF distribution, and don’t include a gain based on the orthogonality.
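Spelled out, the arithmetic looks like this (nothing assumed beyond the numbers above):

```python
# Fair access arithmetic: 8 frequencies, 24 hours, 5% duty cycle per
# frequency, shared by the 1000 nodes we want to support per gateway.
channels, day_s, duty_cycle, nodes = 8, 24 * 60 * 60, 0.05, 1000
print(channels * day_s * duty_cycle / nodes)  # 34.56 -> approx. 30 s per node per day
```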
The key is that it is defined in seconds, and not in messages. This will keep the number of (unnecessary) SF12 transmissions minimal, which the blog post clearly shows is problematic.
The TTN model is based on small cells using simple equipment, not large gateways covering entire cities (although that’s great for now). Together with the scalability features of LoRa such as ADR and the ability to regulate node TX power, this will get us quite far.
Let’s keep researching and discussing this, as it is key to the long-term success of TTN.
Am I understanding correctly that with a maximum of 1000 (25-byte) messages per minute, this boils down to a maximum of 16 messages per second (per channel)? So a maximum of 400 bytes per second (per channel)?
Hi all,
Thomas, please look at my original blog post. I focused on LoRaWAN, not LoRa; for LoRaWAN there are only 3 data channels in the specification for Europe. In the simulation of Sigfox I used 300 Hz instead of 100 Hz, exactly to take channel interference into account. About detecting a signal when multiple signals collide: even in its own datasheet, Semtech states this is only possible when the SFs are different.
30-second updates are only possible at SF6 with 25-byte packets (so only a few bytes of payload), since you have to adhere to the 1% duty cycle. At SF12, one 25-byte message takes almost 1 s, which means the next transmission can only happen 1.5 minutes later (even in the case of a retry). About TX power and ADR: ADR means scaling the SF, but in any case, when the network enables SF6, that is always the best choice. TX power I mentioned in the last paragraph of my blog.
My main point: with a WAN (in every technology) your application will be impacted by other users (even those using other providers) when using unlicensed bands. It does not matter what the rules of the provider’s gateway are; you are not using this network alone. It also means that you are never guaranteed that your communication will work. Don’t get me wrong, I love and encourage what is happening with LPWAN networks such as LoRaWAN and Sigfox. But it is also important that people are educated well; there are too many misleading, commercially driven posts.
@jnuyens, it all depends on the data rate as well, so it’s not easy to give a number. The spreadsheet for LoRa airtime calculation might help answer your question.
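For anyone who prefers code over a spreadsheet, here is a minimal Python sketch of the time-on-air formula from the Semtech SX127x datasheet; the defaults assume 125 kHz bandwidth, coding rate 4/5, an 8-symbol preamble, explicit header and CRC:

```python
import math

def lora_airtime(payload_len, sf, bw=125e3, cr=1, n_preamble=8,
                 explicit_header=True):
    """Time on air in seconds for one LoRa packet.
    payload_len: PHY payload in bytes, sf: spreading factor 7..12,
    cr: coding rate index 1..4 (i.e. 4/5 .. 4/8)."""
    t_sym = (2 ** sf) / bw              # symbol duration in seconds
    de = 1 if t_sym > 0.016 else 0      # low data rate optimization (SF11/12 at 125 kHz)
    h = 0 if explicit_header else 1
    n_payload = 8 + max(
        math.ceil((8 * payload_len - 4 * sf + 28 + 16 - 20 * h)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (n_preamble + 4.25 + n_payload) * t_sym

# A 25-byte packet: about 62 ms at SF7, about 1.5 s at SF12.
for sf in (7, 12):
    print(f"SF{sf}: {lora_airtime(25, sf) * 1000:.0f} ms")
```

At SF12 the same 25-byte packet takes roughly 24 times as long as at SF7, which is why it makes sense to express a fair access policy in seconds rather than in messages.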
I agree scalability is a big issue if the only cooperation is the 1% regulated air time. One interesting aspect of LoRaWAN is the ability of the Network Server to manage various communication parameters. I think that to manage physical limitations such as this one, LoRaWAN network servers will need to cooperate in the way they manage their respective devices.
It would be interesting to run some simulations regarding scaling with managed devices (not naïve and random as in this study). As gateway density increases, more devices would have the opportunity to shift into higher-speed modes, and with optimised power modes supplying only enough power to reach the closest gateway, regional collisions should decrease.
I think part of the attraction of LoRaWAN as an organisation is that it has a chance to specify how the cooperation between network servers could work to optimise the network. It’s a big task in a free bandwidth spectrum, but there are measures that could be taken against uncooperative devices, e.g. not forwarding traffic that is not optimised.
Hopefully the LoRaWAN certification will mature to include such management recommendations. However, it will remain difficult to live with ‘uncooperative’ LoRa devices, i.e. those that just use the PHY and don’t participate in collaborative management. I suppose, though, that in the end it’s in everyone’s best interest to participate in the management, otherwise all connections become unreliable to the point of being unusable. So perhaps mutual self-interest will be enough to ensure the bandwidth is sufficiently well managed. It will certainly be interesting to see it evolve.
The best method would be for everyone to use SF6, since the airtime will be smallest. Still, you cannot synchronize your devices into a time-slotted system, since that would require even more communication from the gateways; do not forget they also have to adhere to the 1% duty cycle rule.
Anyway if you have a scheme, I’m happy to simulate it.
Hi Maarten,
To start with your last point, I think you’re spot on: we are sharing an access medium, and fully dependent on what others do. We should try the best we can within those constraints, but also make sure everyone is aware of the limitations and potential problems. And debunk some of the marketing that claims many tens of thousands of nodes per gateway.
I could not find the 300 Hz in your blog post, but it seems a reasonable number to me. Do you know what channel spacing Sigfox actually uses?
LoRaWAN defines 3 common channels (in EU) that can be used for joins as well as messages. As gateways support 8 channels, the other 5 frequencies can be chosen by the operator, and communicated to the nodes after a join, or statically configured. If you want to compare gateways, it would be good to take all 8 channels into account, and possibly even the high-speed LoRa (SF7 250kHz) and FSK channels.
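To illustrate the channel plan (the 5 extra frequencies below are the ones TTN happens to use; an operator could pick others):

```python
# EU868 uplink channels: 3 common LoRaWAN channels plus 5 operator-defined.
COMMON_MHZ = [868.1, 868.3, 868.5]                  # join-capable, in the spec
OPERATOR_MHZ = [867.1, 867.3, 867.5, 867.7, 867.9]  # operator's choice (TTN's)
ALL_UPLINK_MHZ = COMMON_MHZ + OPERATOR_MHZ          # all 8 a gateway listens on
```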
There seems to be something in the datasheet regarding LoRa co-channel rejection. Maybe we should do some testing to see how this works out in practice.
SF6 is not used at all within LoRaWAN, so maybe it’s better to leave it out altogether.
The ‘30 seconds’ TTN fair access policy actually defines the total time-on-air a node can use per 24h. At SF7 you can send many more messages than at SF12. This also provides an incentive to use the lowest SF possible. But, to reiterate your main point, even if we behave, others might not, and spoil the game for everyone.
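To make that concrete, a rough sketch using approximate 25-byte packet airtimes (125 kHz, CR 4/5, taken from the airtime formula posted earlier in this thread):

```python
# Approximate 25-byte airtimes per SF, and how many such messages fit
# in the 30 s/day fair access budget.
airtime_s = {7: 0.062, 8: 0.113, 9: 0.206, 10: 0.412, 11: 0.823, 12: 1.483}
for sf, t in airtime_s.items():
    print(f"SF{sf}: ~{int(30 // t)} messages/day")  # SF7 ~483, SF12 ~20
```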
Thanks for your work and participation in this forum!
In our capacity testing we’ve seen collision rates at about 2× this rate, because of up/down collisions caused when the gateway is ACKing or sending MAC traffic. In our testing, 1000 25-byte SF7 transactions per minute with 5% ACK/MAC downlink resulted in 67% message loss using standard 8-channel LoRaWAN. This does get better with more gateways, as the downlink traffic can be shared. SF11-12 downlink is extremely harmful to the network. This testing was not compliant with the 1% gateway duty cycle, as it was for the US market.
This is one of the primary reasons Symphony Link from Link Labs (currently only for 900 MHz) was built. You need a synchronous system to coordinate TDMA/FDMA and (SF)MA, along with MAC ACKs. The 1% duty cycle limit in 868 MHz is the reason that LoRaWAN has to use this Aloha scheme, which results in a high collision rate. 1% also dictates very few ACKs, which is good for capacity, but bad for reliability.
With LoRa, ANY LoRa traffic affects the gateway baseband modems the same way, whether or not it is part of the same network. This is compounded if the public/private PHY header is not being used (which the Alliance is proposing). If TTN, an operator, and several private networks all try to coexist, the interference could be fierce.
In Symphony Link, because of the higher-rate frame header transmission (every 2 seconds), nodes can automatically set the SF to match the link, and the gateway can actively move uplink channels around based on other interference it is seeing. Symphony Link also compresses all acknowledgments, info blocks, and downlink for the frame into a single message, which saves on preamble time-on-air compared to the LoRaWAN unicast architecture.
To @JamesC 's point, assigning some sort of slotting with CSMA to the LoRaWAN uplink scheme could as much as double the capacity. I’m unsure how this could be coordinated though, without more gateway transmissions.
The attached Whitepaper from Real Wireless goes into these and other considerations.
Real-Wireless–LPWA-2016.pdf (1.4 MB)
Some sort of slotting would be needed, I think, if the communication overhead could somehow be kept manageable.
In a single-gateway scenario, at least, which is what you focused on in the initial simulation, I think. In an OTAA, Class A scenario, maybe it would be as simple as the network server allocating the time slot as the network address, while internally keeping track of the time-on-air requirements for each device to build up the slot allocations. So if each address is a fixed unit of time, it might allocate addresses 1, 4 and 5, where address 1 gets 3 time slots, address 4 gets 1 time slot, address 5 gets 2 time slots, and so on.
Then periodically (10+ minutes) the gateway sends out a synchronisation beacon, with a time slot allocated to handle joins.
Beacon
slot1 → address 1
slot2 → address 2
…
Free Slot(s) for Joins
The mote would only need to listen for the beacon initially, to determine when to join. After that it would sync to its allocated time slot based on its allocated network address, so there is no additional slot-assignment traffic…
Of course this is the ‘best case’, with every node cooperating, but it already implies some limits, such as that all devices would only have a send window once every beacon period.
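To show what I mean, here is a hypothetical sketch of the allocation side. The slot length, beacon period and all names are my own illustrative assumptions, not LoRaWAN mechanisms:

```python
# Sketch of the idea above: the network address doubles as the first time
# slot, and a device's airtime need determines how many consecutive slots
# (addresses) it consumes. All constants are assumptions for illustration.
import math

SLOT_UNIT_S = 2.0        # assumed fixed slot length
BEACON_PERIOD_S = 600.0  # beacon every 10 minutes, as suggested above
JOIN_SLOTS = 2           # free slots reserved for joins after the beacon

def allocate(airtime_needs):
    """airtime_needs: {device_id: seconds of airtime per beacon period}.
    Returns {device_id: (address, n_slots)} where address = first slot."""
    total_slots = int(BEACON_PERIOD_S / SLOT_UNIT_S) - JOIN_SLOTS
    allocations, next_address = {}, 1
    for device, need_s in airtime_needs.items():
        n_slots = max(1, math.ceil(need_s / SLOT_UNIT_S))
        if next_address + n_slots - 1 > total_slots:
            raise RuntimeError("beacon period exhausted")
        allocations[device] = (next_address, n_slots)
        next_address += n_slots
    return allocations

# Reproduces the example above: addresses 1, 4 and 5 with 3, 1 and 2 slots.
# A device then transmits in slots address .. address + n_slots - 1 after
# each beacon, so no extra slot-assignment traffic is needed.
print(allocate({"dev-a": 5.0, "dev-b": 1.5, "dev-c": 3.0}))
```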
Thanks for the important work on collisions.
A gateway should be able to detect collisions, so I assume it can calculate an error rate, though indeed it cannot know who is to blame or what is lost.
Here is a capacity comparison between different technologies.
Sigfox is more spectrum-efficient than LoRa because of its modulation technique.
Sigfox and LoRaWAN use ALOHA, which is very inefficient and causes collisions between nodes. Ingenu, WAVIOT and Symphony Link use TDMA, a better solution.
I disagree on this, because when there is full coverage at SF6 (not in LoRaWAN, but never mind, take SF7) it will leave the orthogonal SF12-SF8 channels empty. ADR schemes should be used in this case to drive LoRaWAN to the limit and minimize packet loss. Using SF12-SF8 as well gives better spectral efficiency.
Good discussion here. I think it points to the fact that over time, long range may not be the most important factor for LoRa as it becomes widely deployed. Capacity will move to the top of the list, particularly for the downlink, and so more gateways will be added to give a denser network with shorter range and lower spreading factors.
There was a post above implying this is some kind of failure or compromise, but I don’t think it is. LoRaWAN has the benefit of very flexible and dynamic deployment. Gateways can be dropped into the middle of a city and can improve capacity, battery life and data rate. At the same time, the same gateways and nodes can serve an IoT connection over 15 km to a cow or hedgehog in a field. That’s a very versatile system.
To make a good comparison you have to include the cost of the deployment.
A LoRa network can have shorter range, but be cheaper, so more gateways could be added to get the same PER as Sigfox?
Exactly. Have you seen a Sigfox gateway? I don’t know the cost, but it’s a very big (19" rack), expensive-looking box, so I assume it would cost several €k.
By comparison, LoRaWAN gateway cost is dropping rapidly, with TTN now setting the bar at €200.
@nestorayuso do you have more information about the technology used by WAVIOT? The table on their website is commercially targeted and will only focus on the best case.
E.g. the number of nodes per gateway: in most cases the spectrum is the bottleneck, not the gateway.
About TDMA vs. random access: TDMA will perform better, but needs more synchronization overhead and thus more listening by the node, which will impact the power consumption. Moreover, TDMA only works when you are in control of the network, which is very difficult on an unlicensed spectrum.
On the topic of using more gateways: even with a TX power of 2 dBm and SF7 (sensitivity -123 dBm), according to the Hata propagation model for cities the range is 860 m (suburban: 1.7 km), resulting in a coverage of about 3 km². So there will of course always be an impact.
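For reference, here is a sketch of that range estimate with the (Okumura-)Hata urban model; the antenna heights (30 m gateway, 1.5 m node) are my assumptions, so it lands near, not exactly on, the 860 m figure:

```python
# Hata urban model: solve the path-loss equation for distance, given the
# link budget (TX power minus receiver sensitivity). Antenna heights are
# assumptions; the quoted 860 m depends on the heights actually used.
import math

def hata_urban_range_km(tx_dbm, sens_dbm, f_mhz=868.0, hb_m=30.0, hm_m=1.5):
    max_path_loss = tx_dbm - sens_dbm                 # link budget in dB
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * hm_m
            - (1.56 * math.log10(f_mhz) - 0.8))       # mobile antenna correction
    fixed = (69.55 + 26.16 * math.log10(f_mhz)
             - 13.82 * math.log10(hb_m) - a_hm)
    slope = 44.9 - 6.55 * math.log10(hb_m)
    return 10 ** ((max_path_loss - fixed) / slope)    # distance in km

r = hata_urban_range_km(tx_dbm=2, sens_dbm=-123)
print(f"range ≈ {r * 1000:.0f} m, coverage ≈ {math.pi * r ** 2:.1f} km²")
```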
WAVIOT uses ultra-narrowband DBPSK modulation at 50 bps (Sigfox is 100 bps). On top of that, it uses FEC/coding gain, improving sensitivity and reducing the data rate to 8 or 12 bps.
The WAVIOT base station receiver bandwidth is 500 kHz versus 192 kHz for Sigfox: more capacity, but more processing power needed (they use Nvidia CUDA). They also use three sectoral antennas versus an omnidirectional antenna to get more range and capacity.
Of course TDMA synchronization has drawbacks, as you said, and precludes the possibility of making an uplink-only device.
I think TDMA can be an option in UNB solutions, but it should be a must in less spectrum-friendly solutions like spread spectrum or LoRa.
thx for the info!
But 8 bps means a 25-byte message (200 bits) takes 25 seconds, and according to ETSI you can only transmit a maximum of 36 s/hour, so that means at most 1 message/hour and no room for retransmission, correct?
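Checking the numbers:

```python
# 25 bytes at 8 bps vs. the ETSI duty-cycle budget of 36 s per hour.
message_s = 25 * 8 / 8             # 200 bits at 8 bps = 25 s on air
budget_s = 3600 * 0.01             # 1% duty cycle = 36 s per hour
print(int(budget_s // message_s))  # -> 1 message per hour, no retry margin
```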