I understand the limitations of sharing a channel among multiple nodes.
That's why there are many contention or channel-division mechanisms (CSMA, TDMA, FDMA, OFDMA). In most protocols there is no duty cycle limitation as such, as far as I know; any duty cycle they exhibit is just an outcome of the upper MAC.
However, at the PHY level, independent of the MAC (LoRaWAN), LoRa itself seems to have such a silent time. That's my question: where does that silent time come from?
Is it LoRaWAN or the LoRa PHY itself? Depending on the answer, there may be a chance to propose a new MAC (not LoRaWAN…).
The frequencies used by LoRa require (at least in the EU, due to government regulations) that quiet time. You can propose a new MAC (there are others around), but it still has to observe the silent time, as the regulations are the same for all uses of these frequencies.
The duty cycle depends on the (inter)national regulations for the frequency band you use. It also depends on the range, e.r.p. and application. E.g. a short-range 2.4 GHz application may not have to observe any duty cycle at all, while a long-range application might have to comply with a 15% duty cycle. The frequency bands and application types LoRaWAN uses have a 1% duty cycle.
Well, at least in Europe. A quick Google search showed that US FCC regulations seem to be much less strict, with practically no duty cycle limits in any ISM band. There you could use the Listen Before Talk access scheme, which is also supported by LoRaWAN. However, TTN still enforces a 0.1% fair access duty cycle, AFAIK even when using LBT.
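To put a number on it: the minimum silence after a transmission follows directly from the duty cycle definition. A minimal sketch (the 1 s airtime is just an example figure):

```python
def min_off_time_s(time_on_air_s: float, duty_cycle: float) -> float:
    """Minimum silence after a transmission so the duty cycle holds:
    t_on / (t_on + t_off) <= d  =>  t_off >= t_on * (1/d - 1)."""
    return time_on_air_s * (1.0 / duty_cycle - 1.0)

print(min_off_time_s(1.0, 0.01))   # 1 s airtime in a 1% band -> 99 s silent
print(min_off_time_s(1.0, 0.001))  # TTN's 0.1% fair access   -> 999 s silent
```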
Well, when I check the KR region, a "silent time" is not fundamentally needed.
In the KR region, when LBT succeeds, nodes can just transmit. That means there is no mandatory interval between transmissions (silent time), and I think the LoRa PHY itself doesn't need such a time to send messages continuously; such requirements come purely from government regulations.
So, in a country like Korea, a node can send a bunch of messages as long as other nodes are silent.
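For reference, LBT boils down to something like this (a rough sketch; the threshold, sense window and `radio` driver are illustrative assumptions, not the actual KR920 parameters):

```python
import random
import time

CCA_THRESHOLD_DBM = -80  # assumed clear-channel threshold, not the regulatory value
CCA_DURATION_S = 0.005   # assumed carrier-sense window

def listen_before_talk(radio, frame, max_retries=5):
    """Transmit only if the channel is sensed idle; no silent time afterwards.
    `radio` is a hypothetical driver exposing rssi() and transmit()."""
    for _ in range(max_retries):
        t_end = time.monotonic() + CCA_DURATION_S
        busy = False
        while time.monotonic() < t_end:
            if radio.rssi() > CCA_THRESHOLD_DBM:  # energy detected: channel busy
                busy = True
                break
        if not busy:
            radio.transmit(frame)
            return True
        time.sleep(random.uniform(0.01, 0.1))  # random backoff, then sense again
    return False
```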
Apparently LBT requires an additional FPGA not available in most gateways (although I think the Kerlink has it, maybe @kersing knows?), and only the RN2903 with the latest (beta?) firmware supports it as a node.
Mind you, the TTN network server (or any other LoRaWAN NS) might still enforce a fair-use duty cycle (0.1% on TTN).
Ok, I believe silent times and duty cycle issues are now well understood by everyone. So, could you please help me discuss fragmentation?
Since the maximum payload may vary a lot depending on the data rate, even an application based on short messages (let's say 60 bytes) may not be able to fit them in one transmission. And the actual smallest MaxPayload can be as low as 11 bytes!
So, if you don't deal with fragmentation at some point, you can't take advantage of the 250-byte maximum payload achievable at some data rates, and may have to set a very restrictive limit on the payload size.
That's why I believe a standard for fragmentation would be very important to encourage better use of the channel.
This is a common design dilemma. If you want a node that works under all circumstances, you will need to trim down the payload so you don't violate the duty cycle even under the worst RF conditions. If you want a product that can send a large payload, then you will have to make sure the RF conditions remain good (e.g. a better antenna), or do some on-the-fly optimizations.
You can design a vehicle to go very fast or to lift heavy stuff, but not both.
Hahaha, nice idea comparing this tradeoff to the strength vs. speed capacity, @Epyon! I think you got my point here.
But my dilemma has both a practical and a philosophical side: I know that LoRa/LoRaWAN can handle my communication needs. I mean, I don't need a high data rate or high volumes of data. But I may sometimes need to send "big" data frames, let's say 100-200 bytes long.
One approach would be to constrain my network architecture so that the link budget (accounting for distance and antenna gains) is big enough to guarantee a data rate that supports this payload size.
The other would be to limit the payload (in my application) to the lowest maximum payload possible (11 bytes on the AU915 channel plan).
So, in my opinion, neither of them is a good option. With the first, we can't take advantage of LoRa's great ability to deal with long distances, which minimizes the number of gateways needed to cover an area. On the other hand, the second approach leads to underutilization and a very bad payload/overhead ratio, which is obviously a bad use of the RF channel.
This leads to my current belief that the only effective approach is packet fragmentation/defragmentation, so one can send bigger packages under any data rate / payload limitation, taking advantage of all of LoRa's capabilities. If this is correct, I believe LoRaWAN should provide a fragmentation standard so we don't end up with a lot of proprietary and incompatible solutions. Also, having this standardized would lead manufacturers and the developer community to integrate it into the stack, encouraging everyone to use it and therefore helping to achieve better overall RF channel usage.
I'm not quite sure what you mean by 'fragmentation', but in any case the problem you describe should be solved at the application layer, not at the physical-to-network layers where LoRa(WAN) is defined. The LoRa(WAN) spec can support the needs you describe, but the specific problems your application creates (occasionally larger payloads) should be addressed by the application itself.
No, not by increasing the payload itself, but by improving the payload/overhead ratio.
If you need to transmit a 100-byte payload, it's a better approach (for RF channel usage) to send it in one packet of 113 bytes (13 bytes of LoRaWAN overhead) than in 10 packets of 23 bytes each: 230 bytes total!
It's obviously much better to use the channel to transmit the app's data than overhead!
By fragmentation I mean the process of breaking a big payload into smaller packets that fit the protocol's maximum payload size (MTU).
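To make it concrete, here's a minimal sketch of what I have in mind (the 3-byte fragment header is purely illustrative, not any existing standard):

```python
import math

HEADER_BYTES = 3  # hypothetical header: message id, fragment index, fragment count

def fragment(payload: bytes, max_frm_payload: int) -> list[bytes]:
    """Split a payload into fragments that fit the current max FRMPayload."""
    chunk = max_frm_payload - HEADER_BYTES
    total = math.ceil(len(payload) / chunk)
    return [bytes([0x01, i, total]) + payload[i * chunk:(i + 1) * chunk]
            for i in range(total)]

frags = fragment(bytes(100), 11)               # 100-byte message, 11-byte MaxPayload
print(len(frags), sum(len(f) for f in frags))  # 13 fragments, 139 FRMPayload bytes
# plus ~13 bytes of LoRaWAN MAC overhead per uplink: 13 * 13 = 169 more bytes on air
```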
Yes, you are correct. For now, over LoRaWAN, it has to be solved at the app layer. But in other protocols (like TCP/IP) it is solved at the network layer. I haven't tested it yet, but LinkLabs claims that their Symphony Link (an alternative to LoRaWAN, based on the LoRa PHY layer) also solves fragmentation at the network layer.
I am implementing this in my app, but I would be much more comfortable relying on a network-layer fragmentation solution. And, as I said before, I believe it would lead to better use of the RF channel and to standardized solutions.
Sure, it's not for big payloads…
But 100 bytes isn't big, is it? I don't think so, given that some LoRaWAN data rates can support 243-byte payloads.
So, imagine an application that may occasionally need to send a 100-150 byte message, and that can take real advantage of a LoRaWAN network (long range, low cost, low power). Without fragmentation at the network layer it will have to implement a fairly complex algorithm at its app layer, or make bad use of the RF channel, flooding it with almost twice the bytes that actually need to be transmitted.
Actually it isn't an ordinary sensor.
There are some pieces of equipment that use a legacy communication protocol based on some 'big' messages.
I am developing a remote connectivity solution for them, but it's mandatory to keep using this legacy protocol end-to-end. The LoRa/LoRaWAN data rate and traffic limitations are acceptable, and we're deploying the whole network infrastructure (gateways, network servers…).
Ok, so let's consider that 100 bytes is indeed a big payload. But 11 bytes is a really tiny payload, right?
In that case, what would be a good solution for sending, let's say, ordinary 25-50 byte sensor messages, considering that the lowest maximum payload can be as low as 11 bytes?
Having the LoRaWAN stack provide it would solve your issue; however, it would encourage people to send larger packets, as the network stack takes care of it anyway.
Also, keep in mind that EU868 at SF12 does not allow another transmission for minutes (time-on-air restrictions) after a large packet has been sent. So the LoRaWAN stack would have to stay awake for minutes to handle the next transmission, wasting battery power all that time.
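To put numbers on that: the time-on-air formula from the Semtech SX127x datasheet, applied to a full-size SF12 packet (a sketch; the defaults below assume the usual EU868 settings with low data rate optimization enabled):

```python
import math

def lora_toa_s(payload_len, sf=12, bw=125e3, cr=1, n_preamble=8,
               implicit_header=0, ldro=1, crc=1):
    """Time on air per the SX127x datasheet formula (cr=1 means coding rate 4/5)."""
    t_sym = (2 ** sf) / bw
    num = 8 * payload_len - 4 * sf + 28 + 16 * crc - 20 * implicit_header
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * ldro))) * (cr + 4), 0)
    return (n_preamble + 4.25) * t_sym + n_payload * t_sym

toa = lora_toa_s(51)  # a 51-byte PHY payload at SF12BW125
print(f"ToA: {toa:.2f} s, 1% duty cycle off-time: {toa * 99:.0f} s")
# -> roughly 2.5 s on air, then about 4 minutes of enforced silence on that sub-band
```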
Also, it will be quite complicated, because you must be 100% sure that every fragment reached the backend. To accomplish that you must use ACKs, or some form of CRC where the node must resend everything when the message is not received 100% intact.
You'll hit duty cycle restrictions / the fair use policy very quickly.
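In other words, reliable fragment delivery ends up looking something like this (a sketch against a hypothetical `link` transport; note how every retry eats further into the duty cycle budget):

```python
def send_reliably(link, frags, max_attempts=3):
    """Confirmed-uplink style delivery: every fragment must be acknowledged,
    or the whole message fails and all the airtime spent on it is wasted.
    `link` is a hypothetical transport whose send_confirmed() returns ack status."""
    for frag in frags:
        for _ in range(max_attempts):
            if link.send_confirmed(frag):  # blocks for RX windows + duty cycle wait
                break
        else:
            return False  # one lost fragment invalidates the entire message
    return True
```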
Yes, I’m aware of it.
AU915, which is also used here in Brazil, has some time-on-air restrictions as well. This particular use case won't have battery power limitations and will probably be implemented as Class C.
Anyway, I just think fragmentation would play an important role, maximizing the efficiency of the RF channel and avoiding poor solutions like limiting the application payload to the lowest possible maximum.
Unfortunately, that is the approach I have seen recommended almost everywhere I look.