Why is the LoRa chip rate equal to the bandwidth?

Please note: I acknowledge that this question is related to LoRa as opposed to The Things Network or LoRaWAN specifically; however, I have spent a number of hours researching this online and cannot seem to find an answer. I have also asked this question on the Electrical Engineering Stack Exchange, but it is still not clear to me what the reason is. I am asking the question on this forum as I have not come across another LoRa-specific forum.

I also acknowledge that some may accept that this is part of the LoRa standard and do not see a need to question it, however I consider it an important characteristic which I would like to understand.

Start of Post:
I’m trying to learn about how the LoRa transmission standard works.

I am familiar with the concept of bit, symbol and chip.

I understand that the chip rate (the number of frequency steps per second) is equal to the bandwidth. For example, a bandwidth of 125 kHz means there are 125,000 chips per second.

It is not clear to me why the bandwidth (the range of frequencies that the chips can occupy) is linked to how many chips there are per unit of time (how quickly the transmission frequency moves across the bandwidth).

Why are they linked? In other words, why is the transmit time of each chip fixed to the bandwidth? Is it a convention, or is there a scientific/mathematical reason for it? Why can't the chip rate be 250,000 chips per second with a 125 kHz bandwidth, rather than only with a 250 kHz bandwidth?
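
To make my mental model concrete, here is a minimal Python sketch of the timing relationships as I understand them from AN1200.22 (the function and variable names are my own, not from any LoRa library):

```python
# Standard LoRa timing relationships (per Semtech AN1200.22).
# Illustrative only; names are my own.

def lora_timing(bw_hz: float, sf: int) -> dict:
    chip_rate = bw_hz                 # the 1:1 relationship I am asking about
    chip_time = 1.0 / chip_rate       # seconds per chip
    chips_per_symbol = 2 ** sf        # one symbol sweeps the full bandwidth
    symbol_time = chips_per_symbol / bw_hz
    return {
        "chip_rate_cps": chip_rate,
        "chip_time_s": chip_time,
        "symbol_time_s": symbol_time,
    }

print(lora_timing(125_000, 7))   # SF7 @ 125 kHz: 1.024 ms per symbol
```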

Sources:

Understanding the relationship between LoRa chips, chirps, symbols and bits

https://www.mobilefish.com/download/lora/lora_part15.pdf (slide 3)

https://www.frugalprototype.com/wp-content/uploads/2016/08/an1200.22.pdf (page 10)

Have you checked the source for LoRa: Semtech?

Maybe https://www.semtech.com/uploads/technology/LoRa/lora-and-lorawan.pdf provides some insight. Right now I don’t have the time to read and absorb it properly.

Each step in SF or BW is about 2.5 dB of sensitivity with the defined settings, by design.

Why would you want to use 250k chips at 125 kHz?
The resulting sensitivity could be similar to using the next SF value?

The specific values for modulation were chosen for orthogonality and sensitivity range, based on the data presented in the Semtech docs.

Hi @kersing,

Thank you for the comment.

I have had a read of the linked document. Unfortunately, it does not explain why the chip rate is equal to the bandwidth.

Hi @jreiss,

I understand that sensitivity would be the same; however, can you clarify why there is a 1:1 ratio between the chip rate and the bandwidth? In other words, if all the chip rates were doubled or halved, wouldn’t that be negated? I am trying to understand the significance of the 1:1 ratio.

Also, is my understanding correct that orthogonality means that two signals transmitting on the same frequency, but at different spreading factors can be successfully received and processed by a device?

The reason is basically that it is the simplest choice: why choose a ratio other than 1? Increasing the chip rate without changing the bandwidth gives sharper frequency variation, which ends up increasing out-of-band emissions and creating regulatory issues, and/or puts a higher constraint on the PLL bandwidth. On the other hand, lowering the chip rate immediately reduces your data rate and increases sensitivity to frequency drift.
There is also some nice simplification when the ratio is 1, where you can easily swap frequency and time offsets: having a non-unit ratio means adding scaling a bit everywhere in the receiver.
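
A minimal numpy sketch of that simplification (illustrative only, not receiver code): with the ratio at 1 and the receiver sampling at fs = BW, a cyclic time shift of the base chirp lands exactly on an FFT frequency bin, so demodulation is just dechirp + FFT with no rescaling anywhere.

```python
import numpy as np

SF = 7
N = 2 ** SF  # chips per symbol; also samples per symbol when fs = BW

n = np.arange(N)
base_upchirp = np.exp(1j * np.pi * n * n / N)  # normalized LoRa up-chirp

def tx_symbol(sym: int) -> np.ndarray:
    # a symbol is a cyclic time shift of the base chirp by `sym` chips
    return np.roll(base_upchirp, -sym)

def rx_symbol(samples: np.ndarray) -> int:
    # dechirp, then the FFT peak index *is* the symbol: time offset and
    # frequency offset are interchangeable with a unit ratio
    return int(np.argmax(np.abs(np.fft.fft(samples * base_upchirp.conj()))))

assert rx_symbol(tx_symbol(42)) == 42
```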

There is a mathematical relationship between chip rate, symbol rate, and bandwidth. This is true of all communications systems and not specific to LoRa. The “hand-wave answer” is related to the Fourier Transform, where each change you make in a time-domain waveform consumes additional bandwidth.

Occupied bandwidth = chip rate (or symbol rate, if there is 1 chip per symbol) × (1 + filter roll-off).

This may be an unsatisfying answer, but you’re broaching a very deep subject that is rooted in math. Suffice it to say, the relationship is immutable and defined by physics.
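
As a quick sanity check of that rule of thumb (the 0.25 roll-off below is just an example value, not anything from the LoRa spec):

```python
def occupied_bw(chip_rate_cps: float, rolloff: float) -> float:
    # occupied bandwidth = chip rate * (1 + filter roll-off)
    return chip_rate_cps * (1 + rolloff)

print(occupied_bw(125_000, 0.25))  # 156250.0 Hz
```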

Hi @Clams,

Thank you for your comment. Your point about frequency drift at higher SF makes sense (symbols are longer, making them more susceptible to frequency drift).

I just want to clarify this:

Increasing the chip rate without changing the bandwidth gives sharper frequency variation, which ends up increasing out-of-band emissions and creating regulatory issues, and/or puts a higher constraint on the PLL bandwidth

Out-of-Band Emission
If we continue to follow the 1:1 rule between chip rate and bandwidth and double the bandwidth from 125 kHz to 250 kHz, the chip rate also doubles from 125,000 to 250,000 chips per second. I understand that EU868 supports a 250 kHz bandwidth, so this does occur in the real world (though it is optional and only on DR6/SF7).

The benefit of this is that it halves the transmit time per chip whilst doubling the frequency spacing, allowing each chip’s frequency to remain distinguishable at the same resolution (the ratio of transmit time to frequency spacing remains the same).
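
Here are the numbers behind that, as I understand them (SF7; the naming is mine):

```python
SF = 7
for bw_hz in (125_000, 250_000):
    chip_time = 1 / bw_hz            # halves when the bandwidth doubles
    bin_spacing = bw_hz / 2 ** SF    # Hz between adjacent symbol frequencies; doubles
    symbol_time = 2 ** SF / bw_hz    # FFT window length
    # the FFT's frequency resolution is 1/symbol_time, which always equals
    # bin_spacing, so the product below stays at 1.0 for both bandwidths:
    print(bw_hz, chip_time, bin_spacing, symbol_time * bin_spacing)
```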

Is it correct that transmitting more chips per unit of time (changing the frequency more often), regardless of whether the bandwidth is increased, means a greater number of high-frequency components, increasing out-of-band emissions and leading to more significant side lobes and spectral leakage?

Independent of the time-frequency ratio benefit, it’s not clear to me how this would be different from doubling the chip rate whilst maintaining the bandwidth.

PLL Bandwidth
If PLL bandwidth is a measure of how many frequency shifts can be tracked per second, why does it matter whether the chip rate is equal to the bandwidth? Does restarting the up-chirp cycle through the bandwidth have any effect on the PLL bandwidth (as long as the number of frequency changes per second does not exceed it)?


Alternatively, is the whole reason for the 1:1 ratio just that 125,000 chips per second is within the range of typical PLL hardware, and it is therefore easiest to match the bandwidth? A happy coincidence?

The argument is more about the worst case supported: there is official support for up to 500 kHz for sub-GHz radios, and changing the time/bandwidth ratio would impact this worst case.
And the PLL bandwidth does not need to match the chip rate: with some digital filtering you can relax the constraint on the PLL to be smaller, but there is of course a limit to how much you can filter the signal to match the PLL bandwidth, and increasing the chips per second limits this filtering even more.
But overall I feel you are overthinking the reason: like I said, having a ratio of 1 is just the most obvious starting point. If you can build a radio cheaply that meets all the requirements with this choice, there is no reason to change it: if you want more data rate, you just double the sampling frequency, and from a processing point of view nothing has changed.
Trying to over-optimize some key modulation parameter when you start to design a new radio physical layer is just asking for trouble in the long run when you try to increase data rate/sensitivity/…: they started with a simple choice, proved that it was feasible, and just stuck with it.

Awesome, thank you @Clams. Now I’m getting it.

Here’s my summary:

There’s not really a deep law of physics which says there has to be a 1:1 ratio between the chip rate and the bandwidth, rather it’s done for simplicity and practicality.

The chip rate being equal to the bandwidth (e.g., a 125 kHz bandwidth gives 125,000 chips/sec) serves as a stable reference/anchor point. By tuning the spreading factor, you can balance data rate and signal robustness. In other words, the technical constraints of the ratio can easily be worked around.

As mentioned earlier, by maintaining a 1:1 ratio, if you halve the transmit time per chip, you double the frequency spacing, allowing each chip’s frequency to remain distinguishable at the same resolution (the ratio of transmit time to frequency spacing remains the same).

Values like 125,000 or 250,000 chips/sec align well with typical PLL (phase-locked loop) systems and filtering capabilities, helping avoid excessive out-of-band emissions or hardware complexity.

It keeps the maths simple.
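
To round out the summary, the standard bit-rate relation from AN1200.22 shows how SF and BW trade off once the chip rate is pinned to the bandwidth (a sketch; the 4/5 coding rate is just the common default):

```python
def lora_bitrate(sf: int, bw_hz: float, cr: float = 4 / 5) -> float:
    symbol_rate = bw_hz / 2 ** sf   # symbols per second
    return sf * symbol_rate * cr    # bits per second

print(lora_bitrate(7, 125_000))    # ~5469 bps
print(lora_bitrate(12, 125_000))   # ~293 bps
```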