I was wondering: the default error coding rate for LoRaWAN is 4/5, so what would happen if you sent a message with coding rate 4/8?
Using a higher coding rate might allow messages to get through in an environment with a lower signal-to-noise ratio, or achieve longer range, because the message is sent with more redundant bits that can help correct a corrupted reception.
Changing the coding rate is possible because LoRaWAN uses explicit headers at the LoRa modulation level, see http://www.semtech.com/forum/images/datasheet/an1200.23.pdf (page 3, ImplicitHeaderModeOn = 0, which means explicit mode). In explicit mode the transmitter sends along a header that specifies the payload length, the coding rate and the presence of a payload CRC. The receiver follows this header and can still decode the packet even if it does not use the default 4/5 error coding rate.
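As an illustration of what this looks like at the radio level, here is a minimal sketch assuming an SX1276-style chip; the register layout is taken from its datasheet, and writeReg() is a hypothetical SPI helper, not part of a specific library:

    #include <stdint.h>

    /* SX1276 RegModemConfig1: Bw[7:4], CodingRate[3:1], ImplicitHeaderModeOn[0] */
    #define REG_MODEM_CONFIG1  0x1D

    #define BW_125KHZ          (0x7 << 4)  /* 0b0111 = 125 kHz bandwidth */
    #define CODING_RATE_4_8    (0x4 << 1)  /* 0b100 = coding rate 4/8 */
    #define EXPLICIT_HEADER    0x0         /* ImplicitHeaderModeOn = 0 */

    void writeReg(uint8_t addr, uint8_t value);  /* provided by your SPI layer */

    void configure_explicit_cr48(void)
    {
        /* The coding rate travels in the explicit header, so the receiver
         * will follow it even though it differs from the usual 4/5. */
        writeReg(REG_MODEM_CONFIG1, BW_125KHZ | CODING_RATE_4_8 | EXPLICIT_HEADER);
    }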
I tried this by modifying a define in the LMIC library, and guess what … it works:
    {
      "payload": "YmVydHJpa0BzaWtrZW4ubmw=",
      "port": 1,
      "counter": 7,
      "dev_eui": "0000000019800501",
      "metadata": [{
        "frequency": 867.3,
        "datarate": "SF10BW125",
        "codingrate": "4/8",
        "gateway_timestamp": 1030392300,
        "channel": 4,
        "server_time": "2016-09-12T19:40:20.953571218Z",
        "rssi": -120,
        "lsnr": -14,
        "rfchain": 0,
        "crc": 1,
        "modulation": "LORA",
        "gateway_eui": "008000000000B8B6",
        "altitude": 27,
        "longitude": 4.70844,
        "latitude": 52.0182
      }]
    }
The codingrate field now shows "4/8" instead of the usual "4/5".
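For reference, here is a sketch of the kind of one-line change involved, based on an arduino-lmic-style source tree (the exact table name and location can differ per LMIC version): in lmic.c, the table that maps LoRaWAN data rates to radio settings hard-codes CR_4_5, and changing the entry for the data rate you transmit on (SF10BW125 here) is enough:

    static CONST_TABLE(u1_t, _DR2RPS_CRC)[] = {
        ILLEGAL_RPS,
        (u1_t)MAKERPS(SF12, BW125, CR_4_5, 0, 0),
        (u1_t)MAKERPS(SF11, BW125, CR_4_5, 0, 0),
        (u1_t)MAKERPS(SF10, BW125, CR_4_8, 0, 0),  /* was CR_4_5 */
        (u1_t)MAKERPS(SF9,  BW125, CR_4_5, 0, 0),
        (u1_t)MAKERPS(SF8,  BW125, CR_4_5, 0, 0),
        (u1_t)MAKERPS(SF7,  BW125, CR_4_5, 0, 0),
        (u1_t)MAKERPS(SF7,  BW250, CR_4_5, 0, 0),
        (u1_t)MAKERPS(FSK,  BW125, CR_4_5, 0, 0),
        ILLEGAL_RPS
    };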
I can imagine that, similarly, we could also enable the low-data-rate-optimisation bit to get a little bit better performance. Enabling this bit makes it a little easier for the receiver to distinguish between symbols, by encoding only (SF - 2) bits per symbol instead of SF bits. Note that, unlike the coding rate, this bit is not announced in the explicit header, so the receiver has to be configured with a matching setting.
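As a minimal sketch of what forcing this bit could look like, again assuming an SX1276-style radio (on the SX1272 the bit lives in RegModemConfig1 instead); readReg()/writeReg() are hypothetical SPI helpers, not a specific library API:

    #include <stdint.h>

    #define REG_MODEM_CONFIG3        0x26  /* SX1276 */
    #define LOW_DATA_RATE_OPTIMIZE   0x08  /* bit 3 */

    uint8_t readReg(uint8_t addr);               /* provided by your SPI layer */
    void writeReg(uint8_t addr, uint8_t value);

    void enable_ldro(void)
    {
        /* Read-modify-write so the other RegModemConfig3 bits
         * (e.g. AgcAutoOn) are left untouched. */
        writeReg(REG_MODEM_CONFIG3,
                 readReg(REG_MODEM_CONFIG3) | LOW_DATA_RATE_OPTIMIZE);
    }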
The downside of both methods is that the on-air time increases: going from coding rate 4/5 to 4/8 stretches the payload part of the packet by a factor of 8/5. So I think this can be considered abuse of the network, because it increases congestion for everyone. Any opinions?
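To put a number on it, here is a small calculation using the LoRa time-on-air formula from the Semtech SX1276 datasheet (also in AN1200.13); the 30-byte PHY payload is my assumption for the packet above (17 bytes of application payload plus 13 bytes of LoRaWAN framing):

    #include <math.h>
    #include <stdio.h>

    /* Time on air in ms; cr_denom is 5..8 for coding rates 4/5..4/8,
     * assuming explicit header mode. */
    double lora_airtime_ms(int pl_bytes, int sf, double bw_hz, int cr_denom,
                           int preamble_syms, int crc_on, int ldro_on)
    {
        double tsym_ms = (double)(1 << sf) / bw_hz * 1000.0;
        double num = 8.0 * pl_bytes - 4.0 * sf + 28.0 + 16.0 * crc_on;
        double den = 4.0 * (sf - 2 * ldro_on);
        double n_payload = 8.0 + fmax(ceil(num / den) * cr_denom, 0.0);
        return (preamble_syms + 4.25 + n_payload) * tsym_ms;
    }

    int main(void)
    {
        /* 30-byte payload at SF10BW125, 8-symbol preamble, CRC on, LDRO off */
        printf("4/5: %.1f ms\n", lora_airtime_ms(30, 10, 125000.0, 5, 8, 1, 0));
        printf("4/8: %.1f ms\n", lora_airtime_ms(30, 10, 125000.0, 8, 8, 1, 0));
        return 0;
    }

For this example the airtime works out to roughly 453 ms at 4/5 versus 625 ms at 4/8, i.e. about 38% more.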