I’m currently exploring the performance of packet sending strategies in LoRaWAN, particularly focusing on the impact of edge computing on delay reduction. Here’s the setup:
No Edge Computing Scenario:
Devices send packets every 30 seconds, independently of each other.
Edge Computing Scenario:
Devices equipped with edge computing capabilities read sensor values every 30 seconds.
They aggregate these values over a set number of readings (let’s say, N readings).
Once N readings are reached, the devices calculate the average value.
Finally, the devices send a packet containing the average value, resulting in fewer transmissions compared to the no-edge scenario.
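To make the edge scenario concrete, here is a rough Python sketch of the loop I have in mind. `read_sensor()` and `lorawan_send()` are just placeholders for whatever the real device firmware would use, and N = 10 is an arbitrary example value:

```python
import random
import time

N = 10                 # readings per aggregation window (assumption: 10 x 30 s = 5 min)
READ_INTERVAL_S = 30   # sensor sampled every 30 seconds

def read_sensor():
    """Placeholder for the real sensor driver."""
    return 20.0 + random.uniform(-1.0, 1.0)

def lorawan_send(payload):
    """Placeholder for the real LoRaWAN uplink call."""
    print(f"uplink: {payload}")

buffer = []
while True:
    buffer.append(read_sensor())
    if len(buffer) >= N:
        average = sum(buffer) / len(buffer)
        lorawan_send(f"avg={average:.2f}")   # one uplink per N readings
        buffer.clear()
    time.sleep(READ_INTERVAL_S)
```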
Now, I’m curious about the following:
Does sending packets every 30 seconds (no edge) vs. every 5 minutes (with edge computing) actually decrease delay?
I’d appreciate any insights, experiences, or suggestions regarding this topic. Has anyone experimented with similar setups or encountered challenges in optimizing LoRaWAN packet sending strategies?
Will massively breach the TTN FUP! How badly depends on message size and the SF used. Even with smallish packets (say 10 bytes) on SF7 you need to think intervals >3 mins! Even zero bytes means >2 mins - see (*) https://avbentem.github.io/airtime-calculator/ttn/eu868/10
For any given payload size, raw vs edge makes no difference as the Tx time is the same. And even if averaging means sending other/extra data, the elasticity of message size doesn't always mean a larger packet = a longer message, as there is a staircase effect on Tx time which has been explored before (GIYF). Again, experiment with theoretical values in the link above, or with the sketch below.
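If you want to play with the numbers offline, here is a rough Python sketch of the standard LoRa time-on-air formula (SX127x-datasheet style), assuming 125 kHz bandwidth, coding rate 4/5, an 8-symbol preamble and ~13 bytes of LoRaWAN MAC overhead. Treat the calculator link above as the reference; this is just to illustrate the FUP arithmetic and the staircase effect:

```python
import math

def airtime_s(app_payload_bytes, sf, bw_hz=125_000):
    """Approximate LoRaWAN uplink time-on-air (explicit header, CRC on, CR 4/5)."""
    pl = app_payload_bytes + 13          # ~13 bytes of LoRaWAN MAC overhead
    de = 1 if sf >= 11 else 0            # low data rate optimisation at SF11/12 @ 125 kHz
    t_sym = (2 ** sf) / bw_hz
    t_preamble = (8 + 4.25) * t_sym
    num = 8 * pl - 4 * sf + 28 + 16
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * 5, 0)
    return t_preamble + n_payload * t_sym

def fup_min_interval_s(app_payload_bytes, sf):
    """Minimum average interval to stay within the TTN FUP (30 s airtime per 24 h)."""
    msgs_per_day = 30.0 / airtime_s(app_payload_bytes, sf)
    return 86_400 / msgs_per_day

# FUP arithmetic for the examples above
for size in (0, 10):
    print(f"SF7, {size:2d} B payload: "
          f"{airtime_s(size, 7) * 1000:5.1f} ms airtime, "
          f"min interval ~{fup_min_interval_s(size, 7) / 60:.1f} min")

# Staircase effect: airtime grows in steps, not linearly with payload size
for size in range(1, 21):
    print(f"SF7, {size:2d} B payload: {airtime_s(size, 7) * 1000:5.1f} ms")
```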
Obviously the time to an updated value is longer (as is the time to the first value) if you average and then send - e.g. with N = 10 readings at 30 s each, the first averaged value only arrives after 5 minutes.
(*) where I also already gave you the airtime calculator link!
If you want to come at this anew then you need to be very detailed and very specific wrt the LoRaWAN or TTN aspect you want help with - as for the processing you do, as noted before:
“What you want to know is how long that data processing takes as part of your edge computing process (perhaps detecting thresholds, tracking averages, sending an alarm vs data, running a machine learning algorithm, or whatever)… only you can determine that, no one here on the Forum…”
Also, as this is a ‘project’ for you (assumed student!), you need to show what you have discovered/researched… we won’t do your homework for you!
Just to add - have you considered (in the context of your (as yet unknown) application) what the impact of packet loss would be? Losing one raw value in, say, every 10 (10% packet loss) may have little impact on your use case, but what if you lose an averaged value that represents a longer period of monitoring - is that more consequential? You might want to check out ‘working with bytes’ and also consider windowed messaging schemes where e.g. the last three sent values are repeated in each uplink, providing an opportunity to infill missed data (see the sketch below), and other such schemas.
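As an illustration only (not a recommendation of a specific encoding), here is a minimal Python sketch of such a windowed scheme, where each uplink repeats the last three samples as 16-bit values so the next packet can infill one that was lost. Names like `encode_window()` are made up for the example:

```python
import struct

WINDOW = 3  # each uplink carries the newest sample plus the two before it

def encode_window(samples):
    """Pack the last WINDOW samples as signed 16-bit centi-units (newest last)."""
    window = samples[-WINDOW:]
    return struct.pack(f">{len(window)}h", *(int(round(v * 100)) for v in window))

def decode_window(payload):
    """Unpack a window payload back into floats."""
    count = len(payload) // 2
    return [v / 100 for v in struct.unpack(f">{count}h", payload)]

# Example: the uplink carrying the 5th reading is lost, but the 6th uplink still repeats it
readings = [21.30, 21.35, 21.40, 21.45, 21.50, 21.55]
uplinks = [encode_window(readings[:i + 1]) for i in range(len(readings))]
received = uplinks[:4] + uplinks[5:]          # packet for the 5th reading dropped
recovered = decode_window(received[-1])       # last packet infills the missing value
print(recovered)                              # -> [21.45, 21.5, 21.55]
```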
Also be really clear (in your mind) what the minimum need for a sensor update is (plus perhaps a bit extra for redundancy) vs what is practical/possible for a given technology - just 'cause you ‘can’ send every 3 mins or 30 mins does not mean you ‘should’ send every 3 mins or 30 mins. Does the air temperature or soil moisture value or whatever you are monitoring really change so fast, and with such consequence, that you must send/capture so frequently? Remember LoRaWAN is great for small amounts of data sent infrequently or with a low update rate (especially if running on batteries), and it makes use of a valuable shared resource - RF spectrum - that is for all, not one selfish application or scaled deployment.
Closing thread due to failure to regard previously notified FUP, a rehash of a previous question, a fishing expedition for coursework and, to emphasize, the FUP issue.