Though I know there have been several posts about timestamps, I would like to ask for a best-practice example of how to handle timestamps correctly.
Usually we should use metadata.gateways[].time to get the time a message was received. But in the past we occasionally had single messages with 1970s timestamps. Maybe the gateway had no NTP sync after a restart? So we can also use metadata.time, or even local time, as a fallback.
That works, but does not look very nice in our code.
Maybe someone can share some JavaScript code for this?
There’s no universal answer to this, it depends on your need.
In many cases it’s better to use a server timestamp, or even the time the data hits your consumer - there will be a little bit of latency there, but hopefully the information is basically right.
In contrast if you use gateway times, then not only can their clocks easily be wrong, but they may not agree with each other.
If you want to validate that you’re not being fed extremely stale data, you may need to look at the frame counters and apply some knowledge of the node’s behavior (transmit interval, etc).
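The frame-counter check could look like this. A sketch only: it assumes a node with a known, roughly constant transmit interval, and all names (`checkPlausible`, `txIntervalSec`, the shape of the stored state) are illustrative, not from any particular library:

```javascript
// Plausibility check using the uplink frame counter.
// prev/current: { fCnt: uplink frame counter, receivedAt: ms since epoch }
// txIntervalSec: the node's nominal transmit interval in seconds (assumed known).
function checkPlausible(prev, current, txIntervalSec) {
  const frames = current.fCnt - prev.fCnt;                     // uplinks since last message
  const elapsedSec = (current.receivedAt - prev.receivedAt) / 1000;
  const expectedSec = frames * txIntervalSec;
  // Allow generous slack for duty-cycle backoff and clock jitter.
  return Math.abs(elapsedSec - expectedSec) < 2 * txIntervalSec;
}
```

A message whose elapsed time disagrees badly with the frame-count delta is a candidate for the stale-buffered-packet case described below.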
Well, generally we need to know the time a packet was sent. Under normal conditions all available timestamps should be fine for most users, since the differences are in the range of a few seconds. So, in theory, everything is easy.
In practice, things are more complicated, especially if you are using JavaScript. There are three or more valid options for a timestamp:
metadata.gateways[].time: one or more timestamps from the gateway (should be the time of the LoRa message)
metadata.time: Backend time
new Date(): time on the integration server.
Problems to solve:
Some gateways use a message buffer to store messages during a local network failure. These are sent as a bulk message after reconnection, so all timestamps except the gateway time are wrong
All times need to be UTC, not local time
The gateway or the application server time may be wrong
In JavaScript we get an exception if a variable is not set or has the wrong type, so we should test a variable before evaluating it.
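As a side note, the optional chaining operator (`?.`, available in Node.js 14+ and recent Node-RED versions) sidesteps most of these explicit existence tests, since it yields `undefined` instead of throwing when a property is missing anywhere along the path. The sample message here is made up for illustration:

```javascript
// Example message with no gateways array at all:
const msg = { metadata: { time: "2023-05-01T12:00:00Z" } };

const gwTime = msg.metadata?.gateways?.[0]?.time; // undefined, no TypeError
const beTime = msg.metadata?.time;                // "2023-05-01T12:00:00Z"
```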
So, our code (in node red) is now a bit lengthy, and we wanted to know, which time was used, so we set “timesource”:
if ("metadata" in msg) {
if ("gateways" in msg.metadata)
{
var t1= new Date(msg.metadata.gateways[0].time);
if (t1>new Date(90000000))
{
msg.time = t1;
msg.timesource="GW";
}
}
if (!("time" in msg))
if ('time' in msg.metadata)
{
var t2 = new Date(msg.metadata.time);
if (t2>new Date(90000000))
{
msg.time = t2;
msg.timesource="TB";
}
}
}
if ((!("time" in msg)) || (msg.time === null))
{
msg.time = new Date(); // Default setzen
msg.timesource="NR";
}
return msg;
So, maybe someone has a smarter solution to do the job?
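One way to compact the fallback chain is a loop over candidate sources. This is only a sketch under the same assumptions about the metadata layout as above; the optional chaining syntax (`?.`) needs a reasonably recent Node.js, and `pickTime` and `EPOCH_CUTOFF` are made-up names:

```javascript
// Anything earlier than this cut-off is treated as a bogus
// 1970-era timestamp from a gateway without NTP sync.
const EPOCH_CUTOFF = new Date("2000-01-01T00:00:00Z");

function pickTime(msg) {
  // Candidates in order of preference: gateway time, then backend time.
  const candidates = [
    { source: "GW", value: msg.metadata?.gateways?.[0]?.time },
    { source: "TB", value: msg.metadata?.time },
  ];
  for (const c of candidates) {
    const t = new Date(c.value); // Invalid Date if value is missing/unparsable
    if (!isNaN(t) && t > EPOCH_CUTOFF) {
      return { time: t, timesource: c.source };
    }
  }
  // Last resort: local server time.
  return { time: new Date(), timesource: "NR" };
}
```

Note that `new Date(undefined)` yields an Invalid Date rather than throwing, so the `isNaN` check covers both missing and malformed values.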
Hopefully these are flagged somehow. Such packet queueing is not really fully compliant with LoRaWAN: it is impossible to send downlinks in response to stale messages, and if any other gateway has also heard a message and reported it in a timely fashion, the expected uplink frame count will have advanced, causing the stale packets to fail validation when they finally reach the server.
But this is also the reason I was saying you may need to look at the frame counter and apply some plausibility rules.
You can’t really trust timestamps applied by gateways that you do not manage.