Is there a path to follow to synchronize the time of a node to the time of a server?
We need an accurate datetime on our product, synchronized with a server, but the transmission time from gateway to node is uncertain; the datetime in a frame could deviate by seconds in the worst case.
I’m afraid GPS is the only option, but that comes with a price for hardware and battery life.
Class B devices need to synchronize relative time to know when to open a receive window. I don’t think The Things Network will support Class B soon, but maybe Section 9, “Principle of synchronous network initiated downlink”, in the LoRaWAN specification might give you some ideas?
Will you even be able to keep time once you’ve somehow set it? If not, then getting it from the server also requires taking into account that you might not be able to synchronize often enough. (For example, you cannot send/receive messages every minute.)
For future readers (it seems this is not accurate enough for you): if it’s about timestamping data you’re sending to the server, then it might be easier to just send it as soon as you’ve got new data, and let the server add the timestamp (which it will do anyhow).
Ok, thought so…
I am thinking of calculating the total frame time (which is quite fixed, depending on the SF, bitrate and frame length) and using that as a correction when a time sync is received, plus the time of the last message.
Somehow, we should be able to have some accurate timesync then.
Our setup needs datetime locally, which we include in the dataframe.
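To illustrate the frame-time part: the time on air can be computed from the spreading factor, bandwidth, coding rate and frame length with the formula from the Semtech SX1272/76 datasheets. A minimal, untested sketch (assuming explicit header, CRC enabled and the usual 8-symbol preamble):

```c
#include <math.h>
#include <stdint.h>

/* Time on air of a LoRa frame in milliseconds, following the formula in the
 * Semtech SX1272/76 datasheets. Assumes explicit header, CRC enabled and an
 * 8-symbol preamble.
 *   pl    : PHYPayload length in bytes
 *   sf    : spreading factor, 7..12
 *   bw_hz : bandwidth in Hz (125000 for most TTN uplinks)
 *   cr    : coding rate 1..4, where 1 means 4/5 (the LoRaWAN default)      */
double lora_airtime_ms(uint8_t pl, uint8_t sf, uint32_t bw_hz, uint8_t cr)
{
    double t_sym = (double)(1UL << sf) / bw_hz * 1000.0;  /* symbol time, ms */
    double t_preamble = (8 + 4.25) * t_sym;
    int de = (t_sym > 16.0) ? 1 : 0;          /* low data rate optimisation */
    double n = ceil((8.0 * pl - 4.0 * sf + 28.0 + 16.0)
                    / (4.0 * (sf - 2 * de))) * (cr + 4);
    double n_payload = 8.0 + (n > 0.0 ? n : 0.0);
    return t_preamble + n_payload * t_sym;
}
```

For a 26-byte PHYPayload this gives roughly 62 ms at SF7BW125 and well over a second at SF12BW125, so the correction is certainly not negligible at high spreading factors.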
Ah, note that the TTN Tool actually shows the time on air. I still need to submit a bug fix for that, but there are some details in a post in “Spreadsheet for LoRa airtime calculation”.
And then your server can calculate that, and subtract it from the receive timestamp; no need for the client to keep track, unless it’s not sending the data immediately?
I have a similar use case: using a node to get the actual absolute time from somewhere and pass it on to a clock. The clock is connected to the node by a serial interface, using the node’s hardware USART.
It is not necessary to keep the time synced on the node; syncing time once per day or per week is sufficient, because the connected clock has high accuracy.
Then you need an external GPS antenna. But an accuracy of 1 second is nothing; why not use a good RTC module?
I don’t see the point of using the TTN network to sync a clock, please explain?
The point is to have a time source for sites where GPS reception is impossible, even with an external antenna (think of big buildings indoors and tunnels, not in .NL), and where the node hardware has no GPS, GSM, or other time source on board to sync its RTC, due to cost and/or battery power. So there is a use case; at least I do have one (with many nodes).
Back to the question. LoRaWAN has a built-in latency for Class A devices of 1 second when using the RX1 slot and 2 seconds with RX2. When using RX2, the data rate for the downlink is fixed. The node itself knows the data rate of its uplink. So it should be possible to give a good estimate of the latency on the node’s side when querying time over a LoRa RX2 connection. This would make it possible to calculate absolute time over a query loop, like the NTP protocol does.
The time responder (“NTP-server”) could be a process running on the LoRa-Gateway, to keep loop times for time requests short.
So far my theory. Haven’t tried it yet, wondering if someone did it already?
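To sketch what I mean (untested; it assumes the downlink payload carries the responder’s timestamp of the moment the uplink was received, as Unix epoch milliseconds; the names are just placeholders):

```c
#include <stdint.h>

#define RECEIVE_DELAY2_MS 2000u  /* RX2 opens 2 s after end of uplink (LoRaWAN default) */

/* Estimate the node's current time right after a time-sync downlink has been
 * fully received in RX2.
 *   server_ms           : server time (Unix epoch, ms) at which the uplink was
 *                         received, carried in the downlink payload
 *   downlink_airtime_ms : time on air of the RX2 downlink; fixed for a given
 *                         payload length because the RX2 data rate is fixed
 * Gateway/backhaul processing time is ignored here and adds some error.      */
uint64_t node_time_now_ms(uint64_t server_ms, uint32_t downlink_airtime_ms)
{
    /* Between the server timestamp (end of uplink) and "now" (end of the
     * downlink) roughly RECEIVE_DELAY2 plus the downlink airtime elapsed.    */
    return server_ms + RECEIVE_DELAY2_MS + downlink_airtime_ms;
}
```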
Any updates on how you solved this problem? I’m in the same boat, where I need to synchronise the RTC after a battery change. So I’m very interested to hear your solution, or what you have tried since the last post.
…make sure the node has joined before the first data is sent. (For example, when not joined yet, LMiC will start joining as soon as the first data is queued, and one does not know how long joining can take.) Once joined, the node will send right away, unless its LoRaWAN stack implements some random timeout, like mentioned in the specifications, emphasis mine:
2.1 LoRaWAN Classes
…
The transmission slot scheduled by the end-device is based on its own communication needs with a small variation based on a random time basis (ALOHA-type of protocol).
If no such random variation is used, or if the used value can be added to the data, then the node’s time can be calculated based on the time the packet was received.
(Well, if one knows if the gateway’s timestamp and/or the backend’s timestamp are added when reception starts, or when it’s complete.)
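As a rough sketch of that calculation (names are just placeholders; it assumes the timestamp is taken when reception is complete):

```c
#include <stdint.h>

/* Reconstruct when the node started transmitting, and hence what its clock
 * read at that moment if it latches the time just before sending.
 *   rx_end_ms      : gateway/backend receive timestamp, taken at end of frame
 *   airtime_ms     : uplink time on air (see the airtime calculation earlier)
 *   random_wait_ms : random pre-transmit delay, 0 if none or if unknown; only
 *                    needed if the node timestamps data when queuing it       */
uint64_t node_tx_start_ms(uint64_t rx_end_ms, uint32_t airtime_ms,
                          uint32_t random_wait_ms)
{
    return rx_end_ms - airtime_ms - random_wait_ms;
}
```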
When only synchronizing on battery change, then I’d guess that the gateway/server time is much more precise (as it can sync to internet time regularly) than the node’s RTC which will suffer drift?
It’s a bit old-style, but there are several atomic-time transmitter stations you can pick up around the world with low-cost hardware, very low power consumption and great signal penetration.
You are probably aware of it: note that the current gateways are half-duplex (so cannot listen on any channel/SF when transmitting a downlink), which is why downlinks are limited to at most 10 messages per day. So any use case for which the gateway/backend timestamps suffice should really, really, really just rely on that.
(That said: I’m curious what you’ll come up with!)
This seems sufficient for our use case, because our nodes have a buffered on-board RTC which can keep time accurate to within ±0.5 seconds for up to 7 days. That means we only need a time sync with the backend once every 7 days to keep the nodes’ RTCs synchronized.
@Verkehrsrot We’re looking to do something similar (sending a clock-sync message once a day or so, in order to keep a local high-precision RTC from drifting over time). Unless I’m missing something, this is as simple as implementing an application-level downlink message that the node understands (i.e. one containing the time value, in our case “seconds since the Unix Epoch”). Did you find that this is any more complex than I have described?
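On the node side I imagine it would be little more than the following sketch (hypothetical payload layout: a 4-byte big-endian Unix epoch; rtc_set_unix_time stands in for whatever your RTC driver offers):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical RTC driver call; replace with your RTC library's setter. */
extern void rtc_set_unix_time(uint32_t seconds_since_epoch);

/* Handle a time-sync downlink: 4-byte big-endian seconds since the Unix epoch. */
void handle_timesync_downlink(const uint8_t *payload, size_t len)
{
    if (len < 4) {
        return;  /* not a valid time-sync message */
    }
    uint32_t epoch = ((uint32_t)payload[0] << 24)
                   | ((uint32_t)payload[1] << 16)
                   | ((uint32_t)payload[2] << 8)
                   |  (uint32_t)payload[3];
    rtc_set_unix_time(epoch);
}
```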
Theoretically that sounds plausible. You will have to take lost downlinks into account, though. So your node should confirm downlinks, which is a bit the opposite world. Otherwise the node might still get out of sync, or your gateway would have to send a lot of synchronisation downlinks, possibly violating airtime limits.
The question is whether LoRaWAN is the right technology for this job. You would be scheduling downlinks at the gateway level, while that’s the job of the network server. That will effectively make your gateway non-compliant.
In my experience working with WSNs for phasor measurements: if you want pinpoint accuracy, you have to shell out some extra bucks.