This isn’t causing me any actual problems, but I do find it interesting.
I have a LoPy4 transmitting a couple of bytes at 30 minute intervals, being received by my TTIG gateway (registered in v2). The v2 ‘Gateway Traffic’ page shows uplinks successfully received each and every half hour.
When looking at the ‘Applications > End devices > Live data’ tab in the v3 console over the past week, a regular pattern has emerged in the displayed uplink messages:
2 uplinks received, then the stream connection closes at the very moment the next uplink is expected, only to reconnect again a few seconds later. Repeat.
Clicking on one of the disconnection events for more information yields the following expanded pane:
And a reconnection event:
What might be the cause of this? I’m unsure whether this message is generated by server-side code or by some JS/whatever in the browser. FWIW, the ‘Data Formats’ documentation linked at the bottom of the above image doesn’t appear to contain anything relevant.
I haven’t yet had the chance to test this with another node uplinking at a different interval (or another computer looking at the console), but it doesn’t seem merely coincidental that the console reports a disconnection at the exact moment an uplink packet arrives, for every third packet, day after day.
I’m well aware of what the console is and is not intended for, hence my opening statement of “this isn’t causing me any actual problems”, but perhaps I wasn’t clear. My integrations are of course working just fine, as you’d expect.
Yes, but as a counterpoint: imagine designing exactly that functionality into a service while knowing you are unable to deliver it. If the console being used in this fashion by hundreds of people is causing issues, then perhaps something like a timeout on live streaming should be implemented? It turns out that it doesn’t matter whether I have left the console open for hours or have only just opened it.
Anyway, I digress. I don’t actually have an issue with dropouts from this free service that neither you nor I are paying for. What I found interesting, and what was really the thrust of my initial post, is what causes the dropouts to occur on every third packet, timed precisely with the node’s uplinks.
Are the stream dis-/re-connections between:

- the console web server and the TTS v3 back end,
- whatever JS etc. is running in my browser and the console web server, or
- something else?
This is really just a question about how the console is implemented, and not a problem that needs fixing per se.
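For what it’s worth, one way to narrow it down would be to bypass the console front end entirely and subscribe to the events stream over the HTTP API: if the drops still line up with every third uplink, the browser/JS layer is off the hook. A rough sketch, assuming a cluster at eu1.cloud.thethings.network, the documented v3 events endpoint, and a placeholder application ID and API key:

```js
// Minimal sketch (Node 18+): stream v3 events directly over HTTP,
// bypassing the console UI. The cluster address, application ID and
// API key below are placeholders -- substitute your own.
async function main() {
  const res = await fetch('https://eu1.cloud.thethings.network/api/v3/events', {
    method: 'POST',
    headers: {
      Authorization: 'Bearer NNSXS.XXXXXXXX', // placeholder API key
      'Content-Type': 'application/json',
    },
    // Request body per the events API: which entities to watch.
    body: JSON.stringify({
      identifiers: [{ application_ids: { application_id: 'my-app' } }],
    }),
  });

  // The response is a long-lived stream of newline-delimited JSON events.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break; // the server (or something in between) closed the stream
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
  console.log('stream ended at', new Date().toISOString()); // timestamp the drop
}
main();
```

Timestamping when the stream ends should show whether the disconnects still track the uplink schedule with no browser involved.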
My apologies, you’ve been a victim of moderator fatigue - this sort of question pops up about once a week at present with far less insight and certainly no searching for prior answers.
I’ve got my own console logging in JavaScript for v3, and whilst it appears to be more stable than the TTS console, most probably because it uses the VanillaJS framework and is therefore far less complicated, it still comes to a halt after a period of time.
At some point I’ll look in detail at what is happening under the hood to see if I can trap the disconnect and get it to re-connect, but I’m still more on the side of using an integration to get some overview logging. My current thinking is that there are timeouts, ISP glitches (at both ends and in between) and browser hiccups that need to be looked at. How it coincides with the timing of uplinks I hadn’t considered.
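If anyone wants to experiment with trapping the disconnect, something along these lines would be a starting point; `openStream` is a hypothetical stand-in for whatever actually opens the events stream (e.g. the fetch call sketched above), and the back-off values are arbitrary:

```js
// Sketch of a trap-and-reconnect wrapper around a streaming read loop.
// openStream() is a hypothetical placeholder for whatever opens the
// events stream; the back-off delays are arbitrary choices.
async function streamForever(openStream, onEvent) {
  let delayMs = 1000;
  for (;;) {
    try {
      const res = await openStream();
      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      delayMs = 1000; // reset the back-off once we are connected again
      for (;;) {
        const { done, value } = await reader.read();
        if (done) throw new Error('stream closed');
        onEvent(decoder.decode(value, { stream: true }));
      }
    } catch (err) {
      console.warn(`disconnected (${err.message}); retrying in ${delayMs} ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs = Math.min(delayMs * 2, 60000); // exponential back-off, capped
    }
  }
}
```

That at least gets you a timestamped record of every drop, which would make the every-third-packet pattern easy to confirm or rule out.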
I would, for anyone else reading along, re-iterate that TTI are being quite clear about the use case for various elements of TTS, hence console disconnects aren’t a show stopper for them and they aren’t likely to fund more servers so we can all have console logs running. Data Storage is another one: retention is down from 7 days to ~36 hours, and it’s meant as a backup, not a database.
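On that last point, the safe pattern with Data Storage is to pull your data out well inside the retention window, e.g. on a schedule, rather than treating it as a long-term store. A rough sketch, assuming the documented storage API path on an eu1 cluster and placeholder application ID and key:

```js
// Sketch: periodically pull stored uplinks out of the Data Storage
// integration before the retention window lapses. 'my-app' and the
// API key are placeholders; the 'last=12h' window is an arbitrary choice.
async function fetchStored() {
  const res = await fetch(
    'https://eu1.cloud.thethings.network/api/v3/as/applications/my-app' +
      '/packages/storage/uplink_message?last=12h',
    {
      headers: {
        Authorization: 'Bearer NNSXS.XXXXXXXX', // placeholder API key
        Accept: 'text/event-stream', // newline-delimited JSON responses
      },
    },
  );
  // Each non-empty line is one stored message, wrapped in a "result" field.
  for (const line of (await res.text()).split('\n')) {
    if (line.trim()) console.log(JSON.parse(line).result);
  }
}
fetchStored();
```

Run something like that from cron every few hours and the ~36 hour retention stops mattering.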