I am trying to push decoded packet data to the Wattics platform; they define a strict RESTful API that accepts a specific JSON format for measurement data.
There is no problem creating the required JSON from the decoded data in Payload Formatter → Uplink, but sadly, when this data is POSTed on to the Wattics endpoint via a Webhook integration, the JSON created at the Payload Formatter stage gets wrapped in a lot of additional meta information that is not only unneeded but ruins the data structure, so the Wattics API rejects the packet as invalid.
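To illustrate, the body that actually arrives at the far end of the webhook looks roughly like this (a trimmed sketch based on the documented v3 uplink JSON, with made-up values) - the measurements I build in the formatter end up buried inside uplink_message.decoded_payload:

```json
{
  "end_device_ids": {
    "device_id": "my-meter-01",
    "application_ids": { "application_id": "my-app" },
    "dev_eui": "70B3D57ED0000000"
  },
  "received_at": "2021-12-01T10:15:00Z",
  "uplink_message": {
    "f_port": 1,
    "f_cnt": 42,
    "frm_payload": "AAECAw==",
    "decoded_payload": { "active_power_kw": 1.23, "energy_kwh": 456.7 },
    "rx_metadata": [
      { "gateway_ids": { "gateway_id": "my-gateway" }, "rssi": -70, "snr": 8.5 }
    ]
  }
}
```

What Wattics expects is essentially just that inner decoded_payload object, not the whole envelope.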
Is there a way to disable this additional metadata being added?
As it stands now I can’t see any alternative but to create yet another third-party shim service to send this data to, which would unwrap the unnecessary JSON layer and forward the data on to Wattics. But that would be wasteful, add an additional point of failure, add unnecessary complexity, and even incur actual ongoing expense if the scale is large enough. All to undo an unwanted extra data-processing step in the first place.
I can’t quite believe that there is no better option for a task that must be quite common, so maybe I am missing something?
No. You are not the first to ask; you could check whether there is a feature request on GitHub and file one if there isn’t.
In your opinion, of course. Experienced users know the decoding doesn’t happen when the TTN servers are busy, so accepting the raw data somewhere to transform it (and maybe store it as well) before sending it to your specific platform is a much safer way to handle things.
Thank you for the confirmation, it is valuable to know that we are not missing something!
As a paying customer who gets charged every month for the devices we connect to The Things Industries under our subscription plan, we feel somewhat entitled to the opinion that the servers should be capable of decoding the packets as specified. Otherwise why buy this service at all if it has to be duplicated anyway?
This is the community forum almost exclusively contributed to by volunteers - no one is entitled to anything on here in the context you are suggesting.
Did you take out a support contract? If so, that would be a better channel to ask such questions in the future.
“As specified”? By whom? I doubt it was TTI. Nor the device vendor. There isn’t even a custom integration for Wattics, so it’s not as if TTI provided something they thought worked but turned out not to - something you could have evaluated before the fact.
Given the permutations of devices & end-points, it would be uneconomic for TTI to create what you are hoping for. And as with all the LNSs that are around, they have a specific job that ends with the data being passed on exactly as the device sent it - there is the bonus service of the payload formatter - but infrastructure to do the final decoding & storage, be it on your own servers or a third-party service, is pretty normal for all of us.
Perhaps you could ask Wattics why they don’t support TTI webhooks?
I am aware of that. Not sure why you bring up entitlement?
I asked a question - @kersing answered it (and thanks to him for that!)
He also offered a comment on my technical opinion - I answered with clarification on that.
I feel that some level of misunderstanding is involved.
No. The documentation that TTI offers has been very good, and we have had no cause for a support contract all year. And even for this issue, the documentation stated as much. I just hoped that I had misunderstood something.
Specified by me! I was answering @kersing’s comment that under load conditions the TTI servers might skip decoding the received packets according to the user-defined Payload Formatter.
This is what I had an issue with. If I write a script to be applied to each received packet, I expect it to be applied “as specified”. I was expressing my disagreement with the suggestion that a paid service should be free to skip this when convenient and force users to duplicate the function “just in case”.
Again, I was not hoping for TTI to have a ready-made integration for every possible platform. That was supposed to be the beauty of Payload Formatter + Webhook: that I can do it myself. And I could, if TTI didn’t go and limit this feature by insisting on further changing the JSON payload after I have defined it.
But this is their decision how to do it, of course. Please note that there are two separate points here:
One is that TTI changes the JSON after the user-defined packet decoding - that is inconvenient for me, but that is my problem.
And second, the suggestion that TTI can sometimes arbitrarily skip the packet decoding into JSON altogether - this is something I have an issue with, as I see it as part of the promoted service, not an optional feature. (And again, I am not asking the community to do something about it, but I am disagreeing that this is something I should accept as normal, if true.)
I must say that I am greatly surprised by the hostile tone in this conversation. I will assume that we misunderstand each other.
Talking to them was the first thing that I did. It turns out that they don’t have any LoRaWAN devices integrated yet, to my surprise, so the issue hasn’t come up for them.
For commercial instances that won’t happen fast, if ever, probably only if you start overloading the server with many uplinks in a short time (which can happen with sufficient nodes and gateways and ‘synchronized’ transmissions, a bad idea anyway).
In my (just 6 years of) experience with LoRaWAN I have found it invaluable to save the entire data set as provided by TTN. It allows diagnosing issues if they pop up.
Btw, you can still request your feature by logging an issue on GitHub (if there isn’t one already)
The uplink JSON format is documented. You add the decoded fields if you implement a payload formatter. The same scheme was used for v2.
TTI don’t arbitrarily skip payload formatter processing. If the JavaScript processing takes longer than 100ms then it is dropped and the original JSON is forwarded on. Otherwise the Application Server runs the risk of becoming thread bound and grinding to a halt.
The principal time this occurs is when a perfect storm of uplinks arrives simultaneously, with a fair number of them requiring the larger multi-purpose formatters. An LNS has a limited amount of time to get an uplink to the end system and potentially receive a response to then forward to a gateway to downlink, so time is of the essence.
I’m sorry you feel this is hostile - I was attempting to reframe expectations, particularly regarding the role & boundaries of the LNS.
Here’s a strategy:
It may seem that the fastest and most efficient resolution is for Wattics to process the JSON, which would then open up their service to TTI users. The format is not specific to LoRaWAN, so I’m sure they have to process incoming data from other services.
However you then have the problem that the payload format is specific to a device, so you still need something to transform it, and you can’t be sure that transformation will always be processed.
Best practice is that you should not rely on a third-party service to store your original data - they may purge it after post-processing, and outright data loss is not unusual.
So your very best strategy is to turn on Data Storage so there is a copy on the TTI servers as a last-gasp backup, put in a webhook to a data sink for your own backup (one of mine costs just £7/year), and use a webhook to an AWS Lambda function to re-format the data to suit Wattics - see Payload Format to include only measurements
Being an AWS serverless function, all the scaling is taken care of for you. And as mentioned in the above post, it may actually be free.
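If it helps, here is a minimal sketch of such a Lambda shim - assuming a Node.js 18+ runtime (so the global fetch is available) and a Lambda function URL or API Gateway proxy integration (so the webhook JSON arrives as a string in event.body); the Wattics URL, headers and body shape are placeholders to be replaced with whatever their API actually specifies:

```javascript
// Sketch only - not a drop-in implementation.
// Assumes: Node.js 18+ Lambda runtime, function URL / API Gateway proxy,
// and that Wattics accepts the decoded_payload object as-is (placeholder URL below).
const WATTICS_URL = "https://example.invalid/wattics/measurements"; // placeholder

exports.handler = async (event) => {
  // TTI webhook body arrives as a JSON string
  const uplink = JSON.parse(event.body);

  // The object built by the payload formatter sits here:
  const measurements = uplink.uplink_message && uplink.uplink_message.decoded_payload;
  if (!measurements) {
    // Formatter skipped or failed - only the Base64 frm_payload is present.
    return { statusCode: 202, body: "no decoded_payload - nothing forwarded" };
  }

  // Forward just the unwrapped measurements to Wattics.
  const res = await fetch(WATTICS_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" }, // add any Wattics auth header here
    body: JSON.stringify(measurements),
  });

  return { statusCode: res.status, body: await res.text() };
};
```

Any authentication Wattics requires would go in the headers, and you could just as easily fork a copy of the full uplink off to your own data sink from the same function.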
I tracked all the payload formatter issues last year; I don’t recall one for a payload-formatter-only output, as we were having too much fun with the 400KB of code people were trying to fit in.
I believe the challenge is that the payload formatter has no access to any data outside of the hex array & port - so no device or app identifiers, no timestamp, no signal info, etc.
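i.e. all a formatter is given is something along these lines (the two-byte temperature decode is made up, just to show the shape):

```javascript
// A v3 uplink payload formatter only ever sees the raw bytes and the port.
function decodeUplink(input) {
  // input.bytes -> array of payload bytes
  // input.fPort -> the LoRaWAN FPort
  // No device IDs, timestamps or signal metadata are available in here.
  return {
    data: {
      temperature: ((input.bytes[0] << 8) | input.bytes[1]) / 100 // made-up decode
    }
  };
}
```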
If it was available, it would have to be passed in as an object, and then we run the risk of ending up with even more PFs being created that have to massage data that has already been marshalled into the right format in the heart of the highly efficient Go code - unpacked by the now larger JS code, all of which has to be held in live memory in Redis - which then increases the server load & processing time, and so on.
So we have comprehensive information on the uplink which we can cross-reference with the messages from the other stack components, we have the convenience function of the payload formatter, which I use to display the essential info in the console, and we can then take that and parse it to push into any number of databases, immediate response processing, graphs etc. without impacting the core servers.
… to the opinion
A bit aggressive quote shortening there!
In any case, I was under the impression that “to be entitled to one’s opinion” is a common expression in English. I apologize if I misused it; I am not a native speaker.
Very helpful of you to mention the AWS Lambda functions; this would appear to allow a somewhat simpler shim than what I had in mind.
I can imagine not everyone being interested in all that. The device identifier can be made part of the URL IIRC and is present in the MQTT topic as well. That ID and the decoded data might be all some applications require.
(I personally want the metadata as well because there is valuable information there)
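If I remember the documentation correctly, the v3 MQTT uplink topic carries both identifiers (the tenant part is ttn on the community cluster), along these lines:

```
v3/{application-id}@{tenant-id}/devices/{device-id}/up
```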
Yes, you can add the App & Device IDs & EUIs, JoinEUI and DevAddr to the URL. The App ID is very useful for quickly setting up a Webhook-to-Tab-PHP just for a single app.
At present the most frequent request I see is for device attributes and other info that is in the Postgres database and so not available in memory, resulting in attempts to read the additional data in real time via an API call rather than mirroring the data.
I’ve a new improved no-config Webhook to Database to Dashboard package in the works, which ironically needs the payload formatter!
After that I can look at a JSON stripper for PHP, a demo for Lambda and replicating/mirroring data for local processing - something to do mid-Dec!