Like I said before, I’m not really a JavaScript programmer.
As soon as StackOverflow is offline, I lose all my JavaScript capabilities.
I’m more C++ oriented.
Sure, I can debug JS code in the browser, but that’s not how I work.
Apart from that, if the JS decoder may no longer be larger than N kB, I can no longer use it in the TTN console for testing. That was the point I wanted to make.
ESPEasy users will certainly need their own TTN account as soon as they want to use this platform to receive their data, as I don’t want anything to do with data that’s not mine.
My goal is to make it as easy as possible to collect your own sensor data, and to encourage users to be as creative as possible in how they use it.
This is also why I put quite a lot of effort into making the TTN link from ESPEasy as easy as possible, so you won’t face a steep learning curve to get started; that’s also why I added an “ESPEasy vendor definition” to pick from.
When using this route, there is a single decoder for potentially all ESPEasy users, but it is clear some will make their own plugins and extend the decoder, so there will be forks of the decoder script for sure.
I really do understand why some limits may be needed here, so I would like to know how this can be implemented in a way that allows scaling up without adding a lot of complexity.
One way could be to have a decoder which only needs to be fed a (small) JSON per node, supplying the strings needed to make the decoded data structure more readable.
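Just to give an idea of what I mean (all names and values here are made up for the example), such a per-node description JSON could be as small as this:

```json
{
  "nodeName": "living-room-node",
  "tasks": {
    "1": { "plugin": "Environment - BME280", "labels": ["Temperature", "Humidity", "Pressure"] },
    "2": { "plugin": "Switch input",         "labels": ["State"] }
  }
}
```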
But that’s already a rather different approach from what’s currently present in the TTN environment.
How ESPEasy users receive and process their data is unknown to me.
Like I said, I use it via Python and MQTT, but the more options there are to get the data, the more complex it becomes to implement decoders and/or to detect per message whether it still needs to be decoded. Therefore I’m not sure that’s the way to go.
If the TTN backend does have performance/resource issues with decoding in JavaScript, I can also port my decoder to another language, and, like I said, if decoding and “interpreting” could be separated to ease resource usage, I’m all in favor of implementing that for ESPEasy.
But then I need to get an idea of what really causes these performance issues.
What is the main problem here?
I can imagine that having a 100 kB decoder script for every node out there takes quite a lot of storage.
So if that can be split into, let’s say, a single 100 kB decoder for all ESPEasy users and a 1 kB description JSON file per set of nodes, that would already make a big difference.
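Just to put some (made-up) numbers on it: with 1000 nodes, storing a 100 kB decoder per node comes down to roughly 100 MB of decoder scripts, while one shared 100 kB decoder plus a 1 kB description JSON per node is only about 1.1 MB.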
The decoding itself is only needed when a message arrives, so that should scale roughly linearly with the number of messages being processed.
And like I said, we can of course distinguish between “decode all” and “only decode header” for those who may process the data on ESPEasy nodes themselves. (N.B. there is also RpiEasy, which is built for the Raspberry Pi and has the same UI, rules and plugin/controller structure as ESPEasy.)
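As a rough sketch of what I mean by “only decode header” (the byte layout and the use of the port number here are made up for the example, they are not the real ESPEasy format):

```javascript
// Illustrative only: the actual ESPEasy LoRa payload layout may differ.
// Assume for this sketch that each payload starts with a small header:
//   byte 0: plugin id, byte 1: task index, byte 2: number of values.
function decodeHeader(bytes) {
  return {
    pluginId:   bytes[0],
    taskIndex:  bytes[1],
    valueCount: bytes[2]
  };
}

function Decoder(bytes, port) {
  var header = decodeHeader(bytes);
  if (port === 2) {
    // “Only decode header”: enough for routing/filtering; the ESPEasy
    // (or RpiEasy) node receiving the data does the full decode itself.
    return header;
  }
  // “Decode all”: continue with the full, plugin-specific decoding here.
  var decoded = header;
  // ... plugin-specific decoding of the remaining bytes would go here ...
  return decoded;
}
```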
So in short, I need to understand where the real bottleneck is, and also whether or not it is feasible to split the decoder and node description in a way that tackles the potential bottleneck(s) while still being as easy to use as possible.
But I’d like to avoid having to maintain several decoders for several use cases.
One to run on the TTN servers and one inside ESPEasy nodes would be doable, but not more.
As you may have seen, the current decoder already defaults to 1…4 floating point values with 4 decimals for those plugins that are unknown to the decoder. So you will not see a descriptive label, but it may already work, as that’s the format usable for the majority of ESPEasy plugins.
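Just to make that default concrete, here is a minimal sketch of the idea (not the actual decoder code, and assuming, purely for illustration, that the values are sent as 4-byte little-endian IEEE-754 floats):

```javascript
// Illustrative only: read up to 4 little-endian floats and round to 4 decimals.
// The real ESPEasy decoder and its payload packing may differ.
function readFloatLE(bytes, offset) {
  // Assemble a 32-bit word and reinterpret it as an IEEE-754 float
  // (NaN and denormals are not handled, for brevity).
  var bits = (bytes[offset + 3] << 24) | (bytes[offset + 2] << 16) |
             (bytes[offset + 1] << 8)  |  bytes[offset];
  var sign     = (bits >>> 31) ? -1 : 1;
  var exponent = (bits >>> 23) & 0xFF;
  var mantissa = bits & 0x7FFFFF;
  if (exponent === 0 && mantissa === 0) return 0;
  if (exponent === 0xFF) return sign * Infinity;
  return sign * Math.pow(2, exponent - 150) * (mantissa + 0x800000);
}

function Decoder(bytes, port) {
  var decoded = {};
  var count = Math.min(4, Math.floor(bytes.length / 4));
  for (var i = 0; i < count; i++) {
    // Default labels value_1 … value_4, rounded to 4 decimals.
    decoded['value_' + (i + 1)] = Math.round(readFloatLE(bytes, i * 4) * 10000) / 10000;
  }
  return decoded;
}
```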
The “description JSON” I mentioned could be as simple as just describing the labels for a specific plugin and/or task, and the decoder can be extended for those plugins that really benefit from a plugin-specific decoder, like the GPS and sysinfo plugins I mentioned (and some more already implemented).
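To sketch how the generic decoder could use such a file (again just an illustration, not the current decoder code), it could look up the labels per task and fall back to the generic naming when a task is not listed:

```javascript
// Illustrative only: rename generic values using a per-node description JSON.
// “description” is the small JSON shown earlier; “values” is the array of
// numbers produced by the generic decoder for one task.
function applyLabels(description, taskIndex, values) {
  var task = description.tasks ? description.tasks[String(taskIndex)] : null;
  var result = {};
  for (var i = 0; i < values.length; i++) {
    // Use the descriptive label when available, otherwise the generic name.
    var label = (task && task.labels && task.labels[i]) ? task.labels[i]
                                                        : 'value_' + (i + 1);
    result[label] = values[i];
  }
  return result;
}
```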
As ESPEasy is (finally) getting some more attention from others who are also actively (and frequently) committing code, the number of plugins has been growing quite fast lately. So I would not be surprised if we reach the current limit of 255 plugins within the next 2 years.