Class C time frame algorithm with prio was a bit tricky, but I think I got it.
How can I test it? Are the util_* tools useful for this? I do not have a very heavy downlink network.
I don’t think the util_ tools work at all with the packet forwarder, but are rather things you can run instead of it for low level tests.
Could you maybe spin up two nodes, one TTN and one private, that use a common trigger so they both uplink at the same time with the confirmation request bit set? Or you could just send the packet forwarder “invented” downlink traffic with conflicting timing and differing priorities. One of the things I really like about MQTT-in-the-middle backhaul schemes is that it’s far easier to write tests and analyzers that look at what is happening, inject test events, etc.
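For the "invented downlink" route: in the Semtech UDP protocol, downlinks reach the packet forwarder as PULL_RESP datagrams carrying a `txpk` JSON object (field names per the project's PROTOCOL.TXT), so a small test script only needs to craft something like the following. All values here are placeholders; in particular `data` would normally be a base64-encoded LoRaWAN PHYPayload, not ASCII text:

```json
{
  "txpk": {
    "imme": false,
    "tmst": 3512348611,
    "freq": 869.525,
    "rfch": 0,
    "powe": 14,
    "modu": "LORA",
    "datr": "SF9BW125",
    "codr": "4/5",
    "ipol": true,
    "size": 24,
    "data": "VGVzdCBkb3dubGluayBwYXlsb2FkLi4u"
  }
}
```

Two of these with overlapping `tmst` values, sent from two different test servers, would exercise exactly the conflicting-timing case discussed here.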
Ok, I have a test version, for the brave. The filter function has been tested; the prioritization has not.
What did I change?
Added boolean parameters priv_filter, pub_filter, and priority to the server configuration directive.
priv_filter:
filters private NetID packets; if set to true, all data uplink and downlink packets with NetID 00/01 are neither sent to nor received by this server
defaults to false, which means no filtering
pub_filter:
filters public NetID packets; if set to true, all data uplink and downlink packets not having NetID 00/01 are neither sent to nor received by this server
defaults to false, which means no filtering
If filtering was detected, a message will be shown during startup.
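For reference, the NetID test behind these filters can be sketched in C. In LoRaWAN 1.0.x the 7 most significant bits of a DevAddr carry the NwkID, which for the private/experimental NetIDs 0x00 and 0x01 is simply 0 or 1, so private traffic is recognizable from the DevAddr alone. The helper name is mine, not the patch's:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper, not the patch's actual function name.
 * The top 7 bits of a LoRaWAN 1.0.x DevAddr are the NwkID; for the
 * private/experimental NetIDs 0x00 and 0x01 the NwkID is 0 or 1. */
static bool is_private_devaddr(uint32_t devaddr)
{
    uint32_t nwkid = devaddr >> 25;  /* top 7 bits */
    return nwkid <= 1;               /* NetID 0x00 or 0x01 */
}
```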
priority:
if set to true (only one server is allowed to have it), all downlinks from this server are prioritized over those of all other servers; conflicting packets from other servers in the JiT queue will be silently dropped and not sent
defaults to false, which means packets from all servers are handled equally, which can lead to the problems discussed in this thread
If priority was detected, a message will be shown during startup.
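Assuming the parameter names above land in the per-server blocks of the forwarder's global_conf.json / local_conf.json (the multi-server `servers` array layout is my assumption here, and addresses and ports are placeholders), a two-server setup matching this thread's use case could look roughly like this: the TTN server gets priority and never sees private traffic, while the private server never sees public traffic:

```json
{
  "gateway_conf": {
    "servers": [
      {
        "server_address": "router.eu.thethings.network",
        "serv_port_up": 1700,
        "serv_port_down": 1700,
        "priv_filter": true,
        "priority": true
      },
      {
        "server_address": "private.example.org",
        "serv_port_up": 1700,
        "serv_port_down": 1700,
        "pub_filter": true
      }
    ]
  }
}
```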
Some implementation remarks:
- If priority is enabled, the fetch thread will only fetch one packet from the gateway per fetch cycle. That was (for now) the easiest way to implement it, as usually 8 packets are fetched per cycle, then fused into one UDP packet and sent to all servers, which is not useful for per-packet filtering. Maybe I will change this later by rewriting that logic completely and ordering based on NetID. How big is the RX buffer of the gateway anyway?
- The fetch cycle had a sleep of 10 ms when no packets were received. To counteract the additional latency of one fetch cycle per packet and transmission, I halved the sleep time. My measurements show that this adds CPU load (from 2.4% to 3.6% on a Pi Zero W). I might be too cautious here, or not cautious enough, but I am missing real-world data.
- The not consistently mutex-protected peek-and-dequeue order in thread_jit could now lead to inconsistencies if packets are removed between a peek and a dequeue. I fused both functions into jit_peek_and_dequeue.
- There are many different cases for how the queue can look when time frame alignment with a Class C ASAP packet happens. I did not test this at all. I am pretty sure I got the basics right, but the devil is in the details. The idea was to simply ignore all low-priority packets when checking for time collisions. In the final overlap loop (criteria 3 overlap check) I then simply dequeue all packets that conflict and have lower priority.
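That final overlap pass can be sketched as below. The queue layout and field names are simplified stand-ins, not the forwarder's actual JiT structures, and counter wrap-around is ignored for brevity:

```c
#include <stdbool.h>
#include <stdint.h>

#define JIT_MAX 32

struct jit_pkt {
    uint32_t start;     /* scheduled emit time (concentrator microseconds) */
    uint32_t end;       /* start + airtime */
    bool     high_prio;
    bool     queued;    /* slot in use */
};

/* Two transmissions conflict if their on-air windows overlap. */
static bool overlaps(const struct jit_pkt *a, const struct jit_pkt *b)
{
    return a->start < b->end && b->start < a->end;
}

/* Sketch of the "criteria 3" pass described above: when a high-priority
 * packet is enqueued, silently drop every queued low-priority packet
 * whose window collides with it. Returns the number of dropped packets.
 * (A conflict between two high-priority packets would still fall back
 * to the usual decline-to-add behavior, not shown here.) */
static int drop_conflicting_low_prio(struct jit_pkt *q, int n,
                                     const struct jit_pkt *incoming)
{
    int dropped = 0;
    for (int i = 0; i < n; i++) {
        if (q[i].queued && !q[i].high_prio && overlaps(&q[i], incoming)) {
            q[i].queued = false;   /* silently dropped, never sent */
            dropped++;
        }
    }
    return dropped;
}
```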
Any constructive code review and testers are welcome.
Could you update the main readme.md at the top to indicate there are (untested) changes?
Done: at the top, right after the changelog chapter, and in the text of the priority description.
I suspect that will cause problems under heavy usage (wasn't part of the concern with USB concentrators on busy networks that all received packets must be dequeued before more come in?). Couldn't you just do the filtering at the UDP packet assembly stage? I'd actually have been tempted to have the packet forwarder talk to a single local intermediary process, and have that parcel packets out to destinations, especially as it might want to do format conversions (like feeding the private server JSON or protobuf over MQTT, as something like LoRaServer typically wants).
The not consistently mutex-protected peek-and-dequeue order in thread_jit could now lead to inconsistencies if packets are removed between a peek and a dequeue. I fused both functions into jit_peek_and_dequeue.
That complexity is why I opted to just zero the duration (what the code calls post_delay) of any packet in the queue that loses priority to a higher-priority packet being added, but leave it in the queue for the consumer to remove and ignore. (If the packet being added is lower priority than a conflicting one already there, the usual behavior of declining to add in case of conflict endures.)
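The zero-the-duration alternative can be sketched like this; `post_delay` is used as an illustrative field name following the comment above, the struct is a simplified stand-in, and the consumer-side skip is only implied:

```c
#include <stdbool.h>
#include <stdint.h>

struct queued_pkt {
    uint32_t start;       /* scheduled emit time */
    uint32_t post_delay;  /* on-air duration; 0 marks "skip, do not send" */
    bool     high_prio;
};

/* Sketch of the alternative scheme: instead of removing a conflicting
 * low-priority packet from the queue, zero its duration so the consumer
 * thread can recognize and discard it when it reaches the head. */
static void yield_to_high_prio(struct queued_pkt *q, int n,
                               const struct queued_pkt *incoming)
{
    for (int i = 0; i < n; i++) {
        uint32_t end = q[i].start + q[i].post_delay;
        bool overlap = q[i].start < incoming->start + incoming->post_delay
                    && incoming->start < end;
        if (overlap && !q[i].high_prio)
            q[i].post_delay = 0;   /* left in queue, consumer ignores it */
    }
}
```

The upside over dequeuing in place is that the queue's slot bookkeeping never changes under the consumer's feet; the conflict is resolved with a single field write.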
That said, big congrats for taking your ideas from the forum to a code editor and actually implementing them!
So I decided on an internal simulated test of the new JiT queue and uploaded that new version.
I defined new _SIM packet types that differ only in that they are not sent out by the radio; all other handling is identical. A jit_injector_thread then injects a bazillion downlink packets, I guess far more than real-life gateways need to handle. I tested a lot of different scenarios, especially when the queue is full. Does that even happen?
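The _SIM idea boils down to a flag that short-circuits the radio call at the very last step. Everything here is a simplified stand-in for illustration; in the real forwarder the hand-off would be the Semtech HAL's lgw_send():

```c
#include <stdbool.h>

/* Counts what would have gone to the radio; in the real forwarder this
 * is where lgw_send() would be called. */
static int radio_tx_count = 0;

struct dnlink_pkt {
    bool is_sim;   /* _SIM packet type: full queue handling, no RF */
};

/* Sketch of the transmit step with the _SIM packet types: a simulated
 * packet takes the identical path through the JiT queue but is dropped
 * right before it would hit the radio. */
static void transmit(const struct dnlink_pkt *p)
{
    if (p->is_sim)
        return;            /* logged and accounted, but never sent */
    radio_tx_count++;      /* real packet: hand off to the radio here */
}
```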
I also moved the filter to sending stage as suggested.
After looking at thousands of log lines and the interesting cases, I am pretty sure it works as intended. I am using this as a daily driver for my home installation from now on.
So back to the original question: I now give TTN packets priority over private ones and make sure that private packets are not sent to TTN servers. That should be as much in line as possible with one shared gateway and the current protocol specification, if I understood everything correctly.
I plan to implement this at my university as well. If the packet loss for the private network is not acceptable, I can at least argue to get money for a second gateway and antenna spot.
@cslorabox Long time no see. I have been running this patch for over a year without any problems. Can I assume that with V3 around the corner this patch becomes obsolete? Would I be able to install the TTN Stack V3 locally on a Pi, have e.g. my local NetID 00/01, and configure some kind of peering magic to keep 00/01 NetIDs in my local LAN but route NetIDs from TTN in and out?
The peering magic is called Packet Broker; however, to be able to peer you need a LoRa Alliance assigned NetID or 'rent' part of the TTI space. The last option is not available yet, and as it doesn't scale I assume it will only be available for larger community installations (things will become clearer later this year, I assume).
Thank you for your answer. As far as I understand, this is referred to as a home network in the peering information, meaning I could get a TTI subnet and then be able to get these home addresses routed through Packet Broker also from locations other than my own. From what I have seen, the stack can be forwarder and/or home network. Maybe it is enough to configure the stack as forwarder only, because the stack keeps 00/01 NetIDs internally anyway. But maybe this is a question for another topic area? (And yes, I am trying to be patient.)