The Things Network is a radio network for sending small pieces of data from nodes to a central system, e.g. telemetry or measurement data. There is a high degree of asymmetry: many nodes produce and transmit data, while only (relatively) few gateways receive it.
It is possible to send data from the network to the nodes, but that is not its main mode of operation. Most typical nodes are Class A, and they can only receive data just after they have transmitted. It’s certainly not real-time, and you can send only a few messages per day.
If you look at the source code for the v3 stack, there is a fully worked example in the files with ADR in their names.
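For orientation, the heart of that logic is the margin rule described in Semtech’s recommended network-side ADR: take the best SNR seen over the recent uplinks, subtract the SNR the current data rate needs plus a safety margin, and spend the surplus in 3 dB steps. Here is a minimal sketch of that rule, not the actual TTI code; the field names and the 10 dB installation margin are illustrative:

```go
package main

import "fmt"

// Inputs to one ADR decision. Field names are illustrative, not the
// actual TTI types.
type adrInput struct {
	maxSNR             float64 // best SNR among gateways, max over recent uplinks
	requiredSNR        float64 // demodulation floor for the current data rate
	installationMargin float64 // safety margin, commonly around 10 dB
	dr, maxDR          int
	txPowerIndex       int // higher index = lower transmit power
	maxTxPowerIndex    int
}

// adrStep applies the widely documented margin rule: every surplus
// 3 dB of link margin buys one step, spent first on a faster data
// rate and then on a lower transmit power.
func adrStep(in adrInput) (dr, txPowerIndex int) {
	dr, txPowerIndex = in.dr, in.txPowerIndex
	nStep := int((in.maxSNR - in.requiredSNR - in.installationMargin) / 3)

	for ; nStep > 0; nStep-- {
		switch {
		case dr < in.maxDR:
			dr++ // prefer a faster data rate first
		case txPowerIndex < in.maxTxPowerIndex:
			txPowerIndex++ // then trade the remaining margin for less power
		default:
			return // nothing left to adjust
		}
	}
	// Negative steps only restore transmit power; the network side
	// never lowers the data rate, the device backs off on its own.
	for ; nStep < 0 && txPowerIndex > 0; nStep++ {
		txPowerIndex--
	}
	return
}

func main() {
	dr, tx := adrStep(adrInput{
		maxSNR: 2.5, requiredSNR: -12.5, installationMargin: 10,
		dr: 2, maxDR: 5, txPowerIndex: 0, maxTxPowerIndex: 7,
	})
	fmt.Println(dr, tx) // prints: 3 0
}
```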
I very much doubt there will be an API for custom ADR any time soon, if ever, lest people fiddle with the data rate and cause mayhem in a localised area, with devices taking longer than necessary to transmit.
If this is a student / research thing, you could download a copy of the stack and make alterations, but I think the subject-matter experts are the TTI developers, so I’d consider their implementation the gold standard. You could compare and contrast with the ChirpStack codebase as well.
Just for research: Write a custom ADR algorithm to get better results.
I know it’s not suitable for most people, but it’s always good to expose the MAC command that controls a device’s data rate and transmit power, and with them its speed and power consumption. At least it can stimulate people’s creativity.
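That MAC command already exists in the LoRaWAN spec: LinkADRReq (CID 0x03) is how the network server tells a device which data rate, TX power index and retransmission count to use. A minimal sketch of its 5-byte encoding per LoRaWAN 1.0.x (the values in main are arbitrary examples):

```go
package main

import "fmt"

// encodeLinkADRReq builds the 5-byte LinkADRReq MAC command
// (CID 0x03) per LoRaWAN 1.0.x: the command the network uses to set
// a device's data rate, TX power and retransmission count.
func encodeLinkADRReq(dataRate, txPower uint8, chMask uint16, chMaskCntl, nbTrans uint8) []byte {
	return []byte{
		0x03,                              // CID: LinkADRReq
		dataRate<<4 | txPower&0x0F,        // DataRate (high nibble) + TXPower index (low nibble)
		byte(chMask), byte(chMask >> 8),   // ChMask, little-endian: enabled channels 0..15
		chMaskCntl<<4&0x70 | nbTrans&0x0F, // Redundancy: ChMaskCntl (bits 6:4) + NbTrans (bits 3:0)
	}
}

func main() {
	// e.g. DR5, TX power index 2, channels 0-2 enabled, 1 transmission
	fmt.Printf("% X\n", encodeLinkADRReq(5, 2, 0x0007, 0, 1)) // prints: 03 52 07 00 01
}
```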
There are indeed ways to do this in ChirpStack, but they don’t provide complete code examples for designing a customised algorithm.
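For what it’s worth, a customised algorithm could take roughly this shape. This is a sketch under stated assumptions: the request/response structs are hypothetical stand-ins for whatever the network server actually passes to its ADR handler (ChirpStack’s real plugin types differ in detail), and the single-gateway penalty is just one example of a custom rule:

```go
package main

import "fmt"

// Illustrative stand-ins for what a network server hands to an ADR
// handler; ChirpStack's real plugin types live in its adr package
// and differ in detail. Treat these as assumptions.
type handleRequest struct {
	dr, maxDR    int
	txPowerIndex int // higher index = lower TX power
	maxTxPower   int
	requiredSNR  float64   // demodulation floor for the current DR
	uplinkSNR    []float64 // best per-uplink SNR over the recent history
	gatewayCount int       // gateways that heard the last uplink
}

type handleResponse struct {
	dr, txPowerIndex, nbTrans int
}

// conservativeADR sketches one possible customisation: the usual
// 3 dB-per-step margin rule, but with extra headroom when only a
// single gateway hears the device, so losing that one path does not
// strand the node on a data rate it cannot sustain.
func conservativeADR(req handleRequest) handleResponse {
	maxSNR := req.uplinkSNR[0]
	for _, s := range req.uplinkSNR {
		if s > maxSNR {
			maxSNR = s
		}
	}

	installationMargin := 10.0
	if req.gatewayCount < 2 {
		installationMargin += 5 // custom part: single-gateway penalty
	}

	resp := handleResponse{dr: req.dr, txPowerIndex: req.txPowerIndex, nbTrans: 1}
	for n := int((maxSNR - req.requiredSNR - installationMargin) / 3); n > 0; n-- {
		switch {
		case resp.dr < req.maxDR:
			resp.dr++ // spend margin on a faster data rate first
		case resp.txPowerIndex < req.maxTxPower:
			resp.txPowerIndex++ // then on a lower transmit power
		}
	}
	return resp
}

func main() {
	fmt.Println(conservativeADR(handleRequest{
		dr: 1, maxDR: 5, txPowerIndex: 0, maxTxPower: 7,
		requiredSNR: -15, uplinkSNR: []float64{2, 4, 1}, gatewayCount: 1,
	})) // prints: {2 0 1}
}
```

The design choice is deliberately conservative: the extra margin costs airtime on well-covered nodes but avoids stranding poorly covered ones, which is exactly the kind of trade-off a custom algorithm has to defend.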
I don’t even think my idea is impractical: many LoRa papers explore resource allocation, such as better-designed ADR, but most of them only use simulation tools rather than running the code on a real LoRa server.
By what subjective or objective criteria, and better for whom? The node owner, the gateway owner, the community generally, or the infrastructure/back-end provider? As measured by:

a) likely success of messages from a given node getting through,
b) whole-life-cycle battery energy consumption, and therefore field replacement needs and total cost of ownership,
c) minimised on-air time,
d) node-specific, or applied to all users of that NS or gateway set (and how to judge what is ‘best’ for one user vs another, or even one node vs another),
e) minimised TX power (similar to minimised airtime) to reduce the impact on other spectrum users and the risk of signal overspill raising the noise floor or causing interference in adjacent areas,
f) highest probability of the average node’s message in an area getting through,
g) highest confidence that a minimum threshold number of nodes in a given area get a message through,
h) minimised gateway receiver congestion time and collision probability (across what statistical mix of channels and SFs at a given gateway)?

I could go on, but you get my drift. An academic exercise possibly best left well alone, or if you must, then carried out in your own back-end/NS instance and with specific regard to other users of shared spectrum in a given area (there is a reason why a lot of such evaluation and algorithm research is done via simulation rather than actually on air!). I hate to be thought of as damping creativity, but I suggest you carefully pick your battles.