I wanted to investigate the power consumption of different spreading factors for research purposes. I noticed that my node instantly switches from SF7 to SF12 after the first message.
To reproduce the problem I have prepared the following setup:
Example ttn-abp.ino sketch from MCCI LMIC with fixed SF7
MCCI LMIC v4.1 (also tested with v3.3)
Fresh ABP Node in TTN v3 with ADR disabled and reset counter enabled
If I prevent the device from receiving downlink messages, it works the way I know it from v3. FYI: I disabled downlinks by changing the gateway's packet forwarder downlink port to a wrong UDP port.
Please don't do that; if you want to 'experiment', set up or purchase your own instance. TTN(CE) aka V3 rightly expects associated nodes to correctly handle, process and where necessary react/respond to any downlinks, inc. MAC commands, ADR settings, channel settings etc. In many cases, if you cripple the node, the NS will continue to attempt retries, even if only for a short while. This is a waste of both NS resources and spectrum/gw capacity… your experiment may be forcing additional gw downlinks that render said gw deaf to other uplinks in the community…
There's nothing to 'investigate' - you can 'dry lab' this since it's all very predictable.
Slower (higher) spreading factors take longer to transmit a message. You can calculate exactly (and I really mean exactly, as it's a critical part of the protocol) how long the airtime of a given message is at given air settings, including the spreading factor.
You then simply multiply that by the power consumption of the radio at a given power setting, plus that of the MCU if it stays awake rather than sleeping until the DIO completion interrupt…
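For illustration, a rough sketch of that calculation (assuming the standard SX127x time-on-air formula from Semtech AN1200.13; the current and voltage numbers are placeholders you'd replace with your radio's datasheet figures and your own MCU measurements):

```cpp
// Time-on-air per the SX127x formula, multiplied by an assumed TX current to
// get a per-uplink energy estimate. All electrical values are placeholders.
#include <algorithm>
#include <cmath>
#include <cstdio>

double timeOnAir(int sf, double bw_hz, int payload_bytes,
                 int preamble_symbols = 8, int coding_rate = 1 /* 4/5 */,
                 bool explicit_header = true, bool crc_on = true) {
    double t_sym = std::pow(2.0, sf) / bw_hz;   // symbol duration in seconds
    bool ldro = (t_sym > 0.016);                // rule of thumb: LDRO for SF11/SF12 at 125 kHz
    double num = 8.0 * payload_bytes - 4.0 * sf + 28.0
               + 16.0 * (crc_on ? 1 : 0) - 20.0 * (explicit_header ? 0 : 1);
    double den = 4.0 * (sf - (ldro ? 2 : 0));
    double n_payload = 8.0 + std::max(std::ceil(num / den) * (coding_rate + 4), 0.0);
    return (preamble_symbols + 4.25) * t_sym + n_payload * t_sym;
}

int main() {
    const double bw = 125000.0;    // EU868 default bandwidth
    const int payload = 13;        // example payload size in bytes
    const double i_tx_mA = 120.0;  // assumed TX current at +14 dBm - check your datasheet
    const double v_supply = 3.3;   // assumed supply voltage

    for (int sf = 7; sf <= 12; ++sf) {
        double toa = timeOnAir(sf, bw, payload);
        double energy_mJ = toa * i_tx_mA * v_supply;   // E = t * I * U
        std::printf("SF%-2d  airtime %7.1f ms  ~%6.2f mJ per uplink\n",
                    sf, toa * 1000.0, energy_mJ);
    }
}
```

Add the MCU's awake current over the same interval if it busy-waits instead of sleeping until the DIO interrupt.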
I am working on this as part of a scientific thesis. A normal MCU does not always behave deterministically, especially when confirmed messages come into play. A dry test is not an option. But that is not the point here.
My problem is that the NS switches the device to SF12 after its first message, which does not seem normal?
You shouldn't be using confirmed messages on TTN, and really not on LoRaWAN in general.
You also shouldn't base a thesis on uncontrolled air conditions at a given instant in time.
That or controlled lab circumstances are actually the only sort of thing that would have any true validity. Maybe you'd prefer to transmit into a resistive dummy load.
Show the actual contents of the downlink packet from the gateway raw traffic page, and show it as text, not a picture.
What's in the overview image you posted isn't any valid sequence of MAC commands in an obvious encoding, so we need to see the actual raw packet that's being pushed back towards the node, in base64, hex or already broken-down form.
Yes, I was wondering about that too. I hope the logs help. I don't know what to do anymore. The same node (hardware) works without problems with OTAA activation.
I can't dig into this fully at the moment, but what I think I'm seeing is an astoundingly huge number of downlink configuration items, possibly even self-repeating ones, all crammed into a packet of absurd size. Given that they don't fit in the FOpts, they're sent instead as a payload on port 0, which also means that they get encrypted with the network key and aren't recognizable in cleartext, as they would be if a smaller number of items were sent in the FOpts.
It's entirely possible that's breaking LMIC's parsing.
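For reference, when only a handful of items are sent they travel in the cleartext FOpts and are easy to read by hand. A rough sketch of a decoder for a few common downlink MAC commands (field layouts per the LoRaWAN 1.0.x spec; my own illustration, not LMIC code, and the example bytes are hypothetical):

```cpp
// Decode a few LoRaWAN 1.0.x downlink MAC commands from a cleartext FOpts field.
#include <cstdint>
#include <cstdio>
#include <vector>

void decodeFOpts(const std::vector<uint8_t>& f) {
    size_t i = 0;
    while (i < f.size()) {
        uint8_t cid = f[i++];
        if (cid == 0x03 && i + 4 <= f.size()) {            // LinkADRReq
            unsigned dr      = f[i] >> 4;                   // requested data rate index
            unsigned txpow   = f[i] & 0x0F;                 // requested TX power index
            unsigned chMask  = f[i + 1] | (f[i + 2] << 8);  // channel mask, little-endian
            unsigned nbTrans = f[i + 3] & 0x0F;             // repetitions per uplink
            std::printf("LinkADRReq: DR%u TXPower%u ChMask=0x%04X NbTrans=%u\n",
                        dr, txpow, chMask, nbTrans);
            i += 4;
        } else if (cid == 0x05 && i + 4 <= f.size()) {      // RXParamSetupReq
            std::printf("RXParamSetupReq: RX1DRoffset=%u RX2DR=%u\n",
                        (unsigned)((f[i] >> 4) & 0x07), (unsigned)(f[i] & 0x0F));
            i += 4;                                          // DLSettings(1) + Frequency(3)
        } else if (cid == 0x08 && i + 1 <= f.size()) {      // RXTimingSetupReq
            unsigned del = f[i] & 0x0F;
            std::printf("RXTimingSetupReq: RX1 delay = %u s\n", del ? del : 1u);
            i += 1;
        } else {
            std::printf("CID 0x%02X: not handled here, stopping\n", (unsigned)cid);
            break;
        }
    }
}

int main() {
    // Hypothetical bytes: LinkADRReq requesting DR0 (SF12 in EU868) on channels 0-7, NbTrans=1.
    decodeFOpts({0x03, 0x05, 0xFF, 0x00, 0x01});
}
```

A port-0 payload carries the same command bytes, but as FRMPayload it is encrypted with the NwkSKey, so on the gateway traffic page it just looks like noise.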
I notice far too many of your uplinks have a frame count of zero: is it possible your sketch is crashing and rebooting?
You should probably try an absolute vanilla example sketch for TTN ABP in your region.
But again, the tests you're actually trying to run don't belong on TTN at all, and for your findings to be repeatable they'd have to use simulated circumstances - transmitting into a dummy load, etc. Any attempt to consider actual network performance would have to be based on a theoretical model, since you can't permissibly collect enough real-world data to factor out uncontrolled variations.
Forget the background. My introduction was well intended, but it has led to more discussion than I expected. I will stop my tests and switch to calculations.
But the problem remains that ABP is not working, even with the "absolute vanilla example sketch for TTN ABP" from MCCI LMIC.
I have just done a test against a ChirpStack instance. When ADR is enabled in the code, it also switches to SF12 after the first uplink, but when I disable ADR (the example code with the only change being that I add LMIC_setAdrMode(0);), the problem does not occur! With the same code against TTN, the problem persists.
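For completeness, this is roughly the relevant part of my test sketch (based on the stock ttn-abp.ino from MCCI LMIC; the keys are placeholders and the helper function name is just for illustration - in the real sketch these calls sit in setup() after os_init(), next to the example's usual channel setup):

```cpp
#include <lmic.h>
#include <hal/hal.h>

// Placeholders - paste the real ABP session keys and device address from the console.
static u1_t NWKSKEY[16] = { 0 };
static u1_t APPSKEY[16] = { 0 };
static const u4_t DEVADDR = 0x00000000;

void configureLmicAbp() {
    LMIC_reset();
    LMIC_setSession(0x13, DEVADDR, NWKSKEY, APPSKEY); // ABP: no join, keys go straight in

    LMIC_setLinkCheckMode(0);    // as in the stock example
    LMIC_setAdrMode(0);          // the only functional change: node-side ADR off
    LMIC.dn2Dr = DR_SF9;         // TTN uses SF9 for the RX2 window (EU868)
    LMIC_setDrTxpow(DR_SF7, 14); // pin uplinks to SF7 at 14 dBm
}
```

Against ChirpStack this combination stays on SF7; against TTN the very first downlink still pushes the device to SF12.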
You're still showing an illegal repeat of frame count 0.
You need to figure out why thatâs happening and fix it.
I would watch the serial monitor and see if your node is crashing and restarting.
Also, you may want to disable 'reset frame counters'; I could imagine ways in which trying to non-compliantly waive that part of the LoRaWAN spec could actually be contributing to that absurd stack-up of multiple MAC commands.
But… if you have ChirpStack, and you're only sending dummy data anyway, why does interaction with TTN matter to you? It's possible you're hitting an unexpected corner-case bug in the TTN stack - but the fact remains that what you're doing is not normal usage - or all sorts of people would be hitting it.
Not what I asked - have you added the extra frequencies? If you look at your screenshot you'll see there is a button for it. The Network Server won't know the device knows the full 8 channels, which is why it is trying to send those settings.
The Network Server won't know that the device is set to 1 s, which is why it is trying to send that setting.
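For context, assuming EU868: this is roughly the channel setup the stock ttn-abp.ino hard-codes on the node, none of which the Network Server knows about for an ABP device until the extra frequencies are entered in the console (or pushed down via MAC commands):

```cpp
// Node-side channel list from the stock ttn-abp.ino (EU868). Channels 0-2 are
// the LoRaWAN defaults; the Network Server has to be told about the rest.
LMIC_setupChannel(0, 868100000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(1, 868300000, DR_RANGE_MAP(DR_SF12, DR_SF7B), BAND_CENTI);
LMIC_setupChannel(2, 868500000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(3, 867100000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(4, 867300000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(5, 867500000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(6, 867700000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(7, 867900000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
```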
Are you sure? The help text says it's only needed if the device uses non-default frequencies, but I have entered the frequencies - no change.
OTAA works fine: more than 120,000 messages in the last 6 months from 6 nodes. Same hardware, same software, but ABP causes problems.
With ABP, still the same problem on TTN v3. After the first uplink with SF7, the device switches to SF12. The same code worked with v2, and with ChirpStack it still works. I can't make sense of it.