Arduino Pro Mini (ATMega328P) with an RFM95W. Example code (MCCI LMIC example) I use for testing:
main.cpp (12.0 KB) platformio.ini (817 Bytes)
Circuit diagram: lorapromini_shematic.pdf (121.2 KB)
Thank you in advance!
The platformio.ini references v3.3.0, which is/was reasonably compliant and not much different in compile size from v4.1.1, but there are changes to the channel handling between the two.
That said, my original hand-built devices of Pro Mini + RFM95 were built with a much older classic LMIC (matthijskooijman's) and set up with ABP. Two are in the garden (somewhere) but migrated over from v2 to v3 without issue.
I'm not sure about some of the add-ons in your main.cpp, particularly the saving of keys to NVM - not really required for something that's meant to be on all the time and/or is technically session-less. I'd strongly advise using the vanilla ABP example that comes with v4, but please change the uplink interval to >300 s so that if it's left running, it's FUP friendly.
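For what that change looks like, here is a minimal sketch - assuming the example keeps the TX_INTERVAL constant and the sendjob/do_send names used in the stock example sketches (adjust to whatever your sketch actually uses):

// Seconds between uplinks. The stock examples use a much shorter value;
// 300 s or more keeps an always-on test device within the TTN Fair Use Policy.
const unsigned TX_INTERVAL = 300;

// The examples then reschedule the next uplink from the EV_TXCOMPLETE handler:
// os_setTimedCallback(&sendjob, os_getTime() + sec2osticks(TX_INTERVAL), do_send);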
Sorry you're having problems.
It is kind of hard to keep straight what's going on from the description and the discussion. However, nothing in the V4 LMIC will prevent it from listening to downlink MAC commands in response to Class A uplinks.
The description suggests that you are getting a downlink, but that's kind of irrelevant, because you can override all that after it's all done.
Why do you not call LMIC_setDrTxpow() right before each uplink, to force the data rate you want? The LMIC will always honor the most recent setting. As long as you wait until the previous uplink is complete (which may take a while, if there are MAC downlinks after the initial uplink), that call will override anything the network tells you. Use LMIC_queryTxReady() to determine whether the LMIC is ready to accept another uplink. The pattern is something like this. Sorry that I have no time to test.
if (LMIC_queryTxReady()) {
    // force the data rate we want; check for success
    bit_t fSuccess;
    fSuccess = LMIC_setDrTxpow(desiredDr, KEEP_TXPOW);
    if (! fSuccess) {
        Serial.println("setDrTxpow failed!");
        while (1)
            /* loop forever */;
    }

    lmic_tx_error_t txError;
    txError = LMIC_setTxData2_strict(port, data, nData, /* confirmed */ 0);
    if (txError != LMIC_ERROR_SUCCESS) {
        Serial.print("LMIC transmit rejected, error code ");
        Serial.println(txError);
        while (1)
            /* loop forever */;
    }
}
As always, when debugging, it's important to check all APIs to see if an error is returned. Also, bear in mind that these APIs have changed between classic LMIC, V3.3, and V4. The above example is for V4; the error codes and so forth may not be defined in earlier versions, and the APIs won't always return useful error codes if they fail.
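One way to catch that mismatch at compile time, rather than from confusing build errors, is a version guard. This is only a sketch and assumes the ARDUINO_LMIC_VERSION / ARDUINO_LMIC_VERSION_CALC macros exposed by the MCCI library:

// Fail the build early if compiled against a pre-V4 LMIC, where
// LMIC_queryTxReady() and lmic_tx_error_t don't exist.
#if ! defined(ARDUINO_LMIC_VERSION) || ARDUINO_LMIC_VERSION < ARDUINO_LMIC_VERSION_CALC(4, 0, 0, 0)
# error "This sketch requires MCCI arduino-lmic v4.0.0 or later"
#endif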
Good luck!
If I do the PRs, can we change the uplink interval for the TTN examples to something that is within FUP?
Also, the "Hello world" payload is cute, but is the direct opposite of all the advice here - perhaps I could change the payload to a byte array too?
Hopefully not in reasonable situations.
But earlier in the thread they got an absurd run-on sentence of multiple stacked MAC commands that had somehow been queued up (perhaps because they were previously ignored, and maybe some interaction with ignoring frame count resets allowed that).
YNHpCyYAAAAAUghdhbGWw4Y0Ck4EsCUrWs2T/1mt7BBq8bVENVzlFfRjX4va/a5C/b2UH87RTvU=
Which unfortunately is long enough that it's sent as port 0 traffic encrypted with the network key rather than in FOpts, so we can't know exactly what it contains, though there was an earlier log file that gave some of the details.
It wouldn't be hard in concept to imagine LMIC - especially on a '328 - choking on that, as it's likely outside of test cases. What would happen with an equivalent length of legitimate, repeating MAC commands is perhaps something that could be tested.
But where that downlink comes from is likely itself some sort of unusual case in the server, triggered by "shouldn't happen" behavior - not of the node stack per se, but of how it's being (ab)used in the poster's desire to do atypical and unnecessary on-air transmissions to "research" something that they should simply be modeling, or testing into a dummy load in the lab.
Hopefully not in reasonable situations.
Well, I was perhaps being over-precise. It is always listening during the receive windows. The radio may not catch the packet; if caught, the LMIC will look at the packet. It may not act, but that's a different question. This is true in both OTAA and ABP scenarios.
YNHpCyYAAAAAUghdhbGWw4Y0Ck4EsCUrWs2T/1mt7BBq8bVENVzlFfRjX4va/a5C/b2UH87RTvU=
Which unfortunately is long enough that it's sent as port 0 traffic encrypted with the network key rather than in FOpts, so we can't know exactly what it contains, though there was an earlier log file that gave some of the details.
It's pretty easy to decode this if you have the network key. There are several online or offline decoders that will help with this. For ABP, you have the network key, so... why not decode?
The OP might; I don't, unless it's leaked into the thread somewhere I didn't notice.
Okay, it turns out it was easier to grep these out of the earlier posted logfile than I thought.
And in looking through them, the "repetition" I was seeing in an earlier hasty attempt was because it's adding all of the additional channels that it doesn't know whether the node already knows about.
MACCommand.RxTimingSetupReq
MACCommand.LinkADRReq
MACCommand.NewChannelReq
MACCommand.NewChannelReq
MACCommand.NewChannelReq
MACCommand.NewChannelReq
MACCommand.NewChannelReq
MACCommand.RxParamSetupReq
And most of these do look like they're getting responded to by the node.
In my mind the question then becomes whether, as a side effect of implementing the intent of one of them, LMIC coincidentally (or even in a spec-mandated fashion) goes to DR0.
And the answer is, of course, that's exactly what a MACCommand.LinkADRReq does! Only it's not obvious, because the logfile is only breaking out the enabled-channels portion of it, and not the data rate and power command portion...
The heart of the matter is that LoRaWAN doesn't make it possible to set the channel map without also commanding the node's data rate and transmit power. In a way, the spec makes a bit of ADR mandatory regardless of whether one wants it or not.
But TTN may not yet have a good "ADR link model" for a node it's still trying to basically configure (and one that's not requesting ADR anyway), and so may just be playing it safe with DR0.
LinkADRReq carries the DataRate and TxPower in byte 1, and that sets the "requested" data rate and power for uplink (see the byte-layout sketch below).
NewChannelReq carries the min and max permitted data rates for each added channel (in addition to frequency); but it doesn't mandate the current requested data rate.
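For reference, here is the LoRaWAN 1.0.x layout of that LinkADRReq payload, as a rough illustration only (the struct and helper are hypothetical, not LMIC API):

#include <stdint.h>

// Unpack the 4-byte LinkADRReq payload (the bytes following CID 0x03).
typedef struct {
    uint8_t  dataRate;   // upper nibble of payload[0]: requested uplink data rate
    uint8_t  txPower;    // lower nibble of payload[0]: requested TX power index
    uint16_t chMask;     // payload[1..2]: channel mask, little-endian
    uint8_t  chMaskCntl; // bits 6..4 of payload[3]: channel mask control
    uint8_t  nbTrans;    // bits 3..0 of payload[3]: number of transmissions
} link_adr_req_t;

static void decodeLinkADRReq(const uint8_t payload[4], link_adr_req_t *out) {
    out->dataRate   = payload[0] >> 4;
    out->txPower    = payload[0] & 0x0F;
    out->chMask     = (uint16_t)payload[1] | ((uint16_t)payload[2] << 8);
    out->chMaskCntl = (payload[3] >> 4) & 0x07;
    out->nbTrans    = payload[3] & 0x0F;
}

Even a LinkADRReq sent only to adjust the channel mask therefore always carries data rate and power fields.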
So the network is definitely telling the LMIC something about what data rates to use. And the intent of the V4 LMIC is to follow spec-mandated requests. Would have to look more closely at the payloads to know what the network is saying.
The LMIC V4 code does not allow ABP nodes to prevent the stack from automatically following the network (at least temporarily). The client code for an ABP node basically has to repeat all the downlink-adjustable settings before each uplink, because the LMIC unconditionally overrides them based on MAC-layer downlinks.
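A rough sketch of what that repetition can look like just before each send, using the V4 calls (desiredDr, desiredTxPow, port, data and nData are placeholders for whatever the application wants):

// Re-assert locally desired settings right before every uplink, since
// MAC downlinks may have changed them since the last one.
if (LMIC_queryTxReady()) {
    LMIC_setLinkCheckMode(0);                  // no link-check MAC traffic
    LMIC_setAdrMode(0);                        // request ADR off
    LMIC_setDrTxpow(desiredDr, desiredTxPow);  // force our data rate / power
    LMIC_setTxData2(port, data, nData, 0);     // queue an unconfirmed uplink
}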
Things get even more complicated if you consider Class B or Class C; but as these are not supported in V4.1, it's not (yet) an issue.
Current best practice is to use OTAA. Failing that, second best is to enter the channel plan on the console for the device so that the NS doesn't feel the need to tell it what it can use.
But rather more importantly, based on the number of times I and others have dealt with issues here:
Please?
Failing that, second best is to enter the channel plan on the console for the device so that the NS doesn't feel the need to tell it what it can use.
Well, the LMIC is not TTN-specific; ABP application requirements are network-specific; so you can't guarantee you can even do that.
If I do the PRs, can we change the uplink interval for the TTN examples to something that is within FUP?
I accept PRs, though I admit it takes me a while to get to them. I suggest you start by raising an issue on the LMIC; I hadn't thought about the FUP in the context of the sample apps because I view them as of limited utility (check to make sure the device is working, not "start from this"). Of course, that's just short-sightedness on my part.
Bear in mind that FUP (Fair Use Policy) is a Things Network concept, not a LoRaWAN concept. We have a lot of users who are using other networks: ChirpStack, Helium, etc. But the "ttn*.ino" sketches could be modified to adhere to the policy, as long as we clearly document that the delay is needed for TTN compliance (not LoRaWAN compliance).
Sorry that I don't spend much time on this forum; it's not because I don't care, but... I'm not a person of independent means. I follow issues on the LMIC GitHub site much more closely.
By the way, the biggest single problem that people report against the LMIC is that they can't get downlinks working. It would be wonderful to improve the sample sketches so that there's a step-by-step way to "get your radio working".
I'm not saying remove it from LMIC, I'm putting it in the context of its application to TTS.
Already anticipated.
Same here. It's not usually an LMIC issue, more about placement of gateway & node (too close usually, sometimes a 3rd-party gateway too far away). But there may be something I can create with some bullet points in it - one for general use and one for TTN (where the FUP is 10 downlinks/day and the preference is 1 per fortnight).
A universal challenge - I have LMIC under control, so I answer questions here and don't spend much time on GitHub.