Apparently the Feather M0 started working on its own (I didn’t do anything specific), and I was able to flash the firmware. However, I noticed something strange. The distance between the device (interfaced with the solar PV) and the gateway is about 300 meters. Since the Feather was not working on the rooftop, I took it to the laboratory where my gateway is present. When I powered the Feather, surprisingly, it got connected immediately, and the device started sending data (it’s junk, though). Once it got connected, I took it to the rooftop hoping that it would work. But it was not communicating with the gateway.
It seems to me that range could be the issue, but it’s just 300 meters, and on top of that, the same Feather had communicated a lot of data from the same place in the past. When I brought it back to the lab, it got connected and started sending data. It seems that there are no issues so long as the device is in the vicinity of the gateway. No clue as to why it is not able to communicate when the device is taken to the rooftop…
Was just thinking whether changing the hardware is really going to solve this problem! What I mean to say is: what additional benefit can a certified LoRa module (with the full LoRaWAN stack on board) bring compared to a radio chip + LMIC? I’m sorry to ask this again and again, but I’m still not clear about the difference between the two (certified module vs (LoRa chip + LMIC)) from the viewpoint of reliably establishing communication between the device and the gateway (which was my first question in this post). Hoping to get this doubt clarified. Thanks for your understanding.
What is the construction of the building your gateway is in?
BTW, a gateway in the lab with a node on the rooftop is kind of the reverse of the usual deployment. You want a gateway with a decent antenna on top of a building, not inside it, if possible.
Have you checked the antenna connection of your node?
A module could be as simple as a LoRa radio transceiver supporting LoRa encoding, or it could also contain a processor running the LoRaWAN stack (or similar). In your question, (LoRa chip + LMIC) should more accurately be (LoRa chip + processor + LMIC stack).
Now, LMIC is a piece of software running on the processor. The user application communicates with the radio via the LMIC API. LMIC is one specific software implementation. A company that is producing its own radio + processor modules may or may not use LMIC between the radio and the processor; that is, they could be using some other software stack on the processor to interact with the radio.
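For illustration, an application sitting on top of LMIC on its own processor looks roughly like the sketch below (classic LMIC C API; the board-specific HAL/pin configuration is omitted and the identifiers/keys are placeholders, so treat it as an outline rather than a working example):

```c
#include <string.h>
#include <stdint.h>
#include "lmic.h"

static const u1_t APPEUI[8]  = { 0 };   /* placeholder */
static const u1_t DEVEUI[8]  = { 0 };   /* placeholder */
static const u1_t APPKEY[16] = { 0 };   /* placeholder */

/* LMIC pulls the OTAA identifiers through these application callbacks. */
void os_getArtEui (u1_t *buf) { memcpy(buf, APPEUI, 8); }
void os_getDevEui (u1_t *buf) { memcpy(buf, DEVEUI, 8); }
void os_getDevKey (u1_t *buf) { memcpy(buf, APPKEY, 16); }

static osjob_t sendjob;

static void do_send(osjob_t *j)
{
    static uint8_t payload[2] = { 0x01, 0x02 };
    /* Queue an unconfirmed uplink on port 1; if the device has not joined
     * yet, LMIC starts the OTAA join first. */
    LMIC_setTxData2(1, payload, sizeof payload, 0);
    /* Schedule the next uplink in 60 s. */
    os_setTimedCallback(j, os_getTime() + sec2osticks(60), do_send);
}

/* The stack reports join/TX/RX progress through this event callback. */
void onEvent(ev_t ev)
{
    switch (ev) {
    case EV_JOINED:     /* OTAA join accepted */                break;
    case EV_TXCOMPLETE: /* uplink sent, RX windows closed */    break;
    default:                                                    break;
    }
}

int main(void)
{
    os_init();                          /* init run-time environment + radio HAL */
    LMIC_reset();                       /* reset MAC state */
    os_setCallback(&sendjob, do_send);  /* kick off the first uplink */
    os_runloop();                       /* LMIC scheduler, never returns */
}
```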
Irrespective of whether we are talking about modules that contain only a LoRa radio transceiver or modules that contain a radio + processor, these are available as either certified or non-certified.
The real question, “module + embedded processor to handle the stack” or “module plus external processor running the stack”, comes down to what level of abstraction you want between your application code and the radio transceiver, and how much you really want to be involved in controlling the operation of the radio.
A module that incorporates its own stack (LMIC or otherwise) is simple to implement, and consequently your product can have a much shorter development cycle. You typically communicate with these modules via a serial port. If there is a requirement to upgrade the software for this type of module, the update must come from the manufacturer.
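To make that concrete, the interaction with such a module over the serial port is typically a small set of text commands. The sketch below uses the Microchip RN2483 command syntax purely as an example, and the uart_send_line()/uart_wait_reply() helpers are hypothetical placeholders for whatever serial driver your host processor provides:

```c
#include <stdbool.h>

/* Hypothetical serial helpers: send one command line, and wait for a line
 * containing the expected reply within a timeout (in ms). */
bool uart_send_line(const char *cmd);
bool uart_wait_reply(const char *expected, int timeout_ms);

static bool join_and_send(void)
{
    uart_send_line("mac set appeui 0000000000000000");                 /* placeholder values */
    uart_send_line("mac set appkey 00000000000000000000000000000000");
    uart_send_line("mac join otaa");
    if (!uart_wait_reply("accepted", 30000)) {   /* module prints "ok", then "accepted" or "denied" */
        return false;
    }
    uart_send_line("mac tx uncnf 1 AABB");       /* unconfirmed uplink on port 1, payload 0xAABB */
    return uart_wait_reply("mac_tx_ok", 30000);
}
```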
If you go the other way and build a solution using your own processor and LMIC (or another software stack), it is now your problem to ensure that your application does not impact the operation of the radio or the LoRaWAN protocol, and you are responsible for ensuring that the software stack is maintained.
From a cost perspective, a radio module alone, non-certified, costs about $5 in very low quantities, whereas a module with an embedded processor that is FCC certified is around $20 in very low quantities. As already discussed earlier in this thread, pre-certified modules can significantly reduce the cost of bringing a product to market, provided your system is designed within the constraints of the certification of the respective modules.
Since it was working fine in the past (I mean since it was transmitting), I did not bother to move it outside of the building. I’ll move it outside of the building and see if it gets connected.
Upon reading your questions, I now have a doubt. I’ve set the spreading factor to SF7, and the ADR is turned off in the LMIC code which I’m using. Could this also be a reason why it’s not able to communicate?
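For reference, these are (roughly) the relevant calls in the LMIC code I use (classic LMIC API, EU868 build; the function name is just mine):

```c
#include "lmic.h"

static void fix_datarate(void)
{
    LMIC_setAdrMode(0);            /* ADR disabled */
    LMIC_setDrTxpow(DR_SF7, 14);   /* fixed data rate SF7, 14 dBm TX power */
}
```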
Is there any specific test to do that? Something like S11 and S21 measurements using a VNA?
Ah, it’s working after moving it to a different location (away from the obstructions). I’m in the process of putting it inside the box and moving it outside of the building.
Interesting to see this discussion; it seems to be about “how to make an end node”, and it surprises me that no one is mentioning Semtech’s LoRaWAN stack.
I am working on making an end node myself, and it is a painful experience. Everything else is working (gateways: I use Raspberry Pi based and indoor gateways; TTN and HTTP integration to my own app server; and, in the end, a JavaScript map app). But I struggle with the end node.
As far as I have seen, there are only two proper ways to make an end node (software) from scratch: use either the “Semtech LoRaWAN endpoint stack”, https://github.com/Lora-net/LoRaMac-node, or mbed, starting with their “LoRaWAN network interface”, https://os.mbed.com/docs/mbed-os/v5.15/apis/lorawan-api.html#lorawaninterface-class-reference.
For my project it is difficult, because I use a microcontroller that is not supported by the Semtech stack. But for the Feather M0 it should take little effort to get the Semtech stack running, as there is already a SAM board supported. And with mbed it should work almost out of the box.
I am not too happy about mbed; it is just too big. I like Zephyr much better, it is much cleaner and easier to understand, but it has no LoRaWAN support. So now I am working on two approaches: use the Semtech LoRaWAN endpoint stack on my own OS, or port the stack to Zephyr. I use the mbed alternative as a kind of reference, since it seems to work on my board (tested it for the first time yesterday).
There are multiple options, depending on how you define “from scratch”. For speed of development and easy certification you could go for one of the LoRaWAN modules that contain their own stack and interface with your controller.
If you mean you want to integrate the stack yourself, yes then you are setting yourself up for a lot more work and ‘pain’.
BTW, LMiC is a valid option for some controllers; there is a version that is actively maintained.
Yes, I have integrated the stack myself, and everything works, both on Zephyr and my own OS… except for one thing:
I’m not able to receive the Join Accept message. But with ABP_ACTIVATION, I can both send messages to and receive messages from the gateway.
This is the first time I try to use this forum and I’m not sure if it is the right place to ask either. But, if nothing else, I may get advice about where to ask.
As mentioned earlier, the stack runs under mbed on my own hardware (Nordic nRF52832). I suppose the most likely cause of the problem (since everything else works) is the RX window timing. So I put some timestamps in the code, and I found a time difference of about 12 ms between the mbed stack and my own port of the Semtech stack.
I use the latest develop branch (https://github.com/Lora-net/LoRaMac-node), and I take a timestamp sample just before the “RadioEvents->TxDone( );” line (1608) in the …/src/radio/sx1276/sx1276.c file. The next sample is taken just after the “TimerStart( &RxTimeoutTimer );” on line 1020 in the same file. This should give one timestamp after TxDone and one before StartRX. I do the same in the mbed source (although the mbed stack is completely different, the low-level sx1276 driver seems to be the same), and I get a 12 ms time difference between the two.
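For what it’s worth, the timestamping itself is nothing fancy; a sketch of what I use is below (my own helper names, assuming the plain CMSIS cycle counter of the nRF52832’s Cortex-M4 running at 64 MHz):

```c
#include <stdint.h>
#include "nrf.h"                 /* pulls in CMSIS core definitions for the nRF52832 */

static inline void cyccnt_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the DWT block */
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start the cycle counter */
}

static inline uint32_t cyccnt_us(void)
{
    return DWT->CYCCNT / 64u;                         /* 64 MHz core clock -> microseconds */
}

/* In sx1276.c I then read cyccnt_us() just before RadioEvents->TxDone( )
 * and again just after TimerStart( &RxTimeoutTimer ); the difference should
 * land close to JOIN_ACCEPT_DELAY1 (5 s) for a join request, minus whatever
 * margin the stack uses to open the window early. */
```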
I managed to get this timing equal, to within a fraction of a ms, by changing EU868_JOIN_ACCEPT_DELAY1 and EU868_JOIN_ACCEPT_DELAY2 in the …/src/mac/region/RegionEU868.h file to 5012 and 6012.
I know that this is NOT the way to do it (and the stack still doesn’t work), and I see that Semtech’s “Porting Guide” writes about “RX Window Calculation” and “RX Window Calibration” (http://stackforce.github.io/LoRaMac-doc/_p_o_r_t_i_n_g__g_u_i_d_e.html), although it is only mentioned in a few lines, and I cannot understand how to use it.
How should I do this calculation and calibration?
And how/where do I measure it?
I need a method to verify that the timing is correct, so I can get it to work, or look for the errors somewhere else.
As far as I can understand, this is the most critical part of the LoRaWAN stack, and it is only mentioned in a few lines of the guide…
What is going on in the “code snippet” that “shows how to update those values during runtime”?
And how do I use it? My best guess at reading it is sketched below.
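My reading of it (an assumption on my part, based only on the guide, not verified on hardware) is that the snippet pushes the worst-case timer error and the minimum RX symbol count into the MAC through the MIB interface, which the stack then uses when it decides how early to open each RX window and how long to keep it open. Something like:

```c
#include "LoRaMac.h"   /* LoRaMac-node MIB interface */

static void set_rx_window_margins(void)
{
    MibRequestConfirm_t mibReq;

    mibReq.Type = MIB_SYSTEM_MAX_RX_ERROR;   /* worst-case timing error of my timers, in ms */
    mibReq.Param.SystemMaxRxError = 20;
    LoRaMacMibSetRequestConfirm( &mibReq );

    mibReq.Type = MIB_MIN_RX_SYMBOLS;        /* minimum number of symbols to detect a downlink */
    mibReq.Param.MinRxSymbols = 6;
    LoRaMacMibSetRequestConfirm( &mibReq );
}
```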
I would be very grateful for the guidance.
By the way, I define “from scratch” as using Semtech’s reference implementation in such a way that my port stays up to date with the reference implementation at all times.
I have also been looking at “LoRa Basics MAC” (https://lora-developers.semtech.com/resources/tools/basic-mac/welcome-basic-mac/), but I found their build system very hard to follow, with no documentation. So I do not like the idea of starting to port this to the nRF52832. But from what I read about it, this might be the best implementation to use.
Blip a GPIO when you enter/exit transmit and receive, and measure it with a digital scope or logic analyzer.
Also consider putting another probe on the gateway’s transmit LED. If you trigger off the node, extraneous gateway operations talking to other nodes may not be too much of an issue, as you’d be looking for the transmission that (almost) lines up or corresponds to the correct amount of time after the end of the transmit packet.
Also make sure that the width of the transmission on your GPIO accurately matches the calculated packet time; if not, there may be an issue with end-of-transmission detection.
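A minimal sketch of the GPIO blip, assuming the nRF5 SDK’s nrf_gpio API (the pin number is just a placeholder):

```c
#include "nrf_gpio.h"   /* nRF5 SDK GPIO HAL */

#define DBG_PIN 27      /* placeholder: any free pin routed to a test point */

static inline void dbg_pin_init(void) { nrf_gpio_cfg_output(DBG_PIN); }
static inline void dbg_pin_high(void) { nrf_gpio_pin_set(DBG_PIN); }
static inline void dbg_pin_low(void)  { nrf_gpio_pin_clear(DBG_PIN); }

/* Drive the pin high while the radio is transmitting (set it when TX starts,
 * clear it on TxDone) and pulse it again when the RX window opens; trigger
 * the scope on the falling TX edge and measure to the RX pulse. */
```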
Do you have the BLE SoftDevice linked in? If so, you may need to think about its interactions timing-wise.
I have run into USB problems with other devices, and it seems to me the problem is on the PC end, because other computers work (for a while). It might be voodoo, but one thing that seemed to help was to plug in a UNO clone, start up the IDE, pick the correct board & port, check it with the blink code, then shut everything down and restart with the problem device.
@haresfur The problem is two-fold: when the processor contains an embedded USB peripheral, then during reset or sleep the embedded peripheral stops talking on the USB bus. As a consequence, the host sees the loss of connectivity, which can crash the application that was communicating with the device. The reset case is worse because the micro-controller comes out of reset and enables the USB bootloader, then passes control to the application. When this happens, the USB descriptors on the processor all change. Any connection is obviously disrupted, and the impact on the host depends on both the application and the USB stack on the host.
The solution to this problem is straightforward: if at all possible, avoid using the embedded USB peripheral of the micro-controller if you plan to put the processor to sleep. Instead, add an external USB controller. This peripheral remains active while the processor is sleeping. The controller can be powered by the USB bus (and only the USB bus); in this way, if no host is connected, the USB controller does not draw current. The USB controller will be active, and remain active, while powered by the host, irrespective of the sleep/reset state of the micro-controller.
Interesting. I don’t really understand the setup you suggest, could you provide more detail? How is the external USB controller connected to the device and how are the data transmitted to/from it? Is there one that you suggest for this? Thanks.
Here is an example of one of the USB-to-serial modules. You connect the serial interface of this module to a serial port on your micro-controller. From your micro-controller’s perspective, you communicate via that serial port instead of the embedded USB port of the micro-controller.