Hi,
I have about six or seven gateways being affected by a device that is attempting a Join Request about once every second.
Is there any way I can block or blacklist this device? Apart from the noise/DoS issues, it's approximately 3 GB of data per gateway per month, which gets a little pricey for the 4G gateways.
The Join EUI is 70 B3 D5 7E D0 01 54 83, which is a TTN device, but it never gets a response.
How do you know? Registered? Is one of your GWs also sending the Join Accept back to the device, such that we then know it is indeed recognised as a TTN-registered device?
We usually avoid looking at blacklist/whitelist situations due to the open and ‘free’ nature of the community network, our limited view/knowledge of others’ activities, and the manifesto guidelines, unless it is an extreme abuse of the community resource (which this one may be). Triangulating via data (vs RF!) and identifying the device (and owner) is usually preferred. This is easier if we can be certain it is on TTN, otherwise man your directional antennae!
Or post the details on Slack to see if TTI engineers fancy looking for it - should take them only a few seconds to track down given EUI & frequency of JR.
As for stopping your routers from sending the JR over the backhaul, that would be a menu item in RouterOS if there were such a thing, which there may well be if you are feeling lucky. The question is: are you feeling lucky?
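If RouterOS doesn’t oblige, a small relay anywhere on the backhaul path can do the same job, because the rogue Join Request always starts with the same nine bytes (MHDR plus the JoinEUI in little-endian, on-air order), which base64-encode to a fixed 12-character prefix of the rxpk data field. A rough, untested sketch, assuming the gateways run the legacy Semtech UDP packet forwarder and can be repointed at the box running it (hostnames and ports below are illustrative):

```python
#!/usr/bin/env python3
"""Hypothetical filtering relay between a Semtech UDP packet forwarder and the
LNS.  A sketch only: it assumes the gateways run the legacy UDP forwarder and
that its server_address can be pointed at the box running this script."""

import base64
import select
import socket

LISTEN = ("0.0.0.0", 1700)                          # forwarder now points here
UPSTREAM = ("eu1.cloud.thethings.network", 1700)    # the real LNS endpoint

# MHDR 0x00 (Join Request) followed by the JoinEUI in on-air (little-endian)
# byte order: 9 bytes encode to exactly 12 base64 chars, so this is the fixed
# prefix of the rxpk "data" field for the rogue device.
JOIN_EUI = bytes.fromhex("70B3D57ED0015483")
MARKER = base64.b64encode(b"\x00" + JOIN_EUI[::-1])

PUSH_DATA, PUSH_ACK, PULL_DATA = 0x00, 0x01, 0x02

def main():
    south = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # forwarder side
    south.bind(LISTEN)
    north = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # LNS side
    push_addr = pull_addr = None

    while True:
        for sock in select.select([south, north], [], [])[0]:
            data, addr = sock.recvfrom(65535)
            if len(data) < 4:
                continue
            if sock is south:                       # uplink from the forwarder
                if data[3] == PUSH_DATA:
                    push_addr = addr
                    if MARKER in data:
                        # Ack locally and drop: no metered backhaul is spent on
                        # the rogue JR.  Caveat: a PUSH_DATA can batch several
                        # rxpk entries, so this discards the whole frame.
                        south.sendto(data[:3] + bytes([PUSH_ACK]), addr)
                        continue
                elif data[3] == PULL_DATA:
                    pull_addr = addr
                north.sendto(data, UPSTREAM)
            else:                                   # downlink from the LNS
                dest = push_addr if data[3] == PUSH_ACK else pull_addr
                if dest:
                    south.sendto(data, dest)

if __name__ == "__main__":
    main()
```

That only saves the metered bytes, of course; the RF and concentrator cost still stands.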
PS: No point redacting gateway info, it’s public knowledge by a variety of means.
Having the gateway drop a device only solves part of the problem, and arguably the smallest part. Basically any blacklisting scheme will only save the very small amount of server and backhaul processing involved, and as an LNS and ‘t Internet’ are both designed to operate at (massive!) scale, the saving is relatively minor.
That is why I always recommend nailing the device/user if at all possible!
The GW will still have to ‘do its thing’ - as will ALL GWs in hearing range. On detection of preamble/signal the GW internals will kick into gear and schedulers allocate resources to route, store, process and subsequently decode the errant message. That consumes valuable GW resources and creates an opportunity cost of not then having resources to handle coincident messages. Only after decode will the GW have sufficient data to make a blacklisting decision (Join EUI, Dev EUI or whatever), by which time the GW has done 90% of what it would have had to do anyhow… then it needs to run an exception/reject process…
Worst of all, blacklisting does nothing to stop the waste of arguably the most precious resource of all - the RF spectrum: the airtime/capacity and the effective short-duration rise in noise floor that comes with the rogue device Tx’ing… Nah; the best option is to find it and kill it (if possible), then everyone benefits: other RF spectrum users, GW owners, the full community and all GWs in range, and by extension you also get the backhaul comms and LNS load-reduction win! It can be a pain to track down but has the greatest return…
Looking at a JR, or at the NetID in an uplink, takes so little horsepower it could be done with a slide rule, and the reject process is simply to stop.
In the meanwhile the OP pays for the pleasure of sending this JR over a metered connection, or could use the filtering tools that they have yet to discover in RouterOS to make this less painful.
My best RDF adventure was finding an errant HAB payload - it took some time & patience and was under a deadline of battery life. Somewhat stressful.
Whereas asking TTI to provide details of the device, so that the OP can go straight to the user (via email or physically) without making out like WWII operators on the South Downs, would be far more efficient.
Or report the situation to the FCC and let them go hunting.
Likely for them (or another local regulator) it’s a don’t-care if it’s under legal duty-cycle limits or (as this is US/AU/NZ?) under dwell-time restrictions.
Also it’s not about the GW horsepower (minor), it’s the sucking up of (LoRa demod etc.) resource… Before you can even look at the JR etc., simplistically you have to have allocated and ‘consumed’ ~1/8th of the capacity of the (classical 8-channel GW) concentrator card/baseband for the duration of the message processing/decode… The CPU cycles taken to scrub it are minimal, but you stand a chance (typically in busy traffic areas) of missing other (legitimate) messages due to that LoRa silicon resource allocation.
I’ll give this a go. I already have a rough idea of the location based on RSSI from multiple gateways. I guess it’s time to break out a Yagi, offset attenuator and TinySA. At least the regular transmissions will make it easier to track down.
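(For anyone wanting to do the same, the sort of first guess I mean is nothing cleverer than a weighted centroid of the gateway positions, along these lines; the coordinates and RSSI values are placeholders and the weighting is a crude heuristic, not proper multilateration.)

```python
"""Back-of-envelope first guess from per-gateway RSSI: a weighted centroid of
the gateway coordinates, weighting stronger receptions more heavily.  The
positions and RSSI values below are made up, and the 10**(rssi/20) weighting
is an arbitrary but monotonic choice - good enough to pick a direction to
drive, no more."""

# (latitude, longitude, best RSSI in dBm heard from the rogue device)
gateways = [
    (51.5010, -0.1420, -97),
    (51.5074, -0.1278, -105),
    (51.4900, -0.1350, -118),
]

def weighted_centroid(gws):
    weights = [10 ** (rssi / 20) for _, _, rssi in gws]
    total = sum(weights)
    lat = sum(w * g[0] for w, g in zip(weights, gws)) / total
    lon = sum(w * g[1] for w, g in zip(weights, gws)) / total
    return lat, lon

print(weighted_centroid(gateways))
```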
Granted it has been doing this every 1-2 seconds for over two months, so no rush to get out there and deal with it now.
True, that is part of the TTI MA-S block. The challenge is that many were used during TTN V2 and may have been re-used/badly migrated to V3, or simply orphaned. Or a developer may have used it to test on TTN or a TTI discovery instance (say) before moving to their own instance or another network, simply taking the device credentials with them - hence wondering if by chance you also saw the Join Accept through any of your GWs (that would confirm TTN/TTI still thinks it’s a valid registered device). As Nick suggests, a Slack (#support?) post flagging the device to the TTI core team may elicit some results from the current or historic user database. As Nick also suggests, getting out a Yagi to track solo can be painful - it is often easier if a group/community effort can be organised (if you have help around you!)
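If you want to check for that Join Accept programmatically rather than by eyeballing the console, the give-away at the gateway is timing, since the Join Accept itself is encrypted and carries no DevEUI: a downlink scheduled JOIN_ACCEPT_DELAY1/2 (5 s or 6 s) after the rogue JR’s rxpk tmst. A rough sketch, assuming you are already tapping the forwarder’s UDP traffic in both directions (the two hook functions here are hypothetical):

```python
"""Sketch: spot whether the LNS ever answers the rogue Join Request.
The Join Accept carries no DevEUI, so the only clue at the gateway is a
PULL_RESP whose txpk tmst sits JOIN_ACCEPT_DELAY1/2 after the rogue JR's
rxpk tmst."""

import json

JOIN_ACCEPT_DELAYS_US = (5_000_000, 6_000_000)   # JOIN_ACCEPT_DELAY1/2
TOLERANCE_US = 50_000                            # scheduling slack, arbitrary
rogue_jr_tmst = None

def on_rogue_push_data(payload_json: bytes):
    """Feed this the JSON of a PUSH_DATA that matched the rogue JR marker."""
    global rogue_jr_tmst
    for rxpk in json.loads(payload_json).get("rxpk", []):
        rogue_jr_tmst = rxpk.get("tmst")          # 32-bit µs concentrator clock

def is_join_accept_for_rogue(pull_resp_json: bytes) -> bool:
    """True if this PULL_RESP lands in the rogue JR's join-accept window."""
    if rogue_jr_tmst is None:
        return False
    tmst = json.loads(pull_resp_json).get("txpk", {}).get("tmst")
    if tmst is None:                              # immediate/class-C downlink
        return False
    delta = (tmst - rogue_jr_tmst) & 0xFFFFFFFF   # handle counter wrap
    return any(abs(delta - d) <= TOLERANCE_US for d in JOIN_ACCEPT_DELAYS_US)
```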
Indeed, where manufacturers often provide details/credentials of their own for setting up devices on a LoRaWAN network, many hobby/small-scale developers have been known to hop onto TTN or a TTI instance, get creds generated, then move to private deployments! (as above)
p.s. BTW how have your self-build ODUs been behaving? It must be 6 years since we last discussed these (old Lairds re-housed?, etc.). One of the ones I showed you lasted around 3-3.5 years before the MCU/comms control module died (not sure why); I wasn’t able to fix it so swapped it out - the others are going strong.