As a first defence, a gateway should not forward packets that have a bad CRC at the LoRa modulation level, nor packets that use a non-LoRaWAN sync word. (It probably would not even see the latter: the sync word is typically configured in the LoRa receiver itself, so packets with a different sync word never reach the gateway's software at all.)
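As a sketch of that first filter: the common Semtech UDP packet forwarder reports each received packet as an `rxpk` JSON object with a `stat` field for the CRC result (1 = CRC OK, -1 = CRC failed, 0 = no CRC). Assuming that format, the forwarding decision is a one-liner; the function name and sample data below are just for illustration:

```python
# Sketch: drop packets whose radio-level CRC did not validate, assuming
# the Semtech UDP packet forwarder's "rxpk" JSON, where "stat" is
# 1 = CRC OK, -1 = CRC failed, 0 = no CRC present.
def crc_ok_packets(rxpk_list):
    """Keep only packets whose radio-level CRC validated."""
    return [pkt for pkt in rxpk_list if pkt.get("stat") == 1]

received = [
    {"data": "QHcbASYAAQAB", "stat": 1},   # CRC OK: forward
    {"data": "genWkD3n1uF9", "stat": -1},  # CRC failed: drop
]
assert len(crc_ok_packets(received)) == 1
```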
Next, even when the CRC does seem good, the packet might still be random noise, as the expert known as ‘LoRaTracker’, who posts on here sometimes, has nicely explained. But after receiving a packet, TTN needs both the DevAddr and the MIC to match before it delivers the packet; see How does a network know a received packet is for them? The DevAddr is 32 bits, though due to the systematic assignment not all of those bits may account for entropy. The MIC is another 32 bits, all of which TTN validates. On top of that, the MIC is computed over the full 32-bit frame counter, while a LoRaWAN 1.0.x message only transmits the counter's 16 least-significant bits, so the network must also infer the upper 16 bits correctly for the MIC to validate.
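That counter inference can be sketched as follows: the network remembers the last full counter it accepted and, assuming counters only increase, reconstructs the upper 16 bits from the 16 transmitted bits (a hypothetical helper for illustration, not TTN's actual code):

```python
# Sketch: reconstructing the full 32-bit frame counter from the 16 bits
# that a LoRaWAN 1.0.x uplink actually transmits, assuming the counter
# only increases and never skips more than 65,535 values at once.
def infer_full_counter(last_full_cnt: int, received_lsb16: int) -> int:
    upper = last_full_cnt & 0xFFFF0000
    candidate = upper | received_lsb16
    if candidate <= last_full_cnt:      # the 16-bit counter rolled over
        candidate += 0x10000
    return candidate

assert infer_full_counter(0x0000FFFE, 0xFFFF) == 0x0000FFFF
assert infer_full_counter(0x0000FFFF, 0x0001) == 0x00010001  # rollover
```

If the inferred counter is wrong, the MIC computed over it will not match, so the message is rejected rather than silently misinterpreted.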
Finally, the application payload is decrypted using the secret AppSKey. When using an efficient binary encoding (that is: when not sending plain text), decrypting a mangled payload (or using the wrong key) would simply yield a different result, which would very likely go undetected. But of course, the MIC has already been validated at this point, and the MIC also covers the encrypted application payload (and the uplink frame counter).
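To see why decryption alone cannot catch a wrong key: LoRaWAN payload encryption is effectively an XOR with an AES-derived keystream, so any key "succeeds" and just produces different bytes. Here a toy repeating-key XOR stands in for the real scheme (illustration only, not LoRaWAN itself):

```python
# Sketch: why decryption alone cannot detect a wrong key. A toy
# repeating-key XOR stands in for LoRaWAN's AES-derived keystream;
# with the wrong key, decryption still "works" and yields bytes that
# parse fine, just with a wrong value.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = (2150).to_bytes(2, "big")        # e.g. 21.50 °C as centidegrees
ciphertext = xor_crypt(payload, b"\x5A\xC3")

good = xor_crypt(ciphertext, b"\x5A\xC3")
bad = xor_crypt(ciphertext, b"\x13\x37")   # wrong key: still 2 bytes,
                                           # still parses as an integer
assert int.from_bytes(good, "big") == 2150
assert int.from_bytes(bad, "big") != 2150  # wrong, but undetectable here
```

Nothing in the decrypted bytes themselves flags the second result as garbage; only the earlier MIC check ties the payload to the right keys.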
So, I’d say this is good enough to assume that if a message is routed to your application, then it was created by one of your devices, and has not been changed or mangled during the LoRa transmission, nor while being sent from the gateway to the network server over the internet. Even then, your application should always be ready to recognize wrong values, caused by broken sensors or plain sabotage. But such values would not be detected by any checksum anyway.
In short: no need for some application-level CRC.