No manual intervention, as in no human moderation? That seems impractical unless you can put 100% trust in an automated solution. Look at the app Yik Yak: they implemented OCR and machine-learning image recognition to block things like pictures of people's faces and inappropriate words/graphics, but that only covers maybe 25-50% of cases. Any user on your app who WANTS to exploit your system will find a way around it. That is why even the largest platforms (think YouTube) have reporting features that let members of the community flag bad content. If you built that in Bubble, I think it would be sufficient for your needs instead of involving OCR and ML (which you could do via API anyway).
Create a flag_count field on your content/data types. When people flag content, set up an automatic trigger that temporarily hides the content and queues it for a human to review once it crosses whatever threshold you specify (e.g. 5 flags/hr, 10 flags total, or just 1 flag, period).
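Since Bubble workflows are visual rather than code, here is a minimal Python sketch of the same threshold logic just to make it concrete. The thresholds, field names, and `flag` function are all hypothetical, mirroring the example numbers above (5 flags in an hour or 10 total):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

# Hypothetical thresholds, matching the example above.
FLAGS_PER_HOUR_LIMIT = 5
TOTAL_FLAGS_LIMIT = 10

@dataclass
class ContentItem:
    flag_times: List[datetime] = field(default_factory=list)
    hidden: bool = False  # hidden pending human review

def flag(item: ContentItem, now: datetime) -> None:
    """Record one flag; auto-hide the item if it crosses a threshold."""
    item.flag_times.append(now)
    recent = [t for t in item.flag_times if now - t <= timedelta(hours=1)]
    if len(recent) >= FLAGS_PER_HOUR_LIMIT or len(item.flag_times) >= TOTAL_FLAGS_LIMIT:
        item.hidden = True  # queue for a human moderator to approve or delete

# Usage: five flags within an hour trips the auto-hide.
item = ContentItem()
start = datetime(2024, 1, 1, 12, 0)
for i in range(5):
    flag(item, start + timedelta(minutes=i))
print(item.hidden)  # True
```

In Bubble you would express the same thing as a backend workflow triggered on the flag action, with a condition comparing flag_count against your threshold and an action that sets a "hidden" yes/no field.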
I agree this is an important discussion, but not one that I think Bubble should be expected to solve with a built-in feature. Some degree of human moderation should be expected; nothing lasts (especially a social image app) when you let the ship roam free without intervention.