I made a chat filter for the iOS requirements, which I have a separate question about, but first: I built the filter by creating a bunch of if conditions that run when a message is sent. When a message is sent to create a communication, if the input contains a certain word, it triggers an action. It works well. The only problem is that it has made the app a little laggier when you chat. I was wondering if anybody has a suggestion for how to make it faster?
As far as I know, there's no way to do a filter comparison between two different collections, right?
Second, for anybody who has already gotten approved through the iOS App Store review: it says that you need to filter bad words. I'm wondering what exactly the list of bad words is. There are certain words I'm definitely putting on there, but as for words like s*** and f***, the app is for adults and I don't mind certain curse words. Does anybody have any experience with this?
We have seen similar approval delays for several of our clients at Impero IT Services when submitting apps built on no-code or low-code platforms like Adalo.
Here is what might help:
Content Flags to Watch Out For
User-generated content
Links to external websites
Placeholder or test content
Non-functional buttons or pages
We’ve had an app get stuck in review for over a week just because a “Contact Us” page didn’t function as expected on one iOS device.
App Review Notes
When submitting the app for review, always use the Notes to Reviewer section to explain:
What the app does
Why it needs access to specific features (location, camera, etc.)
Any dummy accounts they can use to log in
This small addition has helped us speed up approvals, especially when dealing with more complex app flows.
Custom Branding or Template-Based Look?
Apple can sometimes flag apps that look too similar to templates (which is often the case with no-code builders unless you customize heavily). Try tweaking layout, colors & copy so it feels more unique.
What Worked for Us
A client’s Adalo app was stuck for 6+ days. We followed up through App Store Connect → Contact App Review, explained the app’s use case and urgency & got a response within 24 hours. So do not hesitate to politely nudge them if the wait crosses 5 business days.
It works, and I already did the blocking, etc. Everything links as it should according to my testing. I was just curious whether anyone else has built a less laggy chat filter, and whether we need to filter all curse words?
The “lagginess” is expected, since the text block is being searched for each of the blocked phrases, and that takes time.
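If you ever move the check out of Adalo actions and into code (a custom component or an external endpoint), the usual way to cut that cost is to combine all the blocked phrases into one pattern so the message is scanned once instead of once per phrase. A minimal sketch in TypeScript, with a placeholder phrase list:

```ts
// Sketch only: combine the blocked phrases into a single pattern so the
// message is scanned once, instead of running one "contains" check per phrase.
// The phrase list below is just a placeholder.
const blockedPhrases = ["badword1", "badword2", "badword3"];

// Escape regex metacharacters, then join everything into one alternation.
const escaped = blockedPhrases.map(p =>
  p.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")
);
const blockedPattern = new RegExp(`\\b(?:${escaped.join("|")})\\b`, "i");

function containsBlockedPhrase(message: string): boolean {
  // One pass over the message, regardless of how many phrases are listed.
  return blockedPattern.test(message);
}

console.log(containsBlockedPhrase("this has badword2 in it")); // true
console.log(containsBlockedPhrase("a perfectly clean message")); // false
```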
I did not know that iOS requires “expletive” blocking in chat. I am building a tutoring app with a student-tutor chat feature, and I doubt it will be expletive-ridden. I need to check the Apple docs.
Yes, we built a peer-support chat app where moderation was a key feature. We:
Used a local keyword matcher built with a Trie + regex on mobile (see the sketch after this list)
Applied a second layer on Firebase Functions to validate and log messages
Allowed admins to add/remove words in real time
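A stripped-down version of the local matcher idea, not our exact code and with placeholder words, just to show the shape of it:

```ts
// Sketch of a Trie-based keyword matcher (placeholder words, not production code).
// The message is walked once; the first blocked word found stops the search.
class TrieNode {
  children = new Map<string, TrieNode>();
  isEnd = false;
}

class KeywordMatcher {
  private root = new TrieNode();

  add(word: string): void {
    let node = this.root;
    for (const ch of word.toLowerCase()) {
      if (!node.children.has(ch)) node.children.set(ch, new TrieNode());
      node = node.children.get(ch)!;
    }
    node.isEnd = true;
  }

  // True if any blocked word appears anywhere in the text.
  matches(text: string): boolean {
    const lower = text.toLowerCase();
    for (let start = 0; start < lower.length; start++) {
      let node = this.root;
      for (let i = start; i < lower.length; i++) {
        const next = node.children.get(lower[i]);
        if (!next) break;
        node = next;
        if (node.isEnd) return true;
      }
    }
    return false;
  }
}

const matcher = new KeywordMatcher();
["badword", "anotherbadword"].forEach(w => matcher.add(w));
console.log(matcher.matches("contains a BadWord here")); // true
```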
Not always. Apple’s guidelines (and Google Play’s) focus more on offensive, discriminatory, or explicit language if the app is accessible to younger audiences.
We typically:
Start with a core blacklist (swear words, hate speech, slurs)
Allow clients to customize or extend the list based on audience or feedback
Use asterisk-masking (Y***, Z***), which is often acceptable for light profanity unless the app is rated 4+ or for kids (a minimal masking helper is sketched below)
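The masking itself is simple; something like this (placeholder words, just to illustrate):

```ts
// Sketch: mask blocked words instead of rejecting the whole message.
// Keeps the first letter and replaces the rest with asterisks.
function maskWord(word: string): string {
  return word.length <= 1 ? "*" : word[0] + "*".repeat(word.length - 1);
}

function maskMessage(message: string, blocked: string[]): string {
  let result = message;
  for (const word of blocked) {
    const escaped = word.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    result = result.replace(new RegExp(escaped, "gi"), m => maskWord(m));
  }
  return result;
}

console.log(maskMessage("this is a badword example", ["badword"]));
// "this is a b****** example"
```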
One option for filtering messages is to use an AI-based service to evaluate the content. It looks like this: when a user sends a message, you (the app) send it to the AI service and get back a verdict on whether the message contains harmful content; if all is fine → the message is processed further; if not → you (the app) block the message.
Most major AI providers offer this (Google, OpenAI, Azure, …). You can set this up from the Adalo app directly or use something like Make to process the query/response.
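For example, with OpenAI's moderation endpoint the check can look roughly like this; key handling, error policy, and how you call it from Adalo or Make are simplified here:

```ts
// Sketch: ask a moderation API whether a message is safe before delivering it.
// Uses OpenAI's /v1/moderations endpoint; error handling is simplified.
async function isMessageAllowed(message: string): Promise<boolean> {
  const response = await fetch("https://api.openai.com/v1/moderations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ input: message }),
  });

  if (!response.ok) {
    // Decide whether to fail open or closed; this sketch blocks on errors.
    return false;
  }

  const data = await response.json();
  // `flagged` is true when the service considers the content harmful.
  return !data.results[0].flagged;
}

// Usage: deliver the message only when the check passes.
isMessageAllowed("hello there").then(ok => {
  console.log(ok ? "deliver message" : "block message");
});
```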
However, in my personal opinion there is no absolute need for content filtering in chats. Since chats are usually person-to-person or closed-group, I believe a mechanism for reporting abusive content plus user blocking should be enough. There is no explicit mention of chats here: App Review Guidelines - Apple Developer; the guidelines are more focused on UGC.
In my experience, I've published several apps with chats and have had no questions from Apple.