I made a chat filter for the iOS requirements (I have a separate question about that below). I built the filter by creating a bunch of if conditions that run when a message is sent: when a message is sent to create a Communication, if that input contains a certain word, it triggers an action. It works well. The only problem is that it's now made the app a little laggier when you chat. I was wondering if anybody had a suggestion for how to make it faster?
As far as I know, there's no way to do a filter comparison between two different collections, right?
Second, for anybody who has already gotten approved through the iOS App Store system: it says you need to filter bad words. I'm wondering what exactly that list of bad words is. There are certain words I'm definitely putting on there, but as for words like s*** and f*** — the app is for adults, and I don't mind certain curse words. Does anybody have experience with this?
It works, and I already did the blocking etc.; everything links as it should according to my testing. I was just curious whether anyone else has built a less laggy chat filter, and whether we need to filter all curse words.
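For what it's worth, outside of Adalo's native actions the usual fix for this kind of slowdown is to replace one "contains word X" condition per blocked word with a single pass over the message, testing each token against a set of blocked words. A minimal Python sketch of that idea (the word list and function name are made up for illustration; in Adalo this would have to run behind a custom action or an external endpoint rather than in the app itself):

```python
import re

# Hypothetical blocked-word list. Membership tests on a set are O(1),
# so the cost no longer grows with the number of blocked words.
BLOCKED_WORDS = {"badword", "slur", "expletive"}

def contains_blocked_word(message: str) -> bool:
    """Return True if any word in the message is on the blocked list."""
    # Lowercase and split on runs of letters/apostrophes so punctuation
    # (e.g. "badword!") still matches.
    tokens = re.findall(r"[a-z']+", message.lower())
    return any(token in BLOCKED_WORDS for token in tokens)

print(contains_blocked_word("hello there"))   # False
print(contains_blocked_word("you badword!"))  # True
```

The point is one scan of the message instead of one scan per blocked word, which is where the lag in a long chain of if conditions tends to come from.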
The "lagginess" is expected, as the whole text block is being searched for the blocked phrases. This takes time.
I did not know that iOS requires "expletive" blocking in chat. I'm building a tutoring app with a student-tutor chat feature, and I doubt it will be expletive-ridden. I need to check the Apple docs.
One option for filtering messages is to use an AI-based service to evaluate the content. It looks like this: when a user sends a message, you (the app) send it to the AI service and get back a response saying whether the message contains harmful content; if all is fine → the message is processed further, if not → you (the app) block the message.
Most major AI providers offer this (Google, OpenAI, Azure, …). You can set this up from the Adalo app directly or use something like Make to process the query/response.
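A sketch of that flow in Python, with the provider call stubbed out (a real implementation would POST the text to your chosen provider's moderation endpoint — e.g. OpenAI's Moderations API returns a `flagged` boolean — and parse the response; the names here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    flagged: bool
    reason: str = ""

def moderate(text: str) -> ModerationResult:
    """Stub standing in for a real AI moderation call (OpenAI, Google,
    Azure, ...). Placeholder logic so the flow below is runnable."""
    if "harmful" in text.lower():
        return ModerationResult(flagged=True, reason="harmful content")
    return ModerationResult(flagged=False)

def handle_incoming_message(text: str) -> str:
    """The flow described above: moderate first, then process or block."""
    result = moderate(text)
    if result.flagged:
        return f"blocked: {result.reason}"
    return "delivered"

print(handle_incoming_message("hi, how are you?"))   # delivered
print(handle_incoming_message("something harmful"))  # blocked: harmful content
```

In an Adalo setup, `handle_incoming_message` is effectively the custom action or Make scenario sitting between "message sent" and "message saved to the collection".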
However, in my personal opinion there is no absolute need for content filtering in chats. Since they are usually person-to-person or closed-group, I believe a mechanism for reporting abusive content plus user blocking should be enough. There is no explicit mention of chats in the App Review Guidelines (Apple Developer); the guidelines are more focused on UGC.
In my experience I’ve published several apps with chats and have got no questions from Apple.