Performance issue - Limit on number of rows

I am struggling with a performance issue.
I have a relatively large dataset in a Google Sheet (40 columns x 8,000 rows) and I have tried multiple times to use it in Adalo with no success.

I have tried both using an External Collection (I am using Sheety to convert the GSheet into an API) and importing it as a local DB in Adalo, but the system keeps stalling.

Do you know if there is an official size limit in terms of number of rows? I couldn’t find any details in the documentation.

Thanks

Max

Hi Mdelvita,

Welcome to the Community! :partying_face:

8,000 rows is a lot of data :scream: ! + 40 columns!

I would suggest that you submit a support ticket & ask whether there is a limit.

If 8,000 rows are considered a lot, then I'll have a huge problem once I add my suppliers' product lists; I have 2 lists with ~18,000 & ~13,000 products.

The current service I'm using - GlideApps - is handling it without any performance issues.

I’d like to know if there is a theoretical maximum performance limit before I continue. If there is, then Adalo is not the tool for us.

Yes, same here. I built my first app in GlideApps and it handles the data beautifully, but unfortunately it doesn't allow me to publish to the App Stores.

But if Adalo cannot handle this volume of data, it will be very limited in terms of business applications.

Max

1 Like

Same reason I'm here: to publish to the App Store.

Hi @mdelvita, @Torrsy, @James_App_Maker,

I guess there are 3 questions mixed into one topic.

First is regarding the Sheety integration. Out of curiosity, I've tested this with a Google Sheet of 10K records (autogenerated) and 5 columns (ID, Name, Surname, City, Phone). I successfully integrated it with Sheety and was able to add it as an external collection. See the screenshot of how I display the list with a filter by Name:

(screenshot: external collection list filtered by Name)
Sheety has very restrictive limits on request quantity on the free tier, so I didn't test it extensively. But the integration is possible, so maybe if you share more info about the problem you're facing, someone can give advice.
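
For reference, a rough sketch of what such a filtered request looks like outside Adalo, assuming the endpoint shape shared later in this thread; the `filter[<column>]` parameter is the same mechanism as the External Collection filter described below, and the response key mirroring the sheet name is Sheety's convention:

```python
# Rough sketch: query a Sheety endpoint with a server-side filter, so the
# client never downloads the whole sheet. The project id is a placeholder;
# "labelsAdalo" follows the endpoint shape shared in this thread.
import requests

URL = "https://api.sheety.co/<project-id>/sakeMaster/labelsAdalo"

# filter[<column>] does exact-match filtering; the property name must match
# Sheety's camelCased column name.
resp = requests.get(URL, params={"filter[name]": "Abe"}, timeout=30)
resp.raise_for_status()

# Sheety wraps the rows in a key named after the sheet.
rows = resp.json()["labelsAdalo"]
print(len(rows), "matching rows")
```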

As for the 2nd question, regarding CSV import to the Adalo DB - unfortunately, this is a known limitation. Usually it is not recommended to upload more than 3K records at a time. I think this may be related to API timeouts; for the same collection as above, when I uploaded it to Adalo, about 7.5K records went through before it stopped. So the general advice here is to break one CSV into several files, for example 1K records each, and upload them one by one (see the sketch below).
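
A minimal splitting script along those lines, assuming a standard CSV with a header row (file names here are illustrative):

```python
# Minimal sketch: split one large CSV into ~1K-row files that can be
# uploaded to Adalo one by one. Each part repeats the header row so it
# can be imported independently.
import csv

CHUNK_SIZE = 1_000

def write_part(index: int, header: list, rows: list) -> None:
    with open(f"products_part{index:02d}.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(header)
        writer.writerows(rows)

with open("products.csv", newline="", encoding="utf-8") as src:
    reader = csv.reader(src)
    header = next(reader)
    chunk, part = [], 1
    for row in reader:
        chunk.append(row)
        if len(chunk) == CHUNK_SIZE:
            write_part(part, header, chunk)
            chunk, part = [], part + 1
    if chunk:  # leftover rows (< CHUNK_SIZE)
        write_part(part, header, chunk)
```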

And for the general question regarding data handling: Adalo can handle large collections (the largest one I've seen is 100K records), but working with such collections requires some planning.
Based on my experience, I'd avoid displaying a collection of 10K records as an unfiltered list, and always try to "break down" the whole data set into smaller parts. For example, for the product lists mentioned by @Torrsy it could be a good idea to have them always filtered by category.
Also, for built-in collections there is a setting "load and display only visible items", which may help in showing large lists.

By the way, Adalo has introduced DB Indexing for larger apps, so this might also improve performance.

Best,
Victor

3 Likes

Thank you Victor.

Coming to your points:

Sheety
I am using a paid version of Sheety. I can see the payload in Sheety and the data is there.
I can also add a list in Adalo, but when I load the preview or publish the app, it just gets stuck and the data takes ages to load (we are talking minutes, with my Mac's memory usage skyrocketing). Basically not usable.
You can see the payload here (a smaller version with fewer columns, same problem): https://api.sheety.co/2f4ca30cb5507f16731bc97f9ba446e9/sakeMaster/labelsAdalo

I am based in Singapore, so I'm not sure if it's simply that the CDN is not optimized for this part of the world.

I'll definitely try to break the list down into smaller chunks and import them. This should technically improve performance, as I assume the app would load the DB into the cache. I will keep you updated.

Max

Hi @mdelvita,

Ok, got your points.
In my case, I explicitly put a filter on the External Collection list (filter[Name] equal to Input): as there is a 100-row limit in the free version, I decided not to exceed it with a broad request. So that's why it worked quickly :slight_smile:

So most probably you're facing an issue where the Adalo previewer tries to download all 8K records. Maybe it is the API speed + location + data volume that causes these delays.
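
One way to check that hypothesis independently of Adalo is to time a single unfiltered GET against the endpoint shared above; a rough sketch:

```python
# Quick check of whether the raw transfer is the bottleneck, independent
# of Adalo: time one unfiltered GET against the endpoint from this thread.
import time
import requests

URL = "https://api.sheety.co/2f4ca30cb5507f16731bc97f9ba446e9/sakeMaster/labelsAdalo"

start = time.perf_counter()
resp = requests.get(URL, timeout=120)
elapsed = time.perf_counter() - start
resp.raise_for_status()

print(f"{elapsed:.1f}s for {len(resp.content) / 1024:.0f} KiB")
# If this alone takes a long time, the previewer's delay is at least partly
# download time (API speed + location + data volume) rather than rendering.
```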

As a side note - when I experimented with this 10K-record collection, I uploaded it to Adalo once. When I tried to display the full list, it took approx. 30 secs to load; it was displayed, but then the browser tab just hung. From this I concluded that it is not a good idea to try to display all the records of such a big collection at once :slight_smile:

I think the best strategy would be to avoid displaying the whole list at once, and instead always apply some filters.

Best,
Victor

Hi,
Just to point out that adding columns affects DB performance more than adding rows does. The more columns you add, the less room there is per page (cache, memory) to query and display rows, due to indexing and row size.

Can you run some tests reducing the columns but keeping the same number of rows? 10K rows with 5 columns vs 10K rows with 40 columns should perform differently; see the sketch below for generating matching test files.
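
A small generator for that comparison; all column names and values here are synthetic:

```python
# Generate two test CSVs with the same row count but different widths,
# to compare import/display performance as suggested above.
import csv
import random
import string

def random_word(n: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=n))

def write_csv(path: str, rows: int, cols: int) -> None:
    header = [f"col{i}" for i in range(cols)]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for _ in range(rows):
            writer.writerow([random_word() for _ in header])

write_csv("test_10k_5cols.csv", rows=10_000, cols=5)
write_csv("test_10k_40cols.csv", rows=10_000, cols=40)
```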

1 Like

With an internal collection (not GSheets) I have a collection of 48,000 records (towns in the UK), but I don't display it without filtering. If I try to display the whole list, it freezes the same as you experienced. With pre-filtering by county/region it performs fine, pretty responsive. So the overall number of records doesn't seem to be the issue for me, but I need to filter it before displaying. This filtered list also responds pretty well to instant text search criteria.

I did what Victor suggested with CSV import and broke it down into manageable chunks for my initial upload.

1 Like

Hi JL, as you can see from my example, I have tried a version with 8,000 rows but only 5 columns:
https://api.sheety.co/2f4ca30cb5507f16731bc97f9ba446e9/sakeMaster/labelsAdalo

Unfortunately, same results.

Max

I also tried breaking it down and was able to import 5,000 rows into the local DB, but unfortunately I still had the same performance issues when I tried to display the list.

I understand what you mean by filtering, and I could organise the data by category, but ultimately I need to be able to run a search on the entire dataset. This is due to the nature of the data: users would not want to "navigate" the dataset starting from a category.

I guess every use case is different from a user-experience point of view.

Max

Hi @mdelvita,

You have an interesting case. As I understand it, you have a list of Sakes, and you'd like your users to search by name/brewery?

I've made a POC based on your data; see the video here: adalo experiments - sheety filtration - YouTube
For the minimum 3-character check, I've used @Michael's component.

But, as I imagine the user flow, Sheety is not suitable here. The main reason: it can't filter the data on a "Contains" basis. I assume the user would enter "Abe" into the search field, and the list should then display all Sakes which contain "Abe" in the name. I didn't find a way to do this with Sheety's API.

I'd recommend having a look at Airtable for this case. Its API returns 100 rows by default, so the list will load quickly. You can also do "fuzzy" filtering, like Search('Value', {Field Name}) - see here: Filtering Airtable Lists - Adalo Resources.
As a result, your users will get a filtered list of Sakes based on the name (a sketch of this request follows below).
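
For illustration, a rough sketch of that "contains" search against the Airtable API; the base id, table name, {Name} field, and token below are placeholders, not values from this thread:

```python
# Rough sketch of a "contains" search via Airtable's filterByFormula.
import requests

BASE_ID = "appXXXXXXXXXXXXXX"   # placeholder
TABLE = "Sakes"                 # placeholder table name
TOKEN = "your_airtable_token"   # placeholder

query = "abe"
# SEARCH(needle, haystack) returns a (truthy) position when found; wrapping
# the field in LOWER() makes the match case-insensitive for a lowercase query.
formula = f'SEARCH("{query}", LOWER({{Name}}))'

resp = requests.get(
    f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"filterByFormula": formula, "pageSize": 100},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["fields"].get("Name"))
```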

Just my 2 cents )

Best regards, Victor.

2 Likes

Adding onto @Victor: you can see how to import Google Sheets data into Airtable.

1 Like

Awesome Victor. Nice to see a video using our Input Validator component!

1 Like