-
I'm just starting to get into the realm of reactive / SPA web coding, and I've been playing with Svelte the past couple of weeks. I've been thinking about AceBase for a while after learning about its features, and finally decided to give it a whirl because I had this nagging feeling Svelte + AceBase would do exactly what I've been thinking 'reactive' coding should be. After banging on it the past couple of days, I have a test system that's fully reactive end to end: make a change to the db on the server, and it auto-propagates to all the clients and the respective bound values and GUIs; make a change to a field on the client, and it propagates out to the server, without writing any db code. Insanely cool! I'm expecting to run into limitations with anything beyond simple queries/objects, and I still have a lot to learn with AceBase, so we'll see.

But before I keep going down this path, I thought I'd ask if there are any potential issues with my design/implementation. Basically I have a server and a browser client (+ cache), and I'm letting those handle all the data syncing, so it doesn't disrupt the user and keeps my side very, very simple. With Svelte, I have a store holding a proxy for every single record that needs syncing; I imagine a power user could have up to 2000 or so various objects. I'm then tracking mutation events for every proxy. Basic tests with 1000 objects seem to work fine so far, but I'm a bit worried there might be an issue with having that many proxies open at a time, and that a live deployment may run into unexpected bandwidth usage / latency (I'm not sure how the proxy and syncing work under the hood). Thoughts?
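For anyone curious what per-record mutation tracking looks like underneath, here's a minimal sketch using a plain JS `Proxy` (independent of AceBase, whose live data proxies do this plus the syncing for you; the `onMutate` callback name is just for illustration):

```javascript
// Minimal sketch of mutation tracking with a plain JS Proxy.
// This only illustrates the idea a sync layer hooks into;
// `onMutate` is a made-up callback name, not an AceBase API.
function trackMutations(target, onMutate) {
  return new Proxy(target, {
    set(obj, prop, value) {
      const previous = obj[prop];
      obj[prop] = value;
      onMutate({ prop, previous, value }); // report every write
      return true;
    },
    deleteProperty(obj, prop) {
      const previous = obj[prop];
      delete obj[prop];
      onMutate({ prop, previous, value: undefined });
      return true;
    }
  });
}

// Usage: every write is observable, which is what syncing builds on.
const events = [];
const user = trackMutations({ name: 'Ada' }, e => events.push(e));
user.name = 'Grace';
console.log(events.length); // 1
console.log(events[0].previous, '->', events[0].value); // Ada -> Grace
```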
-
Sounds cool! I don't know about your particular use case, but do you really need thousands of proxies? If you want to monitor and sync an entire collection of thousands of records, you could just use a single proxy on the parent collection. The only hard limit for live data proxies I can think of is memory: if you have too much data in the proxy, you'll run out of it.

Regarding bandwidth, that depends on your configuration. If you are using a local cache db and the server has transaction logging enabled, a live data proxy will download all data from the server once, cache it, and then only transfer object mutations back and forth from then on (unless the last sync was too long ago, in which case it will fetch fresh data again after sending any local mutations made offline). Without transaction logging enabled on the server, the proxy will sync local (offline) changes and retrieve fresh data from the server every time it is created. You won't have to worry about latency: updates are only as big as the actual mutations (previous/new value pairs for changed properties), and more proxies won't result in much higher latency.
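To make that setup concrete, here's a hedged sketch of a single collection proxy with a local cache db. Option names are taken from the AceBase / acebase-client / acebase-server READMEs as I understand them, so double-check them against your installed versions; the host/port/db names are placeholders:

```javascript
// Sketch only — verify option names against your acebase versions.
const { AceBase } = require('acebase');
const { AceBaseClient } = require('acebase-client');

// A local cache db, so the live proxy only transfers mutations
// after the initial download.
const cacheDb = new AceBase('mycache');
const db = new AceBaseClient({
  host: 'myserver.example', port: 443, https: true, dbname: 'mydb',
  cache: { db: cacheDb }
});

async function attachUsers() {
  await db.ready();
  // One proxy on the parent collection instead of thousands
  // of per-record proxies:
  const proxy = await db.ref('users').proxy();
  const users = proxy.value; // mutate this object; changes sync both ways
  return users;
}

// Server side, transaction logging enables the mutations-only sync path.
// (acebase-server option, believed to be `transactions: { log: true }`):
// const { AceBaseServer } = require('acebase-server');
// const server = new AceBaseServer('mydb', {
//   host: 'localhost', port: 5757,
//   transactions: { log: true, maxAge: 30 } // keep ~30 days of logs
// });
```

This is a connection/configuration fragment, so it won't do anything useful without a running AceBase server to point it at.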
-
Ahh yes, that was me not understanding how things work in AceBase yet, heh. When I realized we can proxy an entire collection, simply tying a Svelte store to a proxied array was a very clean solution. Unfortunately any kind of sorting modifies the entire array, so that was inefficient: Svelte stores like arrays much more than a collection of JSON objects, while AceBase works better with collections. So I tried a few things and was able to proxy a collection once, then drop each object into the Svelte store and manipulate it from there. That seemed to work great! I was able to manipulate the array without triggering any mutations, but any change to an individual object still propagates out. Next step is to try more complex queries and joins, so we'll see how that goes.

Yes, thanks for the tip about enabling transaction logging, you just saved me a few hours. That was EXACTLY how I envisioned it working! AceBase has been pretty awesome, great work man. There's definitely a BaaS business here; I imagine what's possible here to be the next evolution. Real end-to-end reactivity and being able to completely abstract away the database is pretty wicked.
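The trick described above — keeping order in a local array while the proxied collection stays untouched — can be sketched in plain JS, leaving the AceBase specifics out:

```javascript
// Sketch of the approach above: the proxied collection object stays
// untouched, and the Svelte store holds a separate array of references
// to its records. Sorting reorders references only, so no mutation
// events fire on the proxy; editing a record object still goes
// through the proxy as usual.
function toSortedArray(collection, compare) {
  // Object.values copies references, not the records themselves.
  return Object.values(collection).sort(compare);
}

const collection = {
  a1: { name: 'Zoe', score: 10 },
  a2: { name: 'Ada', score: 30 }
};

const arr = toSortedArray(collection, (x, y) => x.name.localeCompare(y.name));
console.log(arr.map(u => u.name)); // [ 'Ada', 'Zoe' ] — order lives in the array
console.log(arr[0] === collection.a2); // true — same record objects
```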