Data Access Layer #161
aah's goals for the Data Access Layer are as follows:
I will do my analysis first, then plan out the implementation.
The Data Access scope is vast, so I'm going to go step by step.
@blagodus Thank you for your inputs, I will surely look into it. It seems my goals didn't come across properly in the initial write-up; I have just updated it.
To answer your question: I have not yet decided whether to build a new package or integrate an existing one. I will be doing my analysis in line with the goals of this implementation.
Once the analysis is done, I will publish my direction on this thread.
You might have a look at the awesome CockroachDB. It is written in Go, incredibly fast, made for scalability, and it can be connected through postgres adapters. Its JSON types make it interesting for NoSQL users too.
The database layer is a big topic in my eyes. Most frameworks struggle with that part, or don't even touch it.
Multiple Connection Handling
Since write operations put the heaviest load on any database, a common solution for high-traffic sites is to have a pure read connection and a read/write connection to two different servers that handle the synchronization on their own.
Permanent and Temporary Connections
The number of DB connections is usually limited, so a permanent connection seems great, but not in every case. Developers should have the freedom to choose what they need.
I really like having a structured model for each DB table I use. Projects need a lot of tables, and you quickly get bored creating them by hand. The helper approaches I know of are generation through command-line scripts, auto-migration as in gorm, or extraction from the database's "SHOW TABLE" output.
Topic based Migrations
When it comes to migrations, the most common solution is a time-based one with up/down scripts. In my opinion this is not really practical: when a team develops a bigger project, not everyone is focused on the same tables, and soon there are a lot of ups and downs you have to run just to revert a single change. A table-based versioning that reflects the table relations would be nicer.
Multiple Query Cache
While most databases have a query cache, table joins are sometimes not the most performant solution, so you stitch your data needs together from several queries. Putting that combined result in a cache can speed up the app immensely. So if it comes to a query builder, some kind of transaction-like block whose aim is not data integrity but caching of the output would be nice to have.
That's not a complete list, just what came to my mind at the moment.
I have updated the issue with the plan; I pulled it out of the initial write-up and am adding it here for reference:
Following is the initial legwork on RDBMS findings:
I have done my homework (read the documentation and source code) on the many data access layer libraries that exist today. It turns out the following are viable candidates for integration as aah's data access layer.
Over the past week I researched this topic further, as I really need a better solution than what is out there. My focus is a SQL database (either CockroachDB with the postgres driver or Percona Server with the MySQL driver), while key stores like Redis (in memory) or BoltDB are still in mind for performance-heavy uses.
So here are some more thoughts on the data access layer that I came across:
Transactions and prepared statements can be a pitfall
Row identifiers and unique IDs
Controller, model and data access layer dependencies