(it's a little premature to file this issue now, considering the backend branch isn't even in PR yet, much less merged, but since I'm fairly sure I'm going down the ORM route for now, I figure I should write this down before I lose the thread)
Per SQLAlchemy's documentation, ORM operations are slow compared to SQLAlchemy's Core Expression Language (or executing the SQL directly), so when it comes time to do performance tuning (0.9 milestone), this might be an avenue to pursue. Note that I can still define my tables using the "declarative" idioms--the underlying Core tables remain accessible via the `__table__` attribute.
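A minimal sketch of what that mixed approach could look like, assuming SQLAlchemy 1.4+ (the model and column names here are made up for illustration, not taken from the backend branch):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# Hypothetical declarative model standing in for whatever the backend defines.
class LogEntry(Base):
    __tablename__ = "log_entries"
    id = Column(Integer, primary_key=True)
    message = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

# The declarative class still exposes its underlying Core Table:
log_table = LogEntry.__table__

# Core-level bulk insert: one executemany, no ORM unit-of-work bookkeeping.
with engine.begin() as conn:
    conn.execute(
        log_table.insert(),
        [{"message": f"line {i}"} for i in range(100_000)],
    )
```

The point being that switching the hot paths to Core later wouldn't require abandoning the declarative model definitions.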
On the other hand (and the reason I'm not just starting off writing the backend in Core), 12s for 100k rows doesn't actually sound that bad--at that rate a million rows takes roughly two minutes, and if I could parse a million logs in a couple of minutes, that would be an enormous improvement over what I have now. Basically, even doing things the slow ORM way, there's a good chance this won't be a significant bottleneck.
Bottom line: this is something to keep track of, but it's 50/50 that this gets closed as "won't fix".