So this is mostly from a discussion perspective. I'm not very experienced with C++, but I've been thinking about the guideline on keeping scopes limited.
I'm mostly concerned that we might spend too much time initializing objects. Suppose I have a large object (say, an image) that I want to run an operation on to compute some value, and that I run several operations per picture (different kinds of filters, and so on), each operation needing its own copy of the image to work on. Further, all of this happens several times a second, since the images come from a video stream.
I would have some function treat_frame() handle each frame and call a separate function for each operation. Wouldn't limiting the scope of the working objects (making them local to each function) invoke constructors and destructors for these quite large objects several times every second? From my perspective it seems wiser to allocate them once, outside the loop, and reuse the same memory over and over, something like the sketch below.
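To make that concrete, here is a minimal sketch of the reuse idea I mean. The names (`Image`, `treat_frame`) and the buffer layout are all illustrative assumptions on my part, not from any real codebase:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical image type; just a flat pixel buffer for illustration.
struct Image {
    std::vector<unsigned char> pixels;
    explicit Image(std::size_t size) : pixels(size) {}
};

// The scratch buffer is allocated once, outside the per-frame loop,
// so each frame reuses the same memory instead of reallocating.
void treat_frame(const Image& frame, Image& scratch) {
    scratch.pixels = frame.pixels;  // copy into preallocated storage
    // run each filter/operation on scratch here
}

int main() {
    constexpr std::size_t kFrameSize = 1920 * 1080 * 3;
    Image scratch(kFrameSize);      // one allocation up front
    // for each incoming frame f: treat_frame(f, scratch);
}
```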
I'm not sure how far this goes, though. I did something like the above in a university project, and profiling showed that a lot of time was spent in constructors and allocators. I'm not sure how it would play out with something like the example code below. Would the constructor get invoked on every iteration?
```cpp
// Good Idea
for (int i = 0; i < 15; ++i)
{
    MyObject obj(i);
    // do something with obj
}
```
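For what it's worth, my understanding is that yes, as written, `obj` is constructed and destructed on every iteration. Below is a minimal sketch contrasting that with a hoisted object; the `MyObject` definition and its `reset()` helper are made up for illustration, not taken from the guidelines:

```cpp
#include <vector>

// Minimal stand-in for the MyObject in the question.
struct MyObject {
    std::vector<int> data;
    explicit MyObject(int i) : data(1024, i) {}  // allocates each time
    void reset(int i) { data.assign(1024, i); }  // reuses existing capacity
};

int main() {
    // As in the snippet above: constructor and destructor run once
    // per iteration, allocating and freeing storage each time.
    for (int i = 0; i < 15; ++i) {
        MyObject obj(i);
        // do something with obj
    }

    // Hoisted alternative: one construction, storage reused.
    MyObject obj(0);
    for (int i = 0; i < 15; ++i) {
        obj.reset(i);
        // do something with obj
    }
}
```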