Infochimps merge #129

Closed
wants to merge 93 commits into
from

Conversation

Marsup commented Jul 25, 2013

As discussed in #123, here is a full merge from infochimps-labs/cube@master to the current master.

Considering the size of the merge, I hope to involve more people in the review so that we don't miss anything critical, even though the tests continue to pass (minus the one that was already failing).

I especially hope people from @infochimps-labs can give their feedback, since they know their codebase much better than I do.

Marsup and others added some commits Aug 10, 2012

@Marsup Marsup Upgrade mongodb driver and use native BSON parser af74865
Philip (flip) Kromer improved logging and meta-cubifying of progress reports.
metalog:
* metalog.info('foo', {...}) for progress recording -- sent to log by default
* metalog.minor('foo', {...}) for verbose recording -- sent nowhere by default
* metalog.event('foo', {...}) cubifies the event and sends it to info
  - metalog.event('foo', {...}, 'silent') to cubify but not log
* retargetable:
  - metalog.loggers.minor = metalog.log to log minor events
  - metalog.loggers.info  = metalog.silent to quash logging

also separated db test helpers into own file and added fixture ability.
e0d6061
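Pulling the bullets above together, the metalog interface amounts to something like this sketch (a reconstruction from the commit message, not the actual lib/cube/metalog.js; the putter hookup is assumed):

```javascript
// Hypothetical reconstruction of the metalog API described in the commit
// message above; the cubifying putter wiring is an assumption.
var metalog = {
  log:    function (name, hash) { console.log(name, JSON.stringify(hash || {})); },
  silent: function () {},                  // no-op target, used to quash a channel
  loggers: { info: null, minor: null },    // retargetable channels
  putter:  null,                           // set to a Cube putter to cubify events

  info:  function (name, hash) { this.loggers.info(name, hash); },
  minor: function (name, hash) { this.loggers.minor(name, hash); },
  event: function (name, hash, verbosity) {
    if (this.putter) this.putter({ type: name, time: new Date(), data: hash });
    if (verbosity !== 'silent') this.info(name, hash);  // 'silent' cubifies only
  }
};
metalog.loggers.info  = metalog.log;     // info -> log by default
metalog.loggers.minor = metalog.silent;  // minor -> nowhere by default
```

Retargeting is then just reassignment, e.g. `metalog.loggers.minor = metalog.log` to surface verbose events.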
Philip (flip) Kromer Added authentication: allow_all, read_only, or mongo_cookie
* authentication.authenticator(name) gives you the requested authenticator
  - server.js uses options['authenticator']
  - 'allow_all' is default in bin/collector-config.js etc.

* authenticator.check(request, auth_ok, auth_no)
  - calls auth_ok if authenticated, auth_no if rejected
  - staples an 'authorized' member to the request object:
    - eg we use 'request.authorized.admin' to govern board editing
    - in the mongo_cookie authenticator, it's the user record

* mongo_cookie authenticator compares bcrypted cookie to a stored hashed secret
  - you must set the cookie and store that db record in your host client; see
    'test/authenticator-test.js' for format. Rails+devise snippet available on request.
b2e9ece
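For reviewers skimming this commit, the authenticator contract it describes looks roughly like the following (a sketch of the allow_all and read_only cases with assumed bodies; the bcrypt-backed mongo_cookie variant is omitted):

```javascript
// Sketch of the authenticator contract from the commit message above; the
// real module lives in lib/cube/authentication.js, these bodies are guesses.
var authentication = {
  authenticators: {
    allow_all: {
      check: function (request, auth_ok, auth_no) {
        request.authorized = { admin: true };   // stapled onto the request
        auth_ok(request.authorized);
      }
    },
    read_only: {
      check: function (request, auth_ok, auth_no) {
        request.authorized = { admin: false };  // e.g. governs board editing
        auth_ok(request.authorized);
      }
    }
  },
  // server.js looks up options['authenticator'] by name
  authenticator: function (name) { return this.authenticators[name]; }
};
```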
Philip (flip) Kromer applied metalog to rest of code, added comments on major plot lines 8b57375
Philip (flip) Kromer Merge branch 'pr2_authentication'
* pr2_authentication:
  Added authentication: allow_all, read_only, or mongo_cookie
59ec553
Philip (flip) Kromer Log evaluator queries when in verbose mode 414d234
Philip (flip) Kromer re-adding visualizer 9889b33
Philip (flip) Kromer Save metrics with zero-counts back to the DB 29b5011
Philip (flip) Kromer Merge remote-tracking branch 'marsup/mongodb-native-parser' into working
* marsup/mongodb-native-parser:
  Upgrade mongodb driver and use native BSON parser

Conflicts:
	package.json
0e369cd
Philip (flip) Kromer vows no longer stalls trying to run test.js
moved test.js to test_helper.js, renamed it in use.
06ac46d
Philip (flip) Kromer Fixed the dancing heisen-fails in test/metric-test
Some mixture of these changes and the upgraded mongo npm package seem to have made the dancing sometimes-failures in test/metrics-test.js go away.
Untangled and commented the code, so if you are still seeing those bugs, this may be a better stepping-off point.
46166ab
Philip (flip) Kromer graceful shutdown of server and flushers efd3f51
Philip (flip) Kromer richer tests for server components; test helpers for setting up, star…
…ting server.

* test_helper.with_server -- starts the server, run tests once server starts, stops server when tests are done
* merged test/test_db.js into test/test_helper.js, documentated it good.
7435bff
Philip (flip) Kromer travis.yml e608edc
Philip (flip) Kromer added test 'script' to package.json, for make travis-ci happy f64cdb7
Philip (flip) Kromer trying to make travis-ci happy 262bfa3
Philip (flip) Kromer More tests around server, including UDP, HTTP and static file. ab04068
@hustonhoburg hustonhoburg Temporarily disable complex metric expressions 8ca2235
@hustonhoburg hustonhoburg Move configs into single config script e0083f1
@hustonhoburg hustonhoburg Add mongo server config to cube config 4770ac2
@hustonhoburg hustonhoburg Events and metrics collections are created from config file be66b98
@hustonhoburg hustonhoburg Configurable mongo connection 3645464
@hustonhoburg hustonhoburg Add horizons for calculations and invalidations when processing event…
…s and metrics
aa4fafd
@hustonhoburg hustonhoburg Cascade tiers down to 10 seconds 730ffe1
@hustonhoburg hustonhoburg Add warmer to refresh metrics e592724
@hustonhoburg hustonhoburg Remove parenthesis limitation 957222f
Philip (flip) Kromer use strict mode everywhere 402f68b
Philip (flip) Kromer doing scale experiments, so reverting the workaround-y stuff (no dist…
…inct, all tiers cascade to tensec, horizons)
740d26c
Philip (flip) Kromer Extracting metric and event code into objects (WIP) 4581841
Philip (flip) Kromer adding underscore lib 2635db0
Philip (flip) Kromer separated invalidation logic from putter (preparation for implement…
…ing calculator queue)
7881f54
Philip (flip) Kromer JSLinted files, groomed away the lint 3b79995
Philip (flip) Kromer DB shouldn't reconnect in test suite 865f340
Philip (flip) Kromer Reworking metric code -- objectified measurements into separate metho…
…ds. WIP
6578fe2
Philip (flip) Kromer Made lint shut up about 'export' keyword 50f759a
Philip (flip) Kromer added a 'spy' helper to metalog -- wraps a callback, shows the call a…
…nd args on the way back
b51881a
Philip (flip) Kromer Added mbox queue, to better schedule queries under load
* the Broker lets you defer tasks into the queue under a chosen mailbox name.
* the Broker has Workers that handle deferred jobs. Workers each handle a non-overlapping set of mailboxes.
* when a worker is ready, it fires off the task with lowest mbox name, attaching its own callback. The task is responsible for pinging back the supplied proxy callback
* The proxy callback will re-issue the call to all the registered on_complete callbacks.

Unfortunately, this doesn't yet solve the problem -- but it's a spike in the ground
84c47e4
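As a reading aid, the Broker/mailbox scheme described above can be sketched like this (the class and method names are guesses from the commit message, not the committed code):

```javascript
// Toy sketch of the mbox queue idea: tasks are deferred under mailbox names,
// and a ready worker fires the task with the lowest mbox name.
function Broker() {
  this.mailboxes = {};  // mbox name -> { tasks, on_complete }
}
Broker.prototype.defer = function (mbox, task, on_complete) {
  var box = this.mailboxes[mbox] ||
    (this.mailboxes[mbox] = { tasks: [], on_complete: [] });
  box.tasks.push(task);
  if (on_complete) box.on_complete.push(on_complete);
};
Broker.prototype.work_one = function () {
  var self  = this;
  var names = Object.keys(this.mailboxes)
    .filter(function (n) { return self.mailboxes[n].tasks.length > 0; })
    .sort();                           // lowest mbox name goes first
  if (!names.length) return false;
  var box  = this.mailboxes[names[0]],
      task = box.tasks.shift();
  task(function () {                   // proxy callback handed to the task
    var args = arguments;              // re-issued to all registered callbacks
    box.on_complete.forEach(function (cb) { cb.apply(null, args); });
  });
  return true;
};
```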
@hustonhoburg hustonhoburg Extract database interactions into separate file. Add config to allow…
… splitting metrics and event into separate databases
4151aab
Philip (flip) Kromer cleaned up broker and metrics in advance of merging new DB code f6967f5
Philip (flip) Kromer adding tracing and queuing (WIP) c3a5870
Philip (flip) Kromer Added a trace system -- tracks internal progress through whole call s…
…tack. Moving things over to @mbostock's async-queue
81086f0
Philip (flip) Kromer getting tests to run non-isolated f1c9715
@hustonhoburg hustonhoburg Misc bug fixes e2be0b1
@hustonhoburg hustonhoburg Fix stuff 1112467
@hustonhoburg hustonhoburg Add warmer test 5fe8844
@hustonhoburg hustonhoburg Update random streamer example to be real time a1b7107
@hustonhoburg hustonhoburg Stop random streamer example from batching f453426
@hustonhoburg hustonhoburg Update streamer, remove tracing from events f6cfbc8
@hustonhoburg hustonhoburg Events include their bins when they are saved. Tiers cascade down to …
…10 seconds
a12ec7f
@hustonhoburg hustonhoburg Add array and object types to random (cromulator) example 3e6bb52
@hustonhoburg hustonhoburg Modelize event, metric, and measurement objects c629714
@hustonhoburg hustonhoburg Add semi pyramidal calculations for non-pyramidal reduces 8e8971e
@hustonhoburg hustonhoburg Prepend numeric event data keys with 'k'
Conflicts:

	lib/cube/models.js
bef5e48
@hustonhoburg hustonhoburg Add horizons back in, complete with tests
Conflicts:

	lib/cube/db.js
	lib/cube/event.js
	lib/cube/metric.js
36c78fd
@hustonhoburg hustonhoburg Fix metrics bugs. 48b0060
@hustonhoburg hustonhoburg Send parse errors back to user for malformed queries 9e77305
@hustonhoburg hustonhoburg Fix random-streamer example model references 527d7e0
@hustonhoburg hustonhoburg Remove commented out code sections 1f6f78d
@hustonhoburg hustonhoburg Modelize invalidator 438efa3
@hustonhoburg hustonhoburg Fix error when running from_wire on row with no vs value b6d3524
@hustonhoburg hustonhoburg Add "jake" task to remove metrics older than the expiration horizon 8208012
@hustonhoburg hustonhoburg Only store intermediate metric values for nonpyramidal reduces 057e392
@hustonhoburg hustonhoburg Oops, metrics weren't saving/cascading correctly 76873e1
@hustonhoburg hustonhoburg Merge branch 'production' c5392aa
@hustonhoburg hustonhoburg Merge branch 'working'
Conflicts:
	lib/cube/metric.js
347600e
@hustonhoburg hustonhoburg Events only need validated once. Event putter and event.save were bot…
…h validating. Now only validate on save.
59ec91d
@hustonhoburg hustonhoburg Add grouping to metric queries. b030102
@hustonhoburg hustonhoburg Board URLs no longer include organization 9218417
@hustonhoburg hustonhoburg Fix bug for warmer for empty boards b8bef17
@Marsup Marsup Merge remote-tracking branch 'infochimps/master' 3628932
Collaborator

RandomEtc commented Jul 29, 2013

Just finished a first pass review of this. Mainly notes to myself but feel free to clarify anything you can:

Things to consider tidying up:

  • queue-async should be in package.json and not a submodule
  • https://secure.travis-ci.org/infochimps-labs/cube.png shouldn't be failing if we're keeping Travis stuff
  • do we need a new travis account for square/cube? https://secure.travis-ci.org/square/cube.png should be green
  • Jakefile and related horizon concept need docs in wiki
  • I think most (all?) of TODO.md is on the wiki - can we remove it?
  • config/cube.js refers to dashpot_development - what's "dashpot"?
  • config/cube "import" stuff seems overly complicated?

New features (to be documented where possible):

  • use strict :)
  • horizons concept
  • warmer
  • visualizer
  • authenticator
  • broker
  • metalogging
  • separate database for events
  • new examples (all seem good - could some of them be tests?)
  • new Db class
  • new Model class for events, metrics, measurements and invalidators
  • HTTP endpoint can now take regexp matches
  • metric deletion/refreshing now possible
  • tier binning (where's that used?)

My thoughts and bits I skimmed:

  • most substantive changes are to the events, metrics, measurements, invalidator models - need to look closer and understand the underlying Model class too - and the broker stuff I think
  • lib/cube/server.js needs a closer read too - quite heavily tied to new model of authentication, which would ideally be more optional
  • a lot of vars_with_underscores, but other cube things are camelCased
  • I think process.env.TZ might have been deprecated, we should look for another solution if so
  • I made some recent improvements to database.js to accept mongo-url in configs - looks like it's no longer used because of db.js, needs a closer look (and we should double-check other deleted require calls)

Next steps:

  • I'd welcome help reviewing/understanding/testing the new model classes
  • I'd like to understand/fix the failing test so we're off to a clean start
  • make sure we don't miss the mongo-url thing and other recent bugfixes
  • perhaps we can do some small merges for parts of this first - I think there are open pull requests for the metalogging and authentication stuff for example - but I am OK to push through on this thread too

Marsup commented Jul 30, 2013

Things to consider tidying up

  • Queue-async can indeed be added to dependencies; I don't know why they kept it as a submodule. I tested it successfully.
  • The Travis errors are probably because the tests don't check for errors on db.open; I can't say for sure. I don't see these errors in my environment, so it may be a MongoDB problem on Travis.
  • Yes, you will need to add your repo to Travis.
  • Sure
  • The wiki is not ideal for this IMHO; I'd rather have tagged issues for each TODO item and reference them from commits so that we can trace them.
  • I don't know what dashpot is either, other than a branch on their repo :)
  • Agreed, it can be much better. I love the cfg package for dealing with configuration, what do you think?

New features (to be documented where possible)

I'm going to need help documenting since I'm a bit busy working on our own merge, but sure, user-facing concepts should be documented, along with some bits of code here and there. I dug into the code but don't fully understand all the tiniest details.

My thoughts and bits I skimmed

  • As I understand it, roles and responsibilities are isolated in their respective models; so far I like it, it seems a bit DRYer
  • It kind of follows the middleware pattern you can find in some other modules; I think it's the right place to handle it, and the default (allow_all) doesn't add much to the connection-handling process
  • Some jslint/jshint rules should take care of that
  • Just tested in the REPL, it still works on dates. Where did you see that it was deprecated?
  • That should probably be backported; I didn't see it in the merge, sorry

Next steps

  • I like it so far, even though I don't understand everything, like the binning you mentioned; it seems to me that some properties are only useful for tests
  • That's critical; I can't spot the bug so far, but I will
  • Sure, have you noticed any others I might have missed?
  • The 2 other PRs do indeed overlap with this one, but I don't really see the benefit since neither introduces many (if any) conflicts; it's not much help for this merge

Marsup commented Aug 6, 2013

@RandomEtc I have a bit more time to work on this again since I've finished merging our own branch, that might be the subject of another pull request once we're done here :)

So far I have done:

  • queue-async is now an actual dependency
  • cfg is now used to deal with configuration; options are no longer passed all over the code
  • Backported the mongo-url setting

I'm going to work on getting the tests to pass, since that remains an issue.

Marsup commented Aug 6, 2013

OK, so that may have been a false alarm. Raising the timeout of the metric tests made them all pass; I knew the CPU was very busy, but I didn't think it would take over 20s to complete a test. Can you confirm everything's good on your side too?

Collaborator

RandomEtc commented Aug 6, 2013

I'll take a look - thanks!

At first glance I don't like the use of cfg - I prefer seeing the options passed around explicitly. We use Cube as a library here and do a lot of custom config using raw JS objects. Not sure if this is compatible.

Do we need node-gyp? Seems like submodules should be importing it if needed?

Owner

Marsup replied Aug 6, 2013

Indeed, I just forgot to remove it, will do tomorrow.

Marsup commented Aug 6, 2013

Actually we use it as a library as well. I missed some things in the cherry-picking process, like exporting the configuration. I'll complete it tomorrow.

Doing this allows you to do the same thing in your application (tuning every parameter, even with your own per-environment settings, which is a great benefit IMHO). The reason I removed all the options-passing is that I found errors in the code, and requiring the configuration from a central place is much safer and makes less noise in the function signatures.

Collaborator

RandomEtc commented Aug 6, 2013

I think there's a balance to be struck between using libraries (cfg and per-environment settings are good) and keeping things separate. For example, I don't think the test settings should be anywhere except in the tests.

Let me take a look at it - I don't like just critiquing without offering an alternative way - I have picked Cube up again for a side-project and should have some time this week. I'd love to be running this version for real rather than speculating about what works.

Marsup commented Aug 6, 2013

Well, that's just a habit, because I like having my settings in a single place, test being an environment like any other, but it would be just as easy to put the settings back in test_helper.js if you prefer.
I also usually put it there so that I don't have to worry about whether each test file loads the right settings.
I'm pushing an update without node-gyp, and with the test config in the old place.

Collaborator

RandomEtc commented Aug 6, 2013

I'm not in a rush to change the cfg stuff - thanks for keeping moving forward 👍

Marsup commented Aug 6, 2013

You're welcome :)

Just so you know, I modified the git history and forced the push to avoid useless commits, so the above ones are new even if GitHub doesn't think so ;)

I'm not sure I understand you: do you want to keep the old one, or the one with cfg?

Collaborator

RandomEtc commented Aug 6, 2013

I was hoping to minimize your effort! If you undid it for now, thank you. Definitely want to work on config at some point but I'd like to resist loading new features into this branch. I'd rather this be Cube 0.3.0 pretty much as-is, and then push for more cleanups for 0.4.x.

Also hoping to hear from @mrflip and @hustonhoburg - at least a little encouragement, guys? :)

Marsup commented Aug 6, 2013

Well, I didn't revert to the old config file, I just moved the test configuration. I had some bugs with the old way of dealing with options, though I can't remember exactly what they were; probably nothing I can't fix, but it felt right at the moment. Bad judgment call, maybe :)

For the record, I've been using the cfg version from my fork for 5 days now (as a lib); nothing has come up.

Do you want some time to think about it, or should I remove it altogether?

Collaborator

RandomEtc commented Aug 6, 2013

Let me look at it. Unless you're suddenly struck with a good idea :)

Marsup commented Aug 8, 2013

Moving on to the next problem: I'd really like some hints from @mrflip or @hustonhoburg to understand why the event bins are stored in MongoDB (the to_wire function). I can't find any query containing this field, and it is computed at runtime when the events are fetched anyway.

Marsup commented Aug 9, 2013

@mrflip You eliminated the use of bisect in infochimps-labs/cube@7881f54; do you think the problem it fixed no longer exists in your implementation?

Thanks so much for taking a stab at this. We've had less time than we'd anticipated to pursue this project.

Keep in mind, our modifications were written as a work in progress that likely could use a little cleaning up for a final public release.

If I recall correctly, storing "bins" was part of a possible optimization that never really came to fruition. The intent was to reduce the runtime computations and allow for fetching events by tier and time instead of using range selectors. I think we may have experimented with looping over each bin and fetching only its events as opposed to fetching whole time ranges for calculations.

I can't remember why we stopped using bisect. It looks like the original intent was to maintain queued invalidations tiers/times in sorted order. At first glance, I don't see a reason to do this though. The code is used by an "$in" mongo query, so I'm not sure why the order would matter unless it's an optimization for mongo. If that's not the case, it seems like an unnecessary calculation.

Marsup commented Aug 9, 2013

It's fairly stable for a WIP; I've been letting a slightly modified version run on my laptop for at least a week and it looks good, but of course it's not under a lot of pressure :)

For the bins, I see what you mean, but I'm not sure it would be a good thing. The timestamp is already indexed, so the query should be lightning fast. If I'm not missing anything, it will cost you additional space and an extra index without any added value.

I agree with you on the bisect: the $in doesn't care about order, so I don't see the point. The original commit containing the code was 234d6e6, with no indication whatsoever. Maybe @mbostock could help on this one?

Collaborator

RandomEtc commented Sep 8, 2013

Am attempting to rebase this against current master. Wish me luck :)

Collaborator

RandomEtc commented Sep 8, 2013

I went with a merge of square/cube master into Marsup/cube infochimps-merge, which was relatively easy. I also removed the (unused) broker and its test, and tidied up the usage of process.nextTick. This is pushed to square/cube infochimps-merge.

I am having trouble running the tests without vows -i again, which I'd like to get fixed. Let's try to find the last time the tests passed without -i – I assume they passed after f1c9715 at least?

I think we should remove the unused bins stuff that you pointed out. Is there anything else in here that seems half-done?

Biggest sticking point for me now, apart from the tests and general cleanup, is that I am also not a huge fan of the code style of anything that uses Model here. I'm trying to figure out if I can quickly rewrite it in a js style more like the original Cube code - no new, way less this, etc. I'm willing to skip it and press on but if it's just a mechanical transformation I might do it in the name of consistency. (Otherwise consistency will be very hard to maintain after the merge).

Marsup commented Sep 8, 2013

I'm happy that went well :)

I'm not sure having the -i is such an issue. Sure, the tests would be faster without it, but the tests' datasets might interfere with one another, which could create false positives as well as false negatives; personally I've never liked vows on this point. We could start at the commit you mentioned and git bisect until we find something.

Apart from the bins, I found some bugs that should probably be backported before any release is done, I'll try to make it happen this coming week.

I like the OOP approach they took; it's easier to reason about when there's a single place for each concern. Is it the implementation that bothers you, or the principle?

Collaborator

RandomEtc commented Sep 9, 2013

Cool, let's keep this thread moving, I think it will be worthwhile. I'll try to get back to you more quickly with my take on anything. Would be good to publish a 0.3.0 release candidate to npm and let a few other people try it out.

Let me know if you need a hand with the bug-fixing.

I certainly don't dislike oop, or the specific oop approach taken, but I dislike having two competing systems in one code-base. I'd prefer to keep Cube using mbostock-style (or Crockford-style) modules and classes if possible.

Collaborator

RandomEtc commented Sep 9, 2013

I'm OK with -i for tests too. I assume we can make that work with Travis.

Some other TODOs:

  • add -i to Makefile and package.json definition for tests
  • remove bins stuff
  • backport bug-fixes from @Marsup
  • consolidate test_db with database.js
  • remove TODO.md (and double-check that all the useful stuff is already on the wiki)
  • documentation for new features I listed above
  • get Travis set up for square/cube
  • publish release-candidate for testing

Anything else?

Marsup commented Sep 9, 2013

OK I'll see what you come up with :)

Here is a first batch of fixes; our branches have some major differences, so I hope I didn't miss any.
I'll detail a few that seem important:

  • c0b8972: I don't understand how this bug could make it so far into the project; I guess nobody ever tried to send more than one event in the same request. You might want to backport it to the stable branch right now.
  • ad9a0d3: I am working on authentication-enabled MongoDBs and this was not working with separated databases; it should now.
  • 137f278: this allows a pyramid to be built even when the result is that there's no value.
  • 44b16fd: group-based queries would just hang if you asked for an interval where there's absolutely nothing; now we provide answers for each time slot with a null group.
Collaborator

RandomEtc commented on c0b8972 Sep 9, 2013

These seem equivalent, was there a subtle issue? Does putter incorrectly use the second and third arguments passed by forEach?

Collaborator

mbostock replied Sep 9, 2013

The putter takes a callback as a second argument, and uses a noop callback if no callback is specified. So using array.forEach passes the current array index as the second argument, causing the putter to try to invoke the index (a number) as a callback, which fails. I think in earlier versions of the code, the callback was only invoked if there was an error, so this bug wasn’t noticed.
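A minimal repro of what is described here, with a stand-in putter rather than Cube's actual one:

```javascript
// Illustration of the forEach/putter bug (simplified putter, not Cube's).
function putter(event, callback) {
  callback = callback || function () {};  // noop default when none given
  // ... persist the event, then:
  callback(null);                          // now invoked even on success
}

var events = [{ type: 'test' }, { type: 'test' }];

// Broken: forEach calls putter(element, index, array). Index 0 is falsy, so
// the first event sneaks through with the noop, but index 1 is then invoked
// as a callback -> TypeError. With a single event, the bug never surfaces.
// events.forEach(putter);

// Fixed: forward only the event itself.
events.forEach(function (event) { putter(event); });
```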

Fixing stuff like this seems separate from the infochimps merge. It looks reasonable, but it might have unintended side-effects; this is the sort of fix I'd like to see a new test for, to make sure the behavior is properly understood.

Owner

Marsup replied Sep 9, 2013

Infochimps added the comparison to 0, which was right: it allowed the storage of .empty values from reducers. This change allows other reducers that don't have empty values to be stored as well, which seems in the continuity of their work.

Owner

Marsup replied Sep 9, 2013

Oh, and about the tests: they are all about testing Cube's API; none of them tests what Cube actually stores in the metrics cache, which is the issue here. Should we modify the test suite to check this?

The old check for metric.value || metric.value == 0 seems like it was designed to deliberately exclude undefined – can you explain what's going on here? Is it just an optimization to prevent recalculating empty metrics?

Harmless, but do you have a benchmark for this?

Again I don't see how this is related to the infochimps merge and would prefer to take these fixes separately.

Owner

Marsup replied Sep 9, 2013

Agreed, it was just a minor diff between our branches, so I thought it was worth including. No benchmark specific to Cube, but I worked on the exact same issue in lazy: concatenating strings resulted in a huge (and linear) performance loss.

(Sorry for continued negative comments)

I was hoping to keep this around - there's a lot of duplicated connection logic between tests and db.js. Also I would have liked to keep the comments intact. I'll see if I can fix it.

Owner

Marsup replied Sep 9, 2013

This logic is kept entirely in database.js, but your point about the comments is still valid, yes.

Can you clarify: (a) what was broken before and (b) how it's fixed now?

Owner

Marsup replied Sep 9, 2013

a) Make a request with a .group() where there's no data (or no group on the field): you will never get an answer.
b) It is fixed by "forging" metrics, just like we did without grouping, except that this metric has a null group.

The old behavior also had a side-effect: if one of your groups was an empty string or a 0, JavaScript considers it falsy, so it wouldn't even send you that group, which is a bug IMHO.
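The falsy-group side-effect is the classic JavaScript truthiness trap; a minimal illustration (hypothetical row shapes, not Cube's actual query code):

```javascript
// Hypothetical grouped rows; '' and 0 are legitimate group values.
var rows = [
  { group: 'us', value: 3 },
  { group: '',   value: 5 },   // empty-string group
  { group: 0,    value: 7 }    // numeric-zero group
];

// Broken: a truthiness test silently drops the '' and 0 groups.
var broken = rows.filter(function (row) { return row.group; });

// Fixed: test for presence explicitly; only null/undefined means "no group".
var fixed = rows.filter(function (row) { return row.group != null; });
```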

👍

I'm not seeing a diff here - anything substantial changed? Just formatting changes from the new compiler?

Did metric-expression.js change as well?

Owner

Marsup replied Sep 9, 2013

That's just the result of me using the Makefile, which rebuilt the same metric-expression.peg but with peg 0.7. It doesn't seem to have any impact on the result.

Collaborator

RandomEtc commented Sep 9, 2013

I didn't see your full-merge branch before, sorry. Are we up to date now?

I left some comments on the individual commits. Definitely progress, but I would like to stay focused on merging existing infochimps functionality before adding too many more fixes. I'd like to create a 0.3.0 branch from this soon and we can work on extra bug fixes and style enhancements there.

Marsup commented Sep 9, 2013

Yes, I've cherry-picked all the bug-related code (which seems debatable 😄) from the other branch.

I think it's worth discussing those because they seem important for a proper release, even though it's hard to qualify what is considered a bug and what is not. I included everything that kept Cube from working properly, or sometimes at all.

If you think some of them go beyond the goal, leave me the SHAs and I'll rewrite the history.

Collaborator

RandomEtc commented Sep 9, 2013

As with cfg, don't worry about undoing it. But it may need more massaging to make sure people understand the changes. Thanks again for the contributions.

Next for me is testing this in our staging environment at Square and seeing what shakes out.

Marsup commented Sep 9, 2013

I actually wondered a lot about cfg. We could probably make it work without it, as long as we keep a central point of configuration, but I'm not comfortable exposing the raw object. Want me to try another solution?

Collaborator

RandomEtc commented Sep 10, 2013

Regarding cfg my main goal would be to pass raw objects into Cube interfaces, so that people can use JSON or YAML or whatever they want for config. The cfg approach as it stands is hard to follow and relies on side-effects.

A good compromise might be to limit require('cfg') to things in the bin directory. At that point "configuration" stops and "instantiation" begins. This aligns with the goal that I'd like to see a more modular cube (for example, perhaps visualizer and warmer could be separate npm modules) and people should be free to configure those modules however they want.

Marsup commented Sep 10, 2013

Actually my plan is to have a raw object for default settings, and to use lodash's merge function when the server is called to modify that shared object.
Pros:

  • Same API as before, though the configuration keys wouldn't be the same if we keep this tree-like organization
  • You can partially override the default options without repeating everything; server options would be mandatory

Cons:

  • It would be wiser to replace underscore with lodash to avoid a duplicate dependency
  • You can't run 2 servers in the same node process without risking mixing up their options

What do you think?

Another similar solution would be to keep underscore, flatten the whole options object, and use extend instead of merge. That seems a bit messy for readability and key naming, but it's doable.
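To make the proposal concrete, here's a sketch of the shared-defaults idea (with a hand-rolled deep merge standing in for lodash's, so the example is self-contained; the config keys are made up, not Cube's actual tree):

```javascript
// Shared default settings as a raw object; callers deep-merge overrides in.
// The keys here are illustrative only.
var config = {
  mongodb:  { host: 'localhost', port: 27017, name: 'cube' },
  horizons: { calculation: 1000 * 60 * 60 * 2 }
};

// Stand-in for lodash's merge: recursively copy source keys into target.
function merge(target, source) {
  Object.keys(source).forEach(function (key) {
    if (source[key] && typeof source[key] === 'object' &&
        target[key] && typeof target[key] === 'object') {
      merge(target[key], source[key]);   // descend into nested objects
    } else {
      target[key] = source[key];         // leaf values overwrite defaults
    }
  });
  return target;
}

// A server override only touches what it needs; the other defaults survive.
merge(config, { mongodb: { name: 'cube_test' } });
```

The second con above is visible here: because config is a shared singleton, two servers in one process would be merging into the same object.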

Collaborator

RandomEtc commented Sep 12, 2013

What do you think?

Less is more. Fewer dependencies. Raw objects and no merge/extends/inherits/defaults please.

Marsup commented Sep 12, 2013

OK, I see where I'm going now :) Do we agree on the single point of configuration?

Marsup commented Sep 13, 2013

@RandomEtc Tell me what you think of this solution.
As I've told you, multiple servers in a single node process will be complicated.

Marsup commented Sep 20, 2013

For the record, I still experience a partially unresponsive evaluator if I ask for data past the horizon. I think the computation never gets done because the start and end dates are equal, so any further request on the same collection will stall.

Another thing I noticed is pretty poor performance on grouped queries; a MongoDB distinct on several million un-indexed rows is supposed to take a while, but still...

Collaborator

RandomEtc commented Sep 20, 2013

I hear that. I'm running this with personal data but I haven't thrown Square data at it yet. How do you feel about merging into a 0.3 branch and refining things there?

Marsup commented Sep 20, 2013

Have you given some thought to my new configuration proposal?
Do you want to complete your checklist above before moving to 0.3.x, or will it be done along the way?
I don't think it's a perfect release and there are still known bugs, but at least we'd be moving on with tinier patches rather than this long-running commit list, so why not; maybe just don't publish to npm yet.

Marsup commented Oct 24, 2013

@RandomEtc After running the service for a while, I was forced to remove a "feature" infochimps added: cascading the cache down to the lowest tiers is a very bad idea. It eats MongoDB storage like hell, way too expensive for an insignificant benefit, considering a few seconds/minutes is not that long to re-compute, so I went back to the way things were in the current Cube release. You might want to consider this before doing a release...

I understand your feedback and that's definitely a valid concern, but want to clarify a little. First, as far as speed, a response time of seconds to minutes per query, given 20 to 30 queries on a page, wasn't acceptable for our UI requirements. So, stored metrics did offer some speed improvements. Although it was an added bonus, our intent was not to cache calculations for speed, but to store data. Hopefully I can offer some insights on why we did it that way.

For our use, event data vastly outsized metric data, so we purposefully capped the events collection to make event records fall out. To preserve the data contained in those events, we saved the metrics at the lowest tier. With the lowest tier, we could build back up a higher tier metric, like 5 minute or 1 hour, using those low tier 10 second metrics. We stored our permanent data in metrics with ephemeral events, as opposed to the previous situation of permanent events with ephemeral metric caches.

So assuming one has large event records with many events per 10-second tier, storing only the metrics should use much less space. In our use case, it meant we were able to roll up thousands of multi-kB events into a handful of small, sub-kB metrics per query. We also had a separate "cleaner" cron job to remove metrics older than a day or so, to keep the data size down. We wrote our version with the intent of optimizing for high throughput while keeping a small, unsharded mongo. Storing metrics actually ended up being significantly more storage-efficient for us.
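The roll-up described above can be sketched as follows (a simplified illustration; the function name and the metric shape are hypothetical, not cube's actual schema):

```javascript
// Rebuild a higher-tier (5-minute) metric by summing the stored
// lowest-tier (10-second) metrics that fall inside its window.
// Times are milliseconds since epoch; `value` is a summable measure.
function rollupFiveMinutes(tenSecondMetrics, bucketStart) {
  var bucketEnd = bucketStart + 5 * 60 * 1000; // 30 ten-second sub-buckets
  return tenSecondMetrics
    .filter(function (m) { return m.time >= bucketStart && m.time < bucketEnd; })
    .reduce(function (sum, m) { return sum + m.value; }, 0);
}

// Three stored 10-second metrics inside one 5-minute window:
var stored = [
  { time: 0,     value: 4 },
  { time: 10000, value: 2 },
  { time: 20000, value: 5 }
];
rollupFiveMinutes(stored, 0); // 11
```

The point being that a handful of stored sub-kB metrics carry everything needed to rebuild the 5-minute and 1-hour tiers, so the multi-kB events themselves can fall out of a capped collection.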

I can see how for other data shapes / use cases, storing all metrics may not make sense. We definitely cut a couple corners to fulfill our use case because not everyone wants to predefine queries, drop events, and handle dense data. We lost some of the flexibility offered by cube in order to meet our needs. It sounds like our version didn't fit your use case. I'm glad you were able to change it to better meet your needs.

I hope that cleared up our intentions. If not, I'm happy to clarify further.

Marsup commented Oct 24, 2013

Hello Houston!

First, reading it back, my previous comment sounds more accusing than it was meant to be, so sorry for that.

Now I can see why you would do that. In my case I keep everything (no capped collections at all) and I also have many events at any given time, so our situations should be similar. You might understand my pain watching metrics grow horribly fast :)

The difference might be that I'm on a sharded mongo, with many evaluators answering queries at the same time, and we mostly stream metrics (which is a difference in our fork), so full time ranges are not queried that often.

Your version definitely improved many things and I'm grateful for that. I'm just worried such a default setting would disappoint newcomers, as it fills up several GB for only a few days/weeks of metrics. I think ideally this cascading behavior should be configurable, but that'll be the subject of another pull request ;)

Anyway it's nice to see you're still following things here !

RandomEtc commented Oct 24, 2013

Thanks both for keeping the discussion going. I'm sorry I've been silent on most matters. I haven't had a chance to try this new branch on real data so I'm hesitant to express too strong of an opinion about it.

I'm open to landing it as a 0.3-pre branch here so there's a clearer target for new contributions/optimizations/docs. What do you think?

Marsup commented Oct 24, 2013

Well, this thread has lasted long enough I think :)

You have raised many concerns along the way, so I would say create that branch and close this pull request. But let's not forget anything here; maybe create a bunch of separate issues to track every doubt/task that needs to be dealt with before a final release.

RandomEtc commented Oct 24, 2013

Sounds like a plan. Thanks for your help triaging issues!

When is it planned to be merged?

This merge is (in my opinion) very important; why hasn't it been merged yet?

RandomEtc commented Jan 23, 2014

Hi @jeffhuys - this branch is stalled mainly because I ran out of time/bandwidth for the project. But also because we only have one person (@Marsup) who has run this code to date. I'd like to run it before I merge it but I haven't had time.

If you've followed our discussion above, and the related issue where I asked for community input into the merge, you can see I was very optimistic about bringing all the Infochimps changes into the Square cube repo, but the actual process of doing this was a lot more complex and time consuming than I imagined. We don't run Cube in an official/production capacity at Square any more, so it's more or less a volunteer side-project for me.

Major apologies to @Marsup for not using this integration work yet.

Next step remains setting up a new branch here for this work, and getting a few more people to try it out for their use-case. I'll try to get that done soon, including trying it out at Square. Until then, please comment on this thread if you've run @Marsup's version and let us know how it goes.

Hi,
I've been running https://github.com/Marsup/cube/tree/full-merge in a testing/dev environment for the last few weeks and so far no problems. The amount of data is relatively low though: ~100,000 events.

Just as a side note, the revert to a plain JS object for the config in that branch made my life a lot easier.
Thanks everyone for the effort in this.

Marsup commented Jan 23, 2014

I'm confident as well. I've had this branch running in production since September (with a few modifications since then, as the commit history will tell you), but beware that it doesn't only contain infochimps modifications, so not everything is documented as it should be.

I'm also glad I came to reason on the config; imposing cfg in cube's core was not very clever, even though it's a very nice module and I still use it for my cube runners.

aganov commented Feb 27, 2014

I'm going to test this branch with 150K events/day. @Marsup, can you tell me what the job of the "warmer" is, and is there any way to use only one config instead of three, which are almost the same?

Marsup commented Feb 27, 2014

I'm not the one who conceived it, so I'll describe it as best I can; I'm not using it either.
If you store your expressions in a specific collection, the warmer will regularly pick them up and keep your metrics cache up-to-date even without a client asking for it.
The way expressions are stored is inherited from the time cube had a kind of dashboard; there is no documentation about it AFAIK.

As for the configuration: no, I'm afraid not. You can still use cube as a module and make the slight variations programmatically.
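For what it's worth, this is the kind of thing I mean (a sketch; the option names follow the usual cube options object, but `withOverrides` is just an illustrative helper, not a cube API): keep one base config and derive each server's options from it.

```javascript
// Sketch: one shared base configuration, with per-server overrides applied
// programmatically instead of maintaining three near-identical config files.
var base = {
  "mongo-host": "127.0.0.1",
  "mongo-port": 27017,
  "mongo-database": "cube_development"
};

// `withOverrides` is an illustrative helper: shallow-copy the base,
// then apply the per-server settings on top.
function withOverrides(overrides) {
  var config = {};
  Object.keys(base).forEach(function (key) { config[key] = base[key]; });
  Object.keys(overrides).forEach(function (key) { config[key] = overrides[key]; });
  return config;
}

// Each server gets the shared settings plus its own port:
var collectorOptions = withOverrides({ "http-port": 1080 });
var evaluatorOptions = withOverrides({ "http-port": 1081 });
```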

ticean commented Apr 1, 2014

Hello. I've been using the current Cube version for some time. Great work and thanks for open sourcing this project.

Question 1: How confident are you that the InfoChimps branch will be merged, and on what timeline? I need to add some additional features in our project (authentication). Since this branch contains a pluggable authentication system, it makes sense for me to go ahead and use what's here if it will be mainlined. Looks like a lot of energy has been put into this branch, but it's long-running and it's hard for me to judge what's going on from the outside looking in. :)

Question 2: How can I override the configuration when using Cube as a library now? It looks like this line will always include the configuration file from Cube. Maybe I'm missing some cfg functionality that handles this? I'm trying to override with env vars according to cfg's readme, but they don't seem to be picked up. A simple example would really help.

Thanks.

RandomEtc commented Apr 1, 2014

@ticean my intention was to make a merge branch and update our readme to encourage people to try it out. Unfortunately Cube has become less and less of my day-job here at Square and since starting this merge, despite the heroic efforts of @Marsup (thank you!) I haven't carved out the time to make much progress.

Also since we started this project infochimps was acquired, so I suspect they haven't been able to give it the attention they wanted either.

I still have this on a TODO list, and hope to get to it one day soon, though I realize we are very likely to be losing goodwill and attention by letting this branch linger.

Enough excuses...

For using cube as a library, here's an example collector script that we have in our internal cube repo:

#!/usr/bin/env node

var options = require("../config/collector"),
    cube = require("cube"),
    server = cube.server(options);

server.register = function(db, endpoints) {
  cube.collector.register(db, endpoints);
};

server.start();

The require for cube is the stock one from npm. The ./config/collector.js file looks something like:

module.exports = {
  "mongo-host": "127.0.0.1",
  "mongo-port": 27017,
  "mongo-database": "cube_development",
  "http-port": 1080
};

Hope that gets you started. If you have a chance to check out @Marsup's branch, please do; any feedback on that will help others work out which version to use. Until then, now we have 3 versions...

https://xkcd.com/927/

ticean commented Apr 2, 2014

Hi @RandomEtc. Thanks for the help and the quick reply! You helped me realize that I was testing with the wrong branch. This PR is based on Marsup:infochimps-merge but I'd mistakenly branched square:infochimps-merge for testing. So my bad there. 😁

Now using @Marsup's branch, I'm able to override the config as I need to (that wasn't possible in square:infochimps-merge).

I had some problems with the horizon feature not returning results when the request is "past_horizon". I see metalogging output, but the server doesn't return a response and hangs. I don't think I'm interested in this feature anyway, so I was able to work around it by removing the horizon configuration, which disables the feature. Some docs about horizons would be helpful, but you've already mentioned this a few times in the thread, so I know you know that. :)

Things are otherwise working well now that I'm using this branch. I'll keep testing and let you know if anything else comes up.

ticean commented Apr 3, 2014

Ok, after more hands-on time with this code I found some issues.

  1. Collections aren't created automatically now. This comment informs me that I have to manually create them. Cube's flexibility to create collections as it receives events is a really good feature; really sad to see it dropped.
  2. More evaluator hangs. I've hit cases where metalogging logs an error and subsequent requests aren't handled. I think it's because callbacks aren't called on error? Like here. It would be better for the process to crash than hang.
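To illustrate point 2 (a generic Node pattern, not the actual cube source): if an async step only invokes its callback on the success path, an error leaves the caller, and every queued request behind it, waiting forever.

```javascript
// Hypothetical handler showing the fix: always invoke the callback,
// including in the error branch, so the caller can recover (or crash loudly).
function computeMetric(fetch, callback) {
  fetch(function (error, rows) {
    if (error) return callback(error); // omit this line and the caller hangs
    callback(null, rows.length);
  });
}

// A fetch that always fails, standing in for a broken mongo query:
function failingFetch(cb) { cb(new Error("mongo unavailable")); }

computeMetric(failingFetch, function (error) {
  if (error) console.log("error surfaced: " + error.message);
});
```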

Marsup commented Apr 4, 2014

  • I've had similar issues with horizons during my tests and haven't found a way to make it work, but I don't use this feature, so it's highly unlikely I'll spend time on it, though I encourage you to give it a try.
  • I have never had to create collections manually, and you shouldn't have to either; I don't understand the meaning of that comment since there are no schema files anywhere in the project.
  • You seem to be having trouble with your mongodb. I've never encountered this error/hang; that doesn't mean the code is right, but you should check your mongo.

Beware that full-merge is not exactly the same thing as this pull request; I've piled up other modifications for my own needs.

zuk commented Jul 7, 2014

Hey guys, can someone explain the status of this merge for those of us wanting to start using cube with all of the infochimps work merged in? What would be the best place to start? Clone this branch and go from there? Use the Marsup fork? GitHub is kind of useless in situations like this one, where the repo network gets complicated :(

RandomEtc commented Jul 30, 2014

I am declaring Cube-maintenance-failure for myself. I have updated the README here to indicate that nobody at Square is actively developing or maintaining Cube. Since I have failed to make progress on this branch I encourage people to help @Marsup with his integration branch and fork if you have any new features or bug fixes. I will be closing all issues here in a moment.

RandomEtc closed this Jul 30, 2014

zuk commented Jul 31, 2014

For the record, I'm up and running with the @Marsup branch. Working great so far.

RandomEtc commented Jul 31, 2014

Excellent, that's great news @zuk. I'm open to updating the status and the Cube homepage with more info in future if you, @Marsup and others want to publish a new version. Thanks for letting us know!
