Total FxA Accounts #68
Somewhere we have a daily job that runs a "COUNT(*) FROM accounts" query on our read-replica db, and pushes the results into heka for display in this old metrics dashboard: https://metrics.services.mozilla.com/accounts-detail-dashboard/ The simplest option here is probably to appropriate that and have it also pipe the data into S3 (if it's not there already) and then we can import it into redshift. @jbuck do you know the details of this daily job? I only have a vague memory of its existence.
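The daily job described above might look roughly like this. A hedged sketch only: the table schema, the export format, and the use of sqlite3 as a stand-in for the read replica are all assumptions for illustration, not the actual job.

```python
import sqlite3
from datetime import date

def count_total_accounts(conn):
    # The heavy query at the heart of the job: a full COUNT(*)
    # over the accounts table (run on a read replica in practice).
    cur = conn.execute("SELECT COUNT(*) FROM accounts")
    return cur.fetchone()[0]

def format_export_line(day, total):
    # One pipe-separated line per day; the pipe separator matches
    # the format discussed elsewhere in this thread.
    return f"{day.isoformat()} | {total}"

if __name__ == "__main__":
    # Stand-in database with a few fake accounts.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (uid TEXT PRIMARY KEY)")
    conn.executemany("INSERT INTO accounts VALUES (?)",
                     [("a",), ("b",), ("c",)])
    line = format_export_line(date(2017, 6, 8), count_total_accounts(conn))
    print(line)  # this line would then be uploaded to S3
```

In the real pipeline the resulting line would be written to the S3 bucket rather than printed.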
The database query that counted total accounts was disabled in September of last year, at @jrgm's request due to heavy database load. I believe there was a plan to move that to a read replica and re-enable it, but that may have never happened. Consequently I don't think there is a currently running job that counts total accounts, but I also have only vague memories of how this is set up.
I'll re-enable this job on the read replica in us-east-1 & dump data onto the same S3 bucket that we currently use for pulling data into redshift |
I finally got this working - I'll check tomorrow morning to see if the job ran successfully overnight. If we want more up-to-date account totals I think we could run this more frequently than once a day. |
This is available in the same s3 bucket that we currently pull from for redshift but with a different path: |
Thanks @jbuck! Re-opening and assigning to phil & myself to figure out the import step.
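For the import step, the usual route for S3-to-redshift loads is a COPY statement. A minimal sketch of building one; the table name, bucket, path, and IAM role below are all hypothetical placeholders, not the real values.

```python
def build_copy_statement(table, bucket, prefix, iam_role):
    # Redshift's COPY loads delimited files directly from S3.
    # DELIMITER '|' matches the pipe-separated export format
    # mentioned in this thread.
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{prefix}' "
        f"IAM_ROLE '{iam_role}' "
        "DELIMITER '|';"
    )

if __name__ == "__main__":
    print(build_copy_statement(
        "total_accounts",           # hypothetical target table
        "example-metrics-bucket",   # hypothetical bucket
        "fxa/total-accounts/",      # hypothetical S3 path
        "arn:aws:iam::123456789012:role/example-redshift-role",
    ))
```

The generated SQL would then be run against the redshift cluster by whatever import script ends up owning this step.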
For visibility, the contents of this file are currently formatted as:
@jbuck how complex would it be to format this as a headerless CSV instead? I'm sure we can work with the current format, but if it's easy to reformat upstream then it might be simpler overall.
Easy to change it, but I used the pipes because it's the default separator. PMed you the file for editing. It'll require a redeploy of fxa-admin to push the change live. @jrgm when you do the redeploy, make sure to nuke the old stacks |
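If reformatting upstream turns out to be awkward, converting the pipe-separated output to headerless CSV on the consuming side is also cheap. A small sketch using the standard csv module (the sample data is made up):

```python
import csv
import io

def pipes_to_csv(pipe_text):
    # Read '|'-separated lines and re-emit them as headerless CSV.
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for line in pipe_text.strip().splitlines():
        writer.writerow(field.strip() for field in line.split("|"))
    return out.getvalue()

if __name__ == "__main__":
    sample = "2017-06-05 | 12345\n2017-06-06 | 12350\n"
    print(pipes_to_csv(sample), end="")
```

csv.writer also handles quoting for us if any field ever contains a comma, which hand-rolled string replacement would not.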
from mtg: put it on the graph |
@philbooth I think it should be possible for us to use your generalized metrics import script for this now, let's discuss next week once we're sure we've got to the bottom of the sendSms metrics. |
Glad to see movement on this! :) |
@philbooth IIRC the latest recommendation is for us to pull people's ssh pubkeys from github; are your keys up-to-date in github? I should be able to add you back to the boxes.
@rfk thanks! Yep, my GitHub keys are the updated ones. |
@philbooth I've added your key from https://api.github.com/users/philbooth/keys to the redshift-helper box, LMK if you still don't have access. |
Hey @jbuck, there's the odd day where no data is exported to S3, e.g. 2017-06-03, 2017-06-07. Any idea what's up with that?
Ah crap, this is the verification reminders fault. I had to stand up a new
replica, but hadn't redeployed the fxa-admin with the updated RDS endpoint.
I fixed this and today's export will have the correct data.
As a product manager, I should be able to know how many accounts have been created to date.
We currently don't have an easy way to measure the total number of accounts that have been created to date.
As we work with marketing over 2017 to revive inactive FxA accounts, it would be nice to know how many accounts have ever been created.
It would also be nice to keep track of the proportion of active accounts to all accounts (active plus inactive).
As for how we roll it up, we could have something like this (which doesn't load since it is too resource-intensive in Re:Dash):
https://sql.telemetry.mozilla.org/queries/3916/source#7783
I'm open to other data formats though if we want more granularity.