Incorporate market impact and user feedback as a part of FLEDGE experiment goals #95
The FLEDGE proposal states:

While gaining implementer experience is a valuable thing, we think that any experimentation should also incorporate the two goals below, as they will be required for any third-party cookie replacement:

Would you consider adding these two objectives as "experiment goals" for FLEDGE?

Comments
Hi Lionel! I would have considered both of your goals as pieces of letting the "ads ecosystem [...] evaluate its usability". In general, browsers implement APIs and the web developers of the world try them out and give us feedback on how well they work; "how well they work" is quite broad, covering anything the users of the APIs hope to get out of them!
Hi Michael,

Following the last IWABG conf call, I understand Chrome's position: it is not willing to define metrics for all participants, but it will provide the tools for everyone to run their own experiments. At Criteo, we therefore propose that IWABG participants (advertisers, publishers, ad tech providers, privacy experts, individuals, scholars) discuss, in this thread, what they would consider good FLEDGE experiment metrics. Our goal here is not to come up with a single set of metrics for everyone, but rather to trade ideas and see what comes out.

I'd like to point out the distinction between metrics, which are measurable quantities such as average CPM, and use cases, such as "lookalike targeting" or "niche advertising". I suggest that we focus on metrics here, as use cases will differ from one actor to another (even though different use cases may call for different metrics).

So, starting the conversation with Criteo, we are thinking of using two metrics on a variety of use cases:
To compute an NPS, one runs a multiple-choice survey with a numeric value assigned to each choice, usually with some choices carrying a negative value and some a positive one. The final metric is the sum of the values of all answers.

Dear IWABG participants, what metrics are you planning to look at when experimenting with FLEDGE? Why?
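To make these two kinds of metrics concrete, here is a minimal sketch in Python. The record shapes, the choice-to-value mapping, and the example numbers are illustrative assumptions, not anything specified by FLEDGE or agreed in this thread.

```python
# Illustrative sketch of the two kinds of metrics discussed above.
# Data shapes and values are hypothetical.

def average_cpm(costs):
    """Average CPM: mean per-impression cost times 1000.

    `costs` is a list of per-impression costs in currency units.
    """
    if not costs:
        return 0.0
    return sum(costs) / len(costs) * 1000


# NPS-style score as described above: each survey choice carries a
# numeric value (negative for some choices, positive for others),
# and the final metric is the sum of all answers' values.
CHOICE_VALUES = {
    "very unsatisfied": -2,
    "unsatisfied": -1,
    "neutral": 0,
    "satisfied": 1,
    "very satisfied": 2,
}

def nps_style_score(answers):
    """Sum the values assigned to all survey answers."""
    return sum(CHOICE_VALUES[a] for a in answers)


# Example: compare an arm running FLEDGE auctions against a control arm.
fledge_arm = [0.0021, 0.0018, 0.0025]   # per-impression costs (made up)
control_arm = [0.0024, 0.0022, 0.0026]
print(average_cpm(fledge_arm), average_cpm(control_arm))
print(nps_style_score(["satisfied", "neutral", "very unsatisfied"]))
```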
The CMA's publication last December offers a new framework for that discussion, so closing the issue.