
Process for Evaluating this version #12

Closed
mixmix opened this issue Aug 24, 2019 · 4 comments


@mixmix
Collaborator

commented Aug 24, 2019

This is a thread for:
✔️ how to evaluate this particular iteration of the experiment
✔️ what we might measure
✔️ when we should close this iteration / transition to the next
✔️ announcing if you'd like to be involved in the evaluation / reflection

What this thread is not for:
complaining about the module
giving feedback about the module - this is a meta-discussion about the evaluation process

@osmarks

commented Aug 25, 2019

Wouldn't it have been better to set out evaluation standards before beginning the "experiment"?

You might want to look at user feedback, how much development this has paid for, how it would scale, and how sponsors (e.g. Linode) react to it.

@mixmix

Collaborator Author

commented Aug 25, 2019

Wouldn't it have been better to set out evaluation standards before beginning the "experiment"?

Yes! It's also worth remembering that this experiment is open source and run entirely on volunteer time.

@mvaldesdeleon


commented Aug 25, 2019

  1. Does it fulfil its financial objectives (i.e., does it get you the target funding)?
  2. Is it scalable (i.e., could you use the same technique for higher funding targets, or is there an inherent ceiling)?
  3. Is it sustainable (i.e., can you apply the same technique repeatedly without diminishing returns)?

Since the current approach seems to be "get funded first, provide advertisements for the sponsors after", item 1 does not really apply here: it was satisfied before the campaign even began. So clearly, as a funding technique, this one worked as expected.

For item 2, I would say its ability to scale is limited by the size of the sponsor companies/organizations. Once you've reached the point where you're selling ads to the largest organizations at the maximum possible asking price, you could only scale further by adding more ad space (i.e., showing more than one banner at a time).

For item 3, well, this is the muddy part. Ads are interesting because you have an audience. If your audience rejects these ads and refuses to be part of them (e.g., by installing ad blockers), then the value of your ads diminishes. By the same token, if the target audience reacts negatively to the presence of ads in a previously pristine area and it starts impairing your brand, the current sponsors might be reluctant to book the next campaign, for fear that it impacts their brand as well.

What you could measure here is your ability to secure another round of sponsorship.

@feross

Owner

commented Sep 4, 2019

I appreciate the thoughtful discussion and feedback. I ended the experiment last week. I shared some thoughts about how the experiment went from my perspective on my blog: https://feross.org/funding-experiment-recap/

@feross feross closed this Sep 4, 2019
