---
layout: guide
title: Judging Criteria
parent: Virtual Hackathons
description: Judging Criteria & Methodology
nav_order: 600
has_children: false
permalink: /playbook/hackathons/judging-criteria
main_classes: -no-top-padding
---

Judging Criteria & Methodology

Once you have created your hackathon’s prizes, make sure that participants know the criteria their submissions will be judged against, as well as the methodology judges will use to apply those criteria.

You can, of course, keep this hidden from hackers; however, the more transparent you are with the judging criteria and process, the more accurate the results of your event will be. In other words, transparency helps teams stay on track and stick to the challenge you are asking them to solve.

For our #ShockTheWeb and #LegendsOfLightning hackathons, we used the following transparent judging methods and criteria:

Methodology

Projects are evaluated on the 7 criteria below. For each criterion, judges give a score between 1 and 10. Each criterion’s scores are then averaged across judges, and the averages are summed to give each project a total score out of 70. Projects are then ranked on a leaderboard and awarded prizes accordingly.

Judges input their scores via a Google Form scoresheet, which we view and manage in a spreadsheet on the backend. We take this approach to prevent judges from peeking at each other’s scores and introducing bias into the process.
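The scoring arithmetic above can be sketched roughly as follows. This is a minimal illustration, not any official tooling; the criterion names are taken from the list below, while the function name and data shape are our own assumptions:

```python
# Sketch: each criterion's scores are averaged across judges,
# and the seven averages are summed into a total out of 70.

CRITERIA = [
    "Value Proposition", "Innovation", "Bitcoin Integration & Scalability",
    "Execution", "UI/UX Design", "Transparency", "Je Ne Sais Quoi",
]

def project_total(scores_by_criterion: dict[str, list[int]]) -> float:
    """scores_by_criterion maps a criterion name to each judge's 1-10 score."""
    return sum(
        sum(scores) / len(scores)  # average this criterion across judges
        for scores in scores_by_criterion.values()
    )

# Example: two judges scoring one project
scores = {name: [7, 9] for name in CRITERIA}  # every criterion averages to 8
print(project_total(scores))  # 56.0 out of a possible 70
```

Ranking the leaderboard is then just sorting projects by this total in descending order.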

Opt-in scoring

If your event has a large project-to-judge ratio, the judging workload can become overwhelming, and it may be worth using an “opt-in” judging strategy. This lets judges opt in to evaluating the projects and criteria that match their interests and expertise. If a judge does not feel able to fairly evaluate a project or a specific criterion, or does not have the time to do so, they can abstain from scoring that criterion (or all 7 criteria for the project), leaving the score blank or marking it “n/a”. A project’s score is then the sum of its criterion averages: each criterion’s scores are added up and divided by the number of judges who submitted a score for it.
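The opt-in averaging rule could be sketched like this (a hypothetical helper of our own devising, assuming abstentions arrive as blanks/`None` or the string `"n/a"`):

```python
def opt_in_total(scores_by_criterion: dict[str, list]) -> float:
    """Sum of criterion averages, counting only judges who actually scored.

    Blank scores (None) and "n/a" marks are abstentions: they are excluded
    from both the numerator and the divisor.
    """
    total = 0.0
    for scores in scores_by_criterion.values():
        given = [s for s in scores if isinstance(s, (int, float))]
        if given:  # skip a criterion entirely if no judge opted in
            total += sum(given) / len(given)
    return total

# Three judges; two abstained from scoring "Execution"
scores = {
    "Execution": [8, None, "n/a"],  # averaged over the one score given: 8.0
    "Innovation": [6, 7, 8],        # averaged over all three: 7.0
}
print(opt_in_total(scores))  # 15.0
```

Dividing by the number of judges who actually scored (rather than the full panel) keeps abstentions from dragging a project’s total down.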

A note on bias

Some of your hackathon’s judges will have their own agendas and opinions, introducing inevitable bias into your judging process. This is often unavoidable; however, you can mitigate the effect of bias by adding more judges and by choosing judges from different backgrounds and fields of expertise.

Qualifying, Semis, and Finals

For larger tournaments such as #LegendsOfLightning, we held semi-final and final rounds in order to narrow the field to a handful of projects, provide them with additional mentorship and coaching, and ultimately make the final round of judging more accurate and less painstaking.

Judging Criteria

1. Value Proposition 🎯

Does the project have a product market fit? Does it provide value to the bitcoin ecosystem and beyond?

Example scores:

- 3/10 - Terrible or non-existent use case
- 7/10 - Interesting use case, improvement upon similar ideas
- 10/10 - Bitcoin's next unicorn

2. Innovation 🧪

Is it something we've seen before or does it bring something new to bitcoin and its users (or potential users)?

Example scores:

- 3/10 - Carbon copy of another project
- 7/10 - Rethinking outside the box
- 10/10 - Interesting and original idea

3. Bitcoin Integration & Scalability ⚡️

Have they used bitcoin/lightning? If so, how many features? How well will this product scale for either local or global adoption?

Example scores:

- 3/10 - Little to no use of bitcoin/lightning
- 7/10 - Uses bitcoin/lightning and would scale to a moderate number of users
- 10/10 - Uses bitcoin/lightning and would scale to a global audience

4. Execution ✅

Makers should focus on attention to detail. How well has the project and its vision been executed? How well does it function on both front-end and back-end?

Example scores:

- 3/10 - Poorly built product, did not follow through on vision
- 7/10 - Well built product (both FE + BE), stuck to vision
- 10/10 - Immaculately built product (both FE + BE), outshone original vision

5. UI/UX Design 🍒

When we think about adoption and usability, design is high up the list. Taking into account both UI + UX, how well has the application or feature been designed?

Example scores:

- 3/10 - Bad UI, UX, branding
- 7/10 - Good UI, UX, branding
- 10/10 - Incredible UI, UX, branding, and marketing. Innovative abstraction of bitcoin technology or cryptographic functionality.

6. Transparency 👁

In order to create more transparency around ongoing work processes, targets, and achievement, we asked makers to #BuildInPublic by tagging their projects in stories and progress reports on our platform. These consist of weekly progress (PPPs) reports, as well as other announcements and stories that are relevant to their project or their experience of the tournament. We value transparency, detail, depth, and effort.

Example scores:

- 3/10 - Has written 1-3 reports in low detail/transparency
- 7/10 - Has written 3+ reports in high detail/transparency
- 10/10 - Has written 5+ reports in medium/high detail/transparency. Can also include extra reports, launches, announcements, and engagement on other platforms (Twitter, Stacker News, Discord), etc.

7. Je Ne Sais Quoi 🤩

That special ingredient that gives the project pizzazz, making it stand out from all others.

Selecting Judges

Depending on what type of event you are running, how transparent you wish the judging process to be, and how much time will be allocated to it, you may want to appoint either internal or external judges.

Internal Judges

If your hackathon is focused on your own product or protocol, where people in your company are the experts, you may decide to have your own team act as the judges. You can appoint an internal panel of employees to judge and score projects and decide on the winner. This process can be as transparent as you’d like.

| Pros ✅ | Cons ❌ |
| --- | --- |
| Full control | Less diversity of opinion |
| Time efficient | Less marketing outreach |
| Option of transparency | ... |

External Judges

If you are looking to bring in external industry influencers, you can appoint external judges and have them take responsibility for your judging process, managed using the methodology listed above. This approach takes more time and effort with communications, and might not always lead to the results you think are best. You are essentially leaving big decisions in the hands of others and cannot guarantee their (a) participation or (b) full effort and attention.

You will need to communicate clearly with judges and provide them with the judging details (method, criteria, relevant forms/docs) in advance so they can prepare. You could also invite them all to a single private chat group for follow-up questions, avoiding the need to repeat the same information to different parties.

| Pros ✅ | Cons ❌ |
| --- | --- |
| More diverse opinions + expertise | Usually some bias is involved |
| More marketing outreach | Can't guarantee interest, effort, and attention |
| Less workload for your team | Can't guarantee participation |