
Project: insightUBC

UBC is a big place, and involves a large number of people doing a variety of tasks. The goal of this project is to provide a way to perform some of the tasks required to run the university and to enable effective querying of the metadata from around campus. This will involve working with courses, prerequisites, past course averages, room scheduling, and timetable creation.

This will be a full stack web development project split into four sprints. The first three sprints are server-side development using Node. The fourth sprint is client-side development. A fifth deliverable will also happen, but does not have dedicated time allocated to it because it is an aggregation of the first three sprints.

Development will be done in TypeScript, and the packages and libraries you may use will be strictly limited (for the first three sprints). If you do not know TypeScript, you are encouraged to start investigating the language soon. Its syntax is extremely similar to Java's, so it should be relatively easy to transition to given your prior experience. It is important to note that we will spend very little time in lecture and lab teaching this language; you will be expected to learn it on your own time.

All sprint deliverables will be marked using an automated test suite. The feedback you will receive from this suite will be limited. To succeed at the project you will need to create your own private test suite to further validate each deliverable. Additional details are available on the AutoTest page.

Teams

The vast majority of software is written by development teams. Even within large organizations, 'feature teams' usually comprise a small set of developers within a larger team context. You will work in pairs for this project. Your partner must be in the same lab section as you; if you want to work with someone who is in another section, one of you will have to transfer lab sections.

Your partner selection is extremely important; be sure to make this choice carefully, as you will be responsible for working as a team for the remainder of the term. You must use the same partner for the duration of the project; no changes will be permitted. If you do not have a team organized after the first lecture, please go to your lab and find a partner there. Everyone should have partners by the end of the first week of labs (the end of the second week of the course).

Deliverables

Deliverable 0 is an individual activity to help you get acquainted with TypeScript and AutoTest. Sprints 1-3 will be part of your project. Deliverable 4 does not require anything to be handed in.

  1. Deliverable 0 - Deliverable 0

  2. Sprint 1 (d1) - Deliverable 1

  3. Sprint 2 (d2) - Deliverable 2

  4. Sprint 3 (d3) - Deliverable 3

  5. Project Quality Check (d4) - Quality Check

Language and environment

Your project will have to be written in TypeScript. While it might seem daunting to learn a new language on your own, the fluid nature of software systems requires that you get used to quickly learning new languages, frameworks, and tools. The syntax of TypeScript is extremely similar to that of Java, which you should have used in 210. Google will be your friend for this project, as there are thousands of free tutorials and videos that can help you with this technology stack. TypeScript has many great learning resources; the TypeScript Handbook or the TypeScript Deep Dive would be good places to start. If you are starting from scratch, it is really important that you do not just read a bunch of code but actually write some. The TypeScript Playground or a JavaScript REPL can be a lightweight way to do this.
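If the "similar to Java" claim is hard to picture, here is a tiny illustrative sketch (the names `Course` and `averageOf` are made up for this example and are not part of the project API): typed interfaces and annotated function signatures will feel familiar, while arrow functions and `const` are the main new syntax.

```typescript
// A minimal sketch of TypeScript's Java-like syntax.
// (Course and averageOf are illustrative names, not part of the project.)
interface Course {
    dept: string;
    id: string;
    avg: number;
}

function averageOf(courses: Course[]): number {
    if (courses.length === 0) {
        return 0;
    }
    const total = courses.reduce((sum, c) => sum + c.avg, 0);
    return total / courses.length;
}

const sample: Course[] = [
    { dept: "cpsc", id: "310", avg: 78.5 },
    { dept: "cpsc", id: "210", avg: 81.5 },
];
console.log(averageOf(sample)); // 80
```

Pasting a snippet like this into the TypeScript Playground is a quick way to see how the compiler reacts when you change a type or misuse a field.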

All development will take place on GitHub. You will need a GitHub account (but feel free to create a throw-away or anonymous account for this course, if that makes you more comfortable); we will create the repository for your group after the first week of labs has finished. You will not be able to change your GitHub id during the term, so when you register your account, be sure to use the right one. Instructions on how to register your account with us will be given in lab. Being familiar with Git is essential; please take a look at the 'getting started' part of the Atlassian Git Introduction before the first lab if you are not familiar with Git. A shorter, less formal, guide is also available.

Allowable packages

The packages and external libraries (i.e., all of the code you did not write yourself) you can use for the project are limited and have all been included for you in your package.json. You cannot install any additional packages. Note that a database is NOT permitted; the data in this course is sufficiently simple for in-memory manipulation using the features built into the programming language. Essentially, if you are typing npm install or yarn install, you will likely encounter problems.

Repositories

All development will take place in GitHub repositories that we will create for you in a private organization for the course. You will be automatically added to your repo after you have specified your groups in your lab section in the first week. Repositories will only be created for teams where both students are registered in the course; this list will finalize after the add/drop deadline.

Assessment

The first three sprints are evaluated differently than sprint 4.

Sprint 1, 2, and 3

Four components are assessed for these sprints:

  • AutoTest validation (functional completeness).
  • Personal test coverage.
  • Oral questions in lab (deliverable retrospective).
  • Retrospective questionnaire (online survey).

The general formula for grading is:

grade = ((AutoTest * 0.8) + (Personal Coverage * 0.2)) * (Oral Questions * Questionnaire)
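As a rough sketch, the formula can be read as code like this (the function name is illustrative, and all inputs are fractions in [0, 1]; this is not the official marking script):

```typescript
// Illustrative sketch of the sprint grading formula above.
// All inputs are fractions in [0, 1].
function sprintGrade(
    autoTest: number,
    coverage: number,
    oral: number,
    questionnaire: number
): number {
    return (autoTest * 0.8 + coverage * 0.2) * (oral * questionnaire);
}

// 80% AutoTest and full coverage, but a 0.5 oral-question multiplier
// halves the combined score.
console.log(sprintGrade(0.8, 1.0, 0.5, 1.0).toFixed(2)); // "0.42"
```

Note how the oral-question and questionnaire multipliers scale the whole technical score, so a low multiplier hurts far more than a few failed tests.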

The best way to maximize your AutoTest and Coverage grades is to:

  1. Write your own local tests that comprehensively test your code against the project requirements. These tests can be run as often as you want and are the best way to debug your code. In our experience, teams that invest in creating a comprehensive test suite with effective assertions complete the project much more quickly.

  2. Invoke AutoTest frequently to ensure you have not introduced regressions into your code. Remember: you can only run the AutoTest suite once every 12 hours, and AutoTest will respond more slowly as the deadline approaches.

You do not need to submit your code on the deliverable deadline; we will automatically run our tests against every push you make to your repository while the deliverable is open, and your score will be the score of the highest-scoring push during this time (up until the end of the grace period). AutoTest only provides output for a subset of the total test suite; some tests are withheld until after the deadline.

NOTE: On gaming coverage

Achieving 100% test coverage is often hard in practice. To account for this, we will increase your coverage score by 5% in our marking calculations (to a max of 100). The intent of the coverage component is for you to use white box testing to cover your own code; if you cannot invoke a block of code with your own unit and integration tests, we are unlikely to be able to either with our strictly integration-based test suite. Learning which code is important and can catch faults, and which code is completely extraneous, is an important skill; if you cannot devise a test case or input that can trigger a block, there is a real chance it is just technical debt adding needless complexity to your code. If we find projects that are artificially gaming the coverage metric by adding code solely designed to increase the total number of lines (to decrease the proportion of uncovered code), we will regrade the coverage score to 0 during the deliverable retrospective.

Retrospectives & Questionnaire

The deliverable retrospective (oral questions) will assign individual marks to each teammate to make sure both teammates contributed fairly to the deliverable outcome. The range for this component is [0..1]. Each teammate should have a clear understanding of how the system works at a high level (the major components, data structures, design decisions, and algorithms). The answers to the oral questions will be used as a scaling factor on your AutoTest score (e.g., if you get 80% from AutoTest and 0.5 from the oral questions because it was clear you did not contribute effectively to the deliverable, your deliverable grade would be 40%).

These oral questions will take place in labs the week the deliverable was due; if you do not attend this lab, you will receive 0 for the deliverable. Being an effective teammate involves both technical contribution and teamwork; if one team member 'shuts out' the other from contributing to the project, this can also have a detrimental influence on both multipliers. Teammates should be courteous to one another by keeping open lines of communication, and by pulling their weight while ensuring that others have the opportunity to pull theirs too.

Finally, a questionnaire is due for each deliverable for the teams to self-report their progress. The expectation is that everyone who spends 15 minutes completing this questionnaire will receive a 1, while those who fail to submit their questionnaire or do not provide meaningful feedback will be given a 0.

The oral questions and the questionnaire effectively scale your project mark. Any student who makes a fair effort in the project and submits the required form should receive a scale of 1 (and in our experience 95%+ of students do receive full marks on these components).

Sprint 4

The grading for this sprint comprises two components:

  1. A regression test suite.

  2. A private test suite.

The formula for this grade is:

grade = (Regression Suite * .5) + (Private Suite * .5)

More details about these components can be found in the Sprint 4 description.

We will be running Measure Of Software Similarity (MOSS) on all deliverable submissions. Any projects that contain code derived from other projects that we have not provided will receive 0% on that deliverable.

Late policy

It is possible to submit D0, D1 and D2 late for partial marks; other deliverables cannot be submitted late. Appeals for late marks must be made by the Deliverable 3 final deadline. Late deliverables will be subject to the following penalty:

  • 1 deliverable late: 50% (D0 by D1, D1 by D2, D2 by D3).
  • 2 deliverables late: 60% (e.g., D0 by D2, D1 by D3).

Late deliverables can only increase the test passing rate (coverage rate for D0). The retrospective multipliers and test coverage rate from the original deliverable will still be used. The number of deliverables late depends on the timestamp of the commit you ran your deliverable against (the number of deliverables late increments after each deliverable deadline). Use this form to submit your late request. Please talk to your team before you do this; we will only consider at most one late request per team per deliverable. This form will close on November 25 @ 0800.

FAQ

A series of FAQ items has been collected here; this is in no way exhaustive, but addresses several of the consistent questions we have received for this project.

FAQ: Failing Tests

  • If you are failing one of AutoTest's tests, it means that your own test suite is insufficient. The tests AutoTest runs are exactly the same as the ones you can write yourself. If a test is failing, it means your suite is not strong enough and should be strengthened. NOTE: this does not mean randomly writing more tests, but conscientiously strengthening your test suite by examining the deliverable specification.

  • Another testing anti-pattern is to only have integration tests (e.g., tests that directly evaluate addDataset, removeDataset, listDatasets and performQuery). A much more robust testing strategy that makes it easier to implement new features and isolate failures is to write unit tests against the individual methods in your implementation.

FAQ: Coverage Not Right

  • AutoTest just runs yarn cover to calculate coverage. But it can only calculate what is committed. The most common coverage-based problem occurs when test files are not committed to the repo. This is easy to test: just do a git clone in a new folder and run coverage there and see what happens; this usually highlights missing files.

  • One other, less common, reason for coverage problems arises from filename case issues: Linux (where AutoTest executes) has a case-sensitive filesystem; if you are developing on a Windows-based machine (which does not have a case-sensitive filesystem), tests could refer to files that can be 'found' on your machine but will not be found on a Linux-based test instance.
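The fresh-clone check from the first point can be demonstrated end to end with a throwaway repository (this toy example builds its own repo in a temp directory; for your project you would clone your real repo and run `yarn cover` in the clone):

```shell
# Demonstrates why uncommitted test files never reach AutoTest:
# a fresh clone contains only what was committed.
set -e
workdir=$(mktemp -d)
cd "$workdir"

mkdir repo
cd repo
git init -q
echo "committed" > committed.spec.ts
git add committed.spec.ts
git -c user.email=ta@example.com -c user.name=TA commit -q -m "add test"
echo "forgotten" > uncommitted.spec.ts   # created but never committed

cd ..
git clone -q repo clone
ls clone   # only committed.spec.ts appears; uncommitted.spec.ts is gone
```

If coverage in the fresh clone differs from coverage in your working copy, the `git status` output in your working copy will usually show the missing files as untracked.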

FAQ: Using Branches

  • Using version control branches is a great way to make it easier to work with your partner, but it is important that you merge your branches into master periodically (preferably with pull requests). Having > 3 branches is an anti-pattern, and stale branches should be deleted.