
Write an introduction to code reviews explaining how to make the best of a code review #4

2 changes: 2 additions & 0 deletions .cspell.json
Original file line number Diff line number Diff line change
@@ -182,6 +182,7 @@
"htmlcov",
"hyperscale",
"Idempotency",
"IMDB",
"inclusivelint",
"inclusivity",
"inferencing",
@@ -271,6 +272,7 @@
"netcat",
"newarchitecture",
"Newtonsoft",
"NGSA",
"Nocera",
"noninteractive",
"NPMJS",
2 changes: 1 addition & 1 deletion docs/automated-testing/fault-injection-testing/README.md
@@ -29,7 +29,7 @@ Fault injection methods are a way to increase coverage and validate software rob
* **Error** - That part of the system state that may cause a subsequent failure.
* **Failure** - An event that occurs when the delivered service deviates from correct state.
* **Fault-Error-Failure cycle** - A key mechanism in [dependability](https://en.wikipedia.org/wiki/Dependability): A fault may cause an error. An error may cause further errors within the system boundary; therefore each new error acts as a fault. When error states are observed at the system boundary, they are termed failures.
(Modeled by [Laprie/Avizienis](https://www.nasa.gov/pdf/636745main_day_3-algirdas_avizienis.pdf))
(Modeled by [Laprie/Avizienis](http://rodin.cs.ncl.ac.uk/Publications/avizienis.pdf))

#### Fault Injection Testing Basics

4 changes: 2 additions & 2 deletions docs/automated-testing/performance-testing/load-testing.md
@@ -60,7 +60,7 @@ Here are a few popular load testing frameworks you may consider, and the languag
- **Artillery** (<https://artillery.io/>) - Write your scenarios in Javascript, executes a node application.
- **Gatling** (<https://gatling.io/>) - Write your scenarios in Scala with their DSL.
- **Locust** (<https://locust.io/>) - Write your scenarios in Python using the concept of concurrent user activity.
- **K6** (<https://k6.io/>) - Write your test scenarios in Javascript, available as open source kubernetes operator, open source Docker image, or as SaaS. Particulary useful for distributed load testing. Integrates easily with prometheus.
- **K6** (<https://k6.io/>) - Write your test scenarios in Javascript, available as open source kubernetes operator, open source Docker image, or as SaaS. Particularly useful for distributed load testing. Integrates easily with prometheus.
- **NBomber** (<https://nbomber.com/>) - Write your test scenarios in C# or F#, available integration with test runners (NUnit/xUnit).
- **WebValidate** (<https://github.com/microsoft/webvalidate>) - Web request validation tool used to run end-to-end tests and long-running performance and availability tests.
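
All of these frameworks implement the same core idea: simulate many concurrent users issuing requests against a target and record status codes and latencies. As a minimal, framework-agnostic sketch of that concept (the server, function, and parameter names here are invented for illustration, using only the Python standard library):

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class _Handler(BaseHTTPRequestHandler):
    """Local stand-in for the system under test."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

def run_load_test(url, users=5, requests_per_user=4):
    """Simulate `users` concurrent users, each issuing sequential requests."""
    results = []
    lock = threading.Lock()

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                status = resp.status
                resp.read()
            elapsed = time.perf_counter() - start
            with lock:
                results.append((status, elapsed))

    # The pool's context manager waits for every user session to finish.
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(user_session)
    return results

# Spin up the stand-in server on an ephemeral port and run the test.
server = ThreadingHTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
results = run_load_test(f"http://127.0.0.1:{server.server_address[1]}/")
server.shutdown()
print(f"{len(results)} requests, worst latency {max(t for _, t in results):.4f}s")
```

A real framework adds what this sketch omits: ramp-up profiles, distributed workers, percentile reporting, and scenario scripting.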

@@ -69,7 +69,7 @@ Here are a few popular load testing frameworks you may consider, and the languag
In the case where a specific workload application is not being provided and the focus is instead on the system, here are a few popular sample workload applications you may consider.

- **HttpBin** ([Python](https://github.com/postmanlabs/httpbin), [GoLang](https://github.com/mccutchen/go-httpbin)) - Supports variety of endpoint types and language implementations. Can echo data used in request.
- **NGSA** ([Java](https://github.com/retaildevcrews/ngsa-java), [C#](https://github.com/retaildevcrews/ngsa-java)) - Intended for Kubernetes Platform and Monitoring Testing. Built on top of IMDB data store with many CRUD endpoints available. Does not need to have a live database connection.
- **NGSA** ([Java](https://github.com/retaildevcrews/ngsa-java), [C#](https://github.com/retaildevcrews/ngsa-java)) - Intended for Kubernetes Platform and Monitoring Testing. Built on top of an IMDB data store with many CRUD endpoints available. Does not need to have a live database connection.
- **MockBin** (<https://github.com/Kong/mockbin>) - Allows you to generate custom endpoints to test, mock, and track HTTP requests & responses between libraries, sockets and APIs.


19 changes: 16 additions & 3 deletions docs/code-reviews/README.md
@@ -1,6 +1,6 @@
# Code Reviews

Developers working on projects should conduct peer code reviews on every pull request (or check-in to a shared branch).
Code reviews are a required part of the software development process, offering numerous benefits such as improved code quality, shared knowledge, and early bug detection. However, the effectiveness of these reviews largely depends on the approach and mindset of the participants.

## Goals

@@ -10,8 +10,21 @@ Code review is a way to have a conversation about the code where participants wi
- **Learn and grow** by having others review the code, we get exposed to unfamiliar design patterns or languages among other topics, and even break some bad habits.
- **Shared understanding** between the developers over the project's code.

## When do I conduct a code review?

Developers working on projects should conduct peer code reviews on every pull request (or check-in to a shared branch).

## How do I make code reviews effective?

Take a quick peek at [how to maximize the effectiveness of code reviews](review-effectiveness.md).

## What are common issues code reviews catch?

Though not an exhaustive list, some of the most common issues that code reviews can help catch are described [here](common-issues.md).

## Resources

- [Code review tools](tools.md)
- [Google's Engineering Practices documentation: How to do a code review](https://google.github.io/eng-practices/review/reviewer/)
- [Best Kept Secrets of Peer Code Review](https://static1.smartbear.co/smartbear/media/pdfs/best-kept-secrets-of-peer-code-review_redirected.pdf)
- [Common issues](common-issues.md)
- [Maximizing the effectiveness of code reviews](review-effectiveness.md)
- [Sample code review session agenda](sample-agenda.md)
29 changes: 29 additions & 0 deletions docs/code-reviews/common-issues.md
@@ -0,0 +1,29 @@
# Common issues

Here are some of the most common issues that code reviews can help catch:

1. **Bugs and Logical Errors**: Detecting bugs or logical mistakes that might not be evident during initial coding or testing.

2. **Code Standards Violation**: Ensuring the code adheres to established coding standards and conventions for readability and maintainability.

3. **Performance Issues**: Identifying inefficient code that could lead to performance problems, like slow execution or excessive resource usage.

4. **Security Vulnerabilities**: Spotting security flaws, such as SQL injections, buffer overflows, or improper handling of user data.

5. **Code Duplication**: Finding and reducing redundant code, which can make maintenance harder and increase the risk of bugs.

6. **Lack of Scalability**: Recognizing code that may not scale well with increased load or data volume.

7. **Poor Error Handling**: Pointing out insufficient or incorrect error handling that could lead to crashes or unpredictable behavior.

8. **Inadequate or Missing Documentation**: Ensuring that code is well-documented for ease of understanding and future maintenance.

9. **Compatibility Issues**: Checking for compatibility with different browsers, operating systems, or hardware, especially in web and mobile app development.

10. **Design Flaws**: Identifying flaws in the software design, like tightly coupled components or violation of [SOLID principles](https://www.digitalocean.com/community/conceptual_articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design), which can affect flexibility and testability.

Addressing these issues during the review process significantly improves the quality and reliability of the software.
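
The security-vulnerability item above can be made concrete in a few lines. The following hedged sketch (the `find_user_*` helpers and the toy `sqlite3` table are invented for illustration) shows the kind of SQL injection flaw a reviewer should flag, alongside the parameterized fix they would suggest:

```python
import sqlite3

# Toy in-memory database standing in for a real user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Reviewer flags this: string interpolation lets input rewrite the query.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Suggested fix: a parameterized query treats the input as data only.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

malicious = "' OR '1'='1"
leaked = find_user_unsafe(malicious)   # injection matches every row
blocked = find_user_safe(malicious)    # parameter binding matches nothing
```

Catching a flaw like this during review costs minutes; catching it in production costs far more.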

## Resources

- [SOLID Principles](https://www.digitalocean.com/community/conceptual_articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design)
2 changes: 1 addition & 1 deletion docs/code-reviews/pull-requests.md
@@ -88,6 +88,6 @@ See also [Pull Request Template](pull-request-template/pull-request-template.md)
* [Google approach to PR size](https://google.github.io/eng-practices/review/developer/small-cls.html)
* [Feature Flags](https://www.martinfowler.com/articles/feature-toggles.html)
* [Facebook approach to hidden features](https://launchdarkly.com/blog/secret-to-facebooks-hacker-engineering-culture/)
* [Azure approach to canary releases](https://learn.microsoft.com/azure/architecture/framework/devops/release-engineering-cd#stage-your-workloads)
* [Azure approach to canary releases with K8](https://github.com/MicrosoftDocs/azure-pipelines-canary-k8s)
* [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0-beta.2/)
* [Angular Commit types](https://github.com/angular/angular/blob/22b96b9/CONTRIBUTING.md#type)
2 changes: 1 addition & 1 deletion docs/code-reviews/recipes/markdown.md
@@ -48,7 +48,7 @@ A comprehensive list of markdownlint rules is available [here](https://github.co

### proselint

[`proselint`](http://proselint.com/) is a command line utility that lints the text contents of the document. It checks for jargon, spelling errors, redundancy, corporate speak and other language related issues.
`proselint` is a command line utility that lints the text contents of the document. It checks for jargon, spelling errors, redundancy, corporate speak and other language related issues.

It's available both as a [python package](https://github.com/amperser/proselint/#checks) and a [node package](https://www.npmjs.com/package/proselint).

22 changes: 22 additions & 0 deletions docs/code-reviews/resources/meeting-template.txt
@@ -0,0 +1,22 @@
SUBJECT: Code Review [PR#/Description]


Hello [TEAM],

I would like to conduct a code review on [PR #/Description/link].

Your homework before the review is to spend a short amount of time reviewing the PR.

Our Agenda for the review is as follows:

Overview of the change
Walkthrough of code changes
Review and Discussion
Action Item generation
Wrap-up and Follow-up

I’m looking forward to great collaboration and feedback!

Thank you

[AUTHOR]
34 changes: 34 additions & 0 deletions docs/code-reviews/review-effectiveness.md
@@ -0,0 +1,34 @@
# Maximizing the Effectiveness of Code Reviews

Code reviews play a crucial role in software development, providing significant advantages including enhanced code quality, knowledge sharing, and the early identification of bugs. The success of these reviews, however, is greatly influenced by the **attitudes** and **methods** adopted by the participants. The following key elements make code reviews more productive and positive experiences for everyone involved:

## Preparation is Key
The success of a code review begins long before the actual meeting. Reviewers should familiarize themselves with the code in advance, while authors need to provide context and objectives. This preparation ensures that the review session focuses on constructive feedback rather than basic understanding.

## Embracing a Collaborative Mindset
A collaborative approach is vital. Participants should approach code reviews as a [safe space](https://www.youtube.com/watch?v=IH67P7EMnt0) to collaborate, review, and improve solutions. Reviews are a team effort to enhance the overall project, not a critique of an individual’s work. Openness to feedback, respectful communication, and a willingness to learn are paramount. Remember, it’s about improving the code, not judging the coder.

## The Importance of Effective Communication
How feedback is given and received can make or break a code review session. Feedback should be specific, constructive, and focused on the code, not the person who wrote it. Clear and respectful communication fosters a positive environment and encourages learning and growth.

## Quality Over Ego
Check your ego at the door. Keeping the focus on the quality of the final product helps participants set aside personal preferences. The primary goal is to enhance the code's readability, maintainability, and functionality.

## The Review Process
A typical code review involves an overview by the author, a thorough walkthrough of the changes, discussions, and the compilation of action items. It's crucial to be detail-oriented but also keep the big picture in mind, ensuring that the code aligns with the project's overall direction. See [Sample Code Review Session Agenda](sample-agenda.md).

## Post-Review Actions
After the review, authors should act on the feedback, refining and improving the code. Follow-up reviews might be necessary for significant changes, ensuring continuous improvement and adherence to project standards.

## Conclusion
Effective code reviews are not just about catching errors; they are opportunities for team growth and learning. By preparing adequately, fostering a collaborative and respectful environment, focusing on constructive feedback, and prioritizing the project's quality, teams can make the most out of these sessions, leading to robust, efficient, and maintainable software.

## Remember
Code reviews are a continuous learning process. Encourage all team members to participate, share their knowledge, and view each session as an opportunity to improve both the code and their own skills. With the right mindset and approach, code reviews can be a valuable asset in any software development project.

## Resources

- [Best Kept Secrets of Peer Code Review](https://static1.smartbear.co/smartbear/media/pdfs/best-kept-secrets-of-peer-code-review_redirected.pdf)
- [Further reading: Google's Engineering Practices documentation - How to do a code review](https://google.github.io/eng-practices/review/reviewer/)
- [Psychological Safety in Code Reviews](https://www.youtube.com/watch?v=IH67P7EMnt0)
- [Sample code review session agenda](sample-agenda.md)
35 changes: 35 additions & 0 deletions docs/code-reviews/sample-agenda.md
@@ -0,0 +1,35 @@
# Sample code review session outline

1. **Preparation**
- Reviewers familiarize themselves with the code changes before the meeting.
- The author of the code provides context and objectives of the changes.

2. **Overview by Author**
- Brief presentation by the author explaining the main changes, the rationale behind them, and any specific areas where feedback is sought.

3. **Walkthrough of Code Changes**
- Go through the code changes file by file or feature by feature.
- The author can highlight key areas and explain the logic or approach used.

4. **Review and Discussion**
- Reviewers provide feedback on various aspects such as design, coding standards, potential bugs, optimization opportunities, and readability.
- Discussion of alternative approaches or solutions where applicable.
- If the team is large, consider focusing on more significant issues to keep the session efficient.

5. **Action Items**
- Compile a list of action items and areas for improvement identified during the review.
- Assign responsibilities for addressing each item.

6. **Wrap-up and Follow-up**
- Summarize the session and thank participants for their contributions.
- Schedule any necessary follow-up meetings or reviews for major revisions.

7. **Post-Session**
- The author addresses the feedback and makes necessary changes.
- Optional: A follow-up review for major changes.

This structure ensures a thorough, collaborative, and productive code review session.

## Resources

- [Meeting Template](resources/meeting-template.txt)
2 changes: 1 addition & 1 deletion docs/design/design-patterns/rest-api-design-guidance.md
@@ -91,6 +91,6 @@ In particular, when working in an existing API ecosystem, be sure to align with
* [Detailed HTTP status code definitions](https://www.restapitutorial.com/httpstatuscodes.html)
* [Semantic Versioning](https://semver.org/)
* [Other Public API Guidelines](http://apistylebook.com/design/guidelines/)
* [OpenAPI Design Practices](https://oai.github.io/Documentation/best-practices.html)
* [OpenAPI Design Practices](https://learn.openapis.org/best-practices.html)
* [Microsoft TypeSpec](https://github.com/Microsoft/typespec)
* [Microsoft TypeSpec GitHub Workflow samples](https://github.com/cse-labs/typespec-workflow-samples/)
2 changes: 1 addition & 1 deletion docs/observability/tools/KubernetesDashboards.md
@@ -36,4 +36,4 @@ There are currently several UI dashboards available to monitor your applications
## References

- [Alternatives to Kubernetes Dashboard](https://octopus.com/blog/alternative-kubernetes-dashboards)
- [Prometheus and Grafana with Kubernetes](https://tanzu.vmware.com/developer/guides/kubernetes/observability-prometheus-grafana-p1/)
- [Prometheus and Grafana with Kubernetes](https://www.youtube.com/watch?v=dk2-_DbWb80)