Brewhouse Engineering Principles and Practices
Table of Contents
- Brewhouse Engineering Principles
- Brewhouse Engineering Practices
Brewhouse Engineering Principles
We write code for our peers
We care about the person who will have to read, debug or extend our code. We want them to have a good time. We take extra time to turn "code that works" into code that’s clean, explicit, simple, robust, confident and tested… obviously!
We trust our peers
We trust our peers to make the best decisions and do the best work they can.
We experiment (bleeding edge)
We experiment with shiny new technologies, tools and languages. We challenge ourselves, get out of our comfort zone and expand our horizons. That's what Friday times are for.
We ship good, old, stable
We ship software built on top of mainstream languages and tools that are stable and should be supported for years to come.
We know where engineering fits in the big picture and do our best to collaborate with product owners, designers and customers. We explain our constraints in words anyone can understand.
We make pragmatic decisions
When we evaluate decisions, we take into account the requirements, unknowns, experience, team, and deadlines. We know the tradeoffs implied by the decisions we make.
We take time to reflect on our practice and improve.
We acknowledge that we stand on the shoulders of giants. We give back by contributing to open source. We share knowledge on our blog and on the Brewhouse Software Engineering Guidelines document.
Brewhouse Engineering Practices
The goal of a commit message is to help future developers find out why a particular change was made to the code. We write commit messages that explain Why we made a change; we can look at the diff to know What the change is about.
- Not good: "Made the project container an inline-block"
- Good: "Fix project cards layout"
A commit is structured like this:
```
Short (50 chars or less) summary of changes

More detailed explanatory text, if necessary.

* Bullet points are okay, too.
```
You can read more in "How to Write a Git Commit Message".
We prefix our branches with one of the following keywords:
- `feature/` for new features
- `design/` for UI changes or design spikes (a.k.a. prototypes)
- `tweak/` for other changes such as refactorings, improvements, ...
We turn a `design` branch into a `feature` one when we turn a prototype into an actual feature.
We rebase remote changes to prevent local merges from polluting the history (such as `Merge branch 'master' of github.com:BrewhouseTeam/goodbits`). Use `git pull --rebase`, and add the following to your `~/.gitconfig` to do it automagically on most branches:

```
[branch]
  autosetuprebase = always
```
We set up our text editors to remove trailing whitespace. This helps keep commit diffs clean.
We prefer Clear Code over Comments:
- we use Good names that reflect the intent of a class or a method
- we extract complex code into methods or local variables with meaningful names
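For instance, here is a minimal sketch of trading an explanatory comment for an intent-revealing method (the names and the `Struct` stand-ins are hypothetical, not from our codebases):

```ruby
# Hypothetical stand-ins so the example runs on its own
User = Struct.new(:verified) do
  def verified?
    verified
  end
end
Order = Struct.new(:total, :paid_at, :user)

# Instead of an inline boolean soup (plus a comment explaining it),
# the condition gets a name that states the intent.
def payable?(order)
  order.total > 0 && order.paid_at.nil? && order.user.verified?
end
```

A call site then reads `charge(order) if payable?(order)`, and no comment is needed.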
We order public methods with Requests first (`full_name`, …), then Commands (…).
We order private methods so that each one only calls methods written below it. This makes a method easy to find and prevents accidental infinite recursion.
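A small sketch of that ordering (class and method names are made up; the mailer is a stand-in so the example runs on its own):

```ruby
class ReportMailer
  # The public entry point sits at the top...
  def deliver
    send_email(build_body)
  end

  private

  # ...and each private method only calls methods defined below it,
  # so the class reads top to bottom like a story.
  def build_body
    "Report: #{report_lines.join("\n")}"
  end

  def report_lines
    ["line 1", "line 2"]
  end

  def send_email(body)
    body # stand-in: a real mailer would deliver here
  end
end
```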
A couple of rules:
- Any code change should go through a Pull Request.
- A pull request gives a high level overview of what has been accomplished.
- A picture is worth a thousand words, so we add a screenshot if we've added a new feature or changed the UI. An animated GIF is worth a million words, so we showcase new UI interactions with one! (LiceCap does the job.)
- We review our own changes before creating a pull request: Is there code that's commented out? Are there tests? ...
- When we are not done but we want some feedback, we prefix the PR title with "[WIP]"
- Most pull requests should be reviewed and merged by another team member.
- When we make a risky change (new database migrations, new integration with a third-party service, ...), the author should monitor the deploy to production.
Things to look at when reviewing a Pull Request:
- Is the code clear and robust?
- Is it well tested?
- Are the commits well written? Should they be squashed?
We want honest and good enough code to be shipped. We don't want to spend too many cycles making the code Perfect™.
We use Circle-CI for continuous integration. The master branch is set up to deploy to a staging server on Heroku.
We use Heroku to serve client apps.
The brewhouse-rails-template has a good basic setup.
We use the following addons:
- rollbar for error reporting
- sendgrid to send emails
- newrelic for performance monitoring
- papertrail to get access to the logs
- heroku-redis for sidekiq
We use Cloudflare to get DNS + CDN + free SSL cert.
We schedule `pg:backups`:

```
heroku pg:backups schedule --at '04:00 UTC'
```
Ruby on Rails
We follow the Airbnb Ruby Styleguide.
We bootstrap apps using our rails template: brewhouse-rails-template.
See Gourmet Service Objects for an introduction.
Whenever an action becomes complex, we extract it from our controller or model and wrap it into a Service.
We namespace services to keep them well organized:
```ruby
# Bad
CreateUser

# Good
User::Create
```
Most services use the base service mixin. They favor raising exceptions on failure over returning error values.
Whenever we can we decouple services from ActiveRecord to make them more "functional". They take primitives in and return primitives back. That follows the "functional core & imperative shell" approach. Your service won't deal with persistence and will be super easy to test!
```ruby
# Not so good
class Post < ActiveRecord::Base
  def refresh_view_stats!
    Post::RefreshViewStats.call(post: self)
  end
end

# Great!
class Post < ActiveRecord::Base
  def refresh_view_stats!
    update_attribute(:view, Post::GetViewStats.call(post_public_id: public_id))
  end
end
```
Controller actions are responsible for:
- massaging params to pass a clean set of attributes to a service
- calling a service
- redirecting / rendering
Controllers also take care of authentication and authorization.
Authorization via ActiveRecord associations is good enough in most cases.
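As a plain-Ruby sketch of the idea (no Rails, hypothetical names): fetching a record through the owner's collection makes other users' records unreachable, which is authorization enough for simple cases.

```ruby
NotFound = Class.new(StandardError)
Project  = Struct.new(:id, :owner_id)

class User
  def initialize(id, all_projects)
    @id = id
    @all = all_projects
  end

  # The Rails equivalent would be current_user.projects.find(params[:id]),
  # which raises ActiveRecord::RecordNotFound for someone else's project.
  def find_project(project_id)
    @all.find { |p| p.owner_id == @id && p.id == project_id } or raise NotFound
  end
end
```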
We use Pundit when fine-grained authorization is required.
We create a `LoggedInController` that ensures a user is authenticated. Most controllers inherit from it.
Models are responsible for associations, scopes, validations and data consistency.
A given model is organized in the following order:
- Queries (ex: …)
- Commands (ex: …)
Services and Controllers should not call "where" and "order" methods. Models should provide them with an API that's abstracted from the data-layer:
```ruby
# bad
User.where("created_at > ?", 2.days.ago).order("created_at DESC")

# good
User.created_after(2.days.ago).order_desc
```
We use `has_many through:` instead of `has_and_belongs_to_many`, and define a model for the join table. This gives you access to the join table and lets you add attributes to it if needed.
Callbacks should only be used to ensure data integrity (i.e. keeping a `comments_count` cache value up to date, touching records, …). They should not be used for business logic.
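Here is a plain-Ruby sketch (hypothetical names, no Rails) of a callback that only maintains derived data, the kind the guideline allows:

```ruby
# The callback keeps a cached count in sync -- data integrity only,
# no business logic (no emails, no billing) hiding inside it.
class Post
  attr_reader :comments_count

  def initialize
    @comments = []
    @comments_count = 0
  end

  def add_comment(comment)
    @comments << comment
    refresh_comments_count # the "callback": derived data stays consistent
  end

  private

  def refresh_comments_count
    @comments_count = @comments.size
  end
end
```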
Encrypt api and oauth tokens using attr_encrypted.
Use ERB because our designers are not big fans of Slim / Haml. :)
Use Page Objects when views become a bit complex. Page Object methods tell the view what to hide or display.
```ruby
# Bad
@page.profile_completed?

# Good
@page.display_progress_bar?
```
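A page object along those lines might look like this sketch (hypothetical names; the `Struct` is a stand-in for a real model):

```ruby
ProfileUser = Struct.new(:profile_completed)

# The view asks display_progress_bar?; the "is the profile complete?"
# reasoning lives here, not in the template.
class ProfilePage
  def initialize(user)
    @user = user
  end

  def display_progress_bar?
    !@user.profile_completed
  end
end
```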
Confident Ruby - Robust code
To be confident that the app works, make things easy to debug, and ensure data integrity:
Don't fail silently. Use `case ... when ... else raise "..."` whenever the call should not fail.
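A sketch of the pattern (the plan names are made up):

```ruby
# The else branch turns an unexpected value into a loud failure
# instead of silently returning nil.
def subscription_price(plan)
  case plan
  when :starter then 10
  when :pro     then 30
  else raise ArgumentError, "unknown plan: #{plan.inspect}"
  end
end
```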
Inconsistent data (missing fields, duplicate records) leads to bugs that are hard to track down. The following practices don’t require much extra time and can save you from painful debugging sessions:
- wrap multiple create / updates in a SQL transaction
- use foreign keys to prevent orphan records (add `schema_auto_foreign_keys` to your Gemfile!)
- use `null: false` on fields that should not be empty. This can save you from writing `validates_presence_of` constraints for models which are not edited directly by users. It is also pretty good as documentation right in the schema.
- use an index with `unique: true` on unique fields or combinations of fields.
- use a unique index on `has_many through:` join tables. Rails does not double-check that an association does not already exist when creating it, so you can easily get multiple associations between the same objects in a join table.
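Combined, a migration applying these practices might look like this sketch (table and column names are made up, and it assumes the `schema_auto_foreign_keys` gem adds the foreign keys):

```ruby
class CreateMemberships < ActiveRecord::Migration
  def change
    create_table :memberships do |t|
      t.references :user, null: false # fields that must not be empty
      t.references :team, null: false
      t.timestamps null: false
    end

    # Rails won't stop duplicate associations; a unique index will.
    add_index :memberships, [:user_id, :team_id], unique: true
  end
end
```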
Also see this article: Five practices for Robust Ruby on Rails applications
With RSpec, we mostly test models, services and api interaction. Cucumber scenarios are likely to cover the thin controller layer. Once in a while, you might need to test a complex controller, go for it!
We use FactoryGirl to create models without trying to go too far with it. Each factory should be valid on its own. If you need some crazy custom setup, call Services from your specs to create stuff.
We use `describe` blocks to describe the class, method or behaviour. Ex:

```ruby
describe Subscriber::Add
describe "#disable!"
describe "sending via MailChimp"
describe ".active"
```
Instance methods are prefixed with a `#`, class methods are prefixed with a `.`.
`context` descriptions should start with the keyword "when". Ex:

```ruby
context "when email is invalid"
context "when CSV has no content"
```
We use `rspec-set` to speed up the RSpec test suite! Be well aware that `rspec-set` leaves orphan records in the database, so most of your tests should not assume that the database is empty. It's sometimes tricky to test scopes against a non-empty database; feel free to truncate the database in a `before(:all)` block if needed.
Write accurate tests to prevent false positives:
```ruby
# Bad
expect(campaign.active.count).to eq 2

# Good
expect(campaign.active).to match_array([active_campaign_1, active_campaign_2])
```
Cucumber scenarios should be high level and contain a few (5-8) steps. They describe a workflow, not UI interactions. Doing so, we spend more time in step definitions (read: Capybara & Ruby) and less in step-definition and Cucumber pattern-matching hell.
Define steps with strings rather than regex when possible:
```ruby
Given "I am logged in" do
  # ...
end

Then %|I should be logged in as "$name"| do |name|
  # ...
end
```
Given steps should rely on factories and services to set up a context. Going through the UI or reusing other steps are likely to slow things down.
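For example, a Given step that builds its context through a factory instead of driving the UI (a sketch; the factory name is hypothetical):

```ruby
Given %|a subscriber with the email "$email"| do |email|
  @subscriber = FactoryGirl.create(:subscriber, email: email)
end
```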
By order of preference (because of speed), use … (`capybara-selenium` is slow!)
We use rake tasks to trigger recurring jobs.
`rake -T` lists about 40 tasks by default, so we namespace all the application tasks under the `app` namespace to make them easy to find.
We use Heroku Scheduler to run tasks every 10 minutes, hourly and daily. To make tasks easy to update, we create three generic tasks (`app:cron:every_10_minutes`, …) and call the other tasks from within those. We set up Heroku Scheduler to call these generic tasks.
To prevent a failing task from blocking subsequent ones, each rake task should only queue up a job that Sidekiq will happily take care of.
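A sketch of the pattern (task and worker names are made up; the worker class is a stand-in so the example runs outside Rails and Sidekiq):

```ruby
require "rake"
extend Rake::DSL

# Stand-in for a Sidekiq worker: in the real app this would
# `include Sidekiq::Worker` and perform_async would push to Redis.
class RefreshStatsWorker
  def self.jobs
    @jobs ||= []
  end

  def self.perform_async
    jobs << :queued
  end
end

namespace :app do
  namespace :cron do
    # The task only enqueues; if the job later fails, it fails in Sidekiq
    # (with retries) and does not block the other scheduled tasks.
    task :every_10_minutes do
      RefreshStatsWorker.perform_async
    end
  end
end
```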
We have a good experience with jsonapi-resources. It provides a good framework to build a consistent api. Pair it with rspec-api-documentation and apitome and you’ve got a pretty good setup that makes your mobile friends happy. :)
Authentication has been done so far using Basic Auth (the "Authorization" header with login:password base64-encoded).
Rails and JS
A couple of good practices here to keep things simple and robust:
- If you don't inject selectors for some reason, prefix classes with `js-` or use data attributes.
- Keep Rails remote (`.js.erb`) responses simple: they should just set or append HTML content rendered from a partial.
Thanks for reading!
Feel free to create issues and pull-requests to share your feedback or improve this document.
This content is licensed under the terms of the MIT license.