
Undirected Musings about Ethics

Further reading

Why bother?

Everyone thinks they're doing good most of the time. Ethical codes help guide that sense into alignment with the surrounding social and political context: doing good for whom, why, and with what kinds of caveats.

It's not about engineering, it's about people

An ethical code for software development should not waste too much space talking about engineering practices. Certainly there is value in getting more developers and systems people to follow good engineering practice, but an ethical code should focus on the interaction between trustworthiness, the greater good, the personal good of all the participants in the system, and software itself.

(This comes up in Ethics for Programmers, above.)

It's no good to build a wonderfully-engineered system that is cheap to run and easy to integrate with if it systematically disenfranchises and abuses its users for the benefit of its owners, and that's a problem we actually have with Facebook, GitHub, Twitter, and numerous others.

Ethical codes are fundamentally extrinsic

Ethical codes exist so that others can judge our behaviour, not so that we can judge our own behaviour.

Ethical codes must be constraining

Ethical codes do not exist in a vacuum. A code that authorizes its adherents to behave in any way they see fit, subject only to their own judgement, is no ethical code at all. We already have that, and the results have not been great.

This is important: a meaningful ethical code for software would probably cripple most software business models. An ethical code that prioritizes active consent, for example, rules out advertising and analytics almost entirely, and puts a big roadblock in front of buyouts like Instagram's. This may well be good for society.

Integrity is not about contracts or legislation

Ethics, personal integrity, and group integrity are tangled together, but modern Western conceptions of group integrity tend to revolve around “does this group break the law or engender lawsuits,” not “does this group act in the best interests of people outside of it.”

Assumptions

I've embedded some of my personal morality into the “ethics” articles in this section, in the absence of a published moral code. Those, obviously, aren't absolute, but you can reason about their validity if you assume that I believe the “end user's” privacy and active consent take priority over the technical cleverness or business value of a software system.

Consent and social software

This has some complicated downstream effects: “active consent” means something you can't handwave away by putting implied consent (for example, to future changes) in an EULA or privacy statement. I haven't written much that calls out this pattern because it's pervasive.

The “end user is the real product” business model that most social networks operate on is fundamentally unethical under this code. It will always be more valuable to the “real customers” (advertisers, analytics platforms, law enforcement, and intelligence agencies) for users to be opted into new measurements by default, assuming consent rather than obtaining it.