[Request for discussion] How OMB should measure impact of its pilot #118
(I’m Eric, an engineer at 18F, an office in the U.S. General Services Administration (GSA) that provides in-house digital services consulting for the federal government. I’m commenting on behalf of 18F; we’re an open source team and happy to share our thoughts and experiences. This comment represents only the views of 18F, not necessarily those of the GSA or its Chief Information Officer.)
It'd be helpful to hear more from OMB about how it wants to evaluate the success of the pilot program. Quantitative metrics for many of the things worth measuring are going to be challenging to define.
To truly measure the effects of the pilot, OMB will need to spend time looking at the context, code, and cost specific to any projects it wishes to analyze. That's not something OMB can do by scanning the public internet, or through the software inventories OMB is asking agencies to produce. The best approach might be to commission a report, from GSA or a cross-agency team, that is empowered (and funded) to work with agencies directly and evaluate the situations that OMB's open source pilot made possible.
These are our initial reactions, and we'd love to hear from others about the approach OMB might take to evaluation metrics.
It's impossible to measure the cost-saving or innovative effects of the policy's most impactful potential outcome: the creation of an entirely new, disruptive technology ecosystem.
Open source in government is about much more than efficiency, shipping better code, or engaging the public more openly. Open source is about spurring innovation ecosystems, public/private marketplaces of scientific and engineering ideas, the likes of which were last seen during the space race. Think space pens are cool? Wait until you see what open source has to offer.
The U.S. federal government is the single largest purchaser of code in the world. Imagine if, every year, those eleventy billion dollars went not to purpose-built, closed source solutions, but to the many open source projects that government, you, and I use on a daily basis: the ones that already underpin our economy, from small-business websites to multinational corporations' internal systems. Imagine if the size and talent of the open source contributor pool doubled overnight.
Private-sector firms like Coke and Pepsi may have a valid reason to shy away from open source in some cases: for core business logic, a dollar spent on open source is a dollar your competitor doesn't need to spend to solve the same problem. But government has no competitor, at least not when it comes to efficient regulation or delivering citizen services. There's no bottom line to hurt, no rival to outsmart.
At the same time, the types of challenges faced by agencies don't differ much from agency to agency. A FOIA request is a FOIA request. A blog post is a blog post. When the Department of State creates a mechanism for publishing press releases, and the Department of Education uses it, all of a sudden the taxpayer dollar goes twice as far. We just got a 100% return on investment that we would not have otherwise gotten. We're solving the problem once, and solving it everywhere, rather than solving it multiple times, all at the taxpayer's expense.
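The reuse arithmetic above can be made concrete with a toy model (the numbers and function name here are purely illustrative assumptions, not actual agency figures): one agency pays to build a tool, and each additional agency that reuses it gets the same functionality at roughly zero marginal cost.

```python
def reuse_roi(build_cost, agencies_using):
    """Return (total value delivered, ROI) when `agencies_using`
    agencies each get `build_cost` worth of functionality from a
    single shared build, assuming negligible reuse cost."""
    value_delivered = build_cost * agencies_using
    roi = (value_delivered - build_cost) / build_cost
    return value_delivered, roi

# Two agencies sharing one press-release tool: the taxpayer dollar
# goes twice as far, a 100% return on the original investment.
value, roi = reuse_roi(build_cost=1_000_000, agencies_using=2)
print(value, roi)  # 2000000 1.0
```

The simplification, of course, is that real reuse has nonzero integration cost; the point is only that each additional adopter pushes the return well past what a single-agency, closed build could deliver.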
Why, then, is the vast majority of government code, code that could potentially benefit both other agencies and the general public, built primarily on proprietary platforms? Why is such code, by habit, almost always hidden from other agencies and from American taxpayers? A shift here would be impossible to measure, but it would have a profound effect on both the public and private technology and innovation ecosystems.
I'd also like to add to @konklone's comment that open source produces many intangible benefits (goodwill, transparency, accountability, public confidence, civic engagement) that cannot easily be captured by quantitative metrics.
Additionally, many of open source's quantitative metrics may be facially misleading. For example, if the experiment group has, on paper, more bugs than the control group, that does not necessarily mean its software was of lesser quality. More likely, with more developers reviewing the code, and with the code being used in varied environments, more bugs were simply discovered, even if the control group had just as many flaws, or more, left undocumented due to its proprietary and secretive nature.
For #1, it'd be nice if the system could mint DOIs, so we could use it for software citation: give us a document to reference in the scientific literature to acknowledge software used in the research.
(Insert the usual disclaimer here about these being personal comments, and not those of the agency I work for.)