German Government Domains

An incomplete listing of German government domains (and the code for the scraper used to build the list).

You can download the list as a .csv file or view it with GitHub's pretty formatting.

We try to use the same format as the US GSA (example), so the CSV file has the header `Domain Name,Domain Type,Agency,City,State` and currently contains government agencies and cities.
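With that header, the file can be filtered directly with Ruby's standard CSV library. A minimal sketch (the sample row below is made up for illustration and is not taken from the actual list):

```ruby
require "csv"

# Hypothetical sample in the GSA-style format described above.
sample = <<~CSV
  Domain Name,Domain Type,Agency,City,State
  bund.de,Government Agency,Presse- und Informationsamt der Bundesregierung,Berlin,BE
CSV

rows = CSV.parse(sample, headers: true)

# Select only rows of a given Domain Type, then print their domain names.
agencies = rows.select { |row| row["Domain Type"] == "Government Agency" }
agencies.each { |row| puts row["Domain Name"] }
# prints "bund.de"
```

The same pattern works against the real file via `CSV.read("data/domains.csv", headers: true)`.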

Variants

If you only want a subset of the available data, variants filtered by Domain Type are provided:

Why?

There currently isn't a publicly available directory of all the domain names registered by the German government and its agencies. Such a directory would be useful for people looking to get an aggregate view of government websites and how they are hosted. For example, Ben Balter has been doing some great work analyzing the official set of US .gov domains.

This is by no means an official or a complete list. It is intended to be a first step toward a better understanding of how the government is managing its official sites.

What can I do with it?

  • Plug the CSV into 18F/domain-scan to get more data (like HTTPS support) about the domains
  • Check the IPv6 reachability
  • Test if the sites are reachable even without the www. subdomain
  • ...?

How to update

The list is populated by scrapers and static files and merged by a Makefile. To run the process yourself, check out this repository and run:

```
bundle install
make
```

After everything has run, you can look into data/domains.csv.

Scrapers and Sources

Contributing

I'd love to have some help with this! Please feel free to create an issue or submit a pull request if you notice something that could be improved. Specifically, it would be very helpful to suggest additional pages we can scrape, as well as domains that are either missing from the list or have incorrect organization names associated with them.

Ideas

Thanks

Thanks to @esonderegger for the dotmil domains project that served as a template for this repo.