Releases: ctdk/goiardi
v0.7.2-pre1
A yak-shaving pre-release. It removes an extra newline from a debug logging statement, courtesy of @spheromak, and addresses some potential race conditions uncovered by the -race build flag, also suggested by @spheromak. So, thanks for that.
v0.7.1 - Constant Manatee Attacks
A small release with two new features, fixing relatively minor issues that nonetheless deserved attention:
- Add --db-pool-size and --max-connections options for configuring the number
  of idle db connections kept around and the maximum number of db connections
  to make to the server. These options aren't particularly useful unless you're
  using one of the SQL backends.
- For locally stored cookbook files (which is currently all of them), goiardi
  now generates the URL to the resource from the currently configured
  hostname. This fixes an issue where, if you uploaded a cookbook and then
  changed the goiardi server's hostname, the URLs to download cookbooks would
  break.
v0.7.0 - Orphans of the Sky
This release is both big and small. It only adds one big new feature, along with a few bug fixes, but for reasons I detail at http://ctdk.github.io/goiardi/blog/2014/07/21/goiardi-version-0-dot-7-0-orphans-of-the-sky/ this release introduces a breaking change in the way complex structures are stored in the database. Read the README and release notes for how to upgrade to this release. That said, this is what is in this release:
- Add /universe API endpoint, per
  https://github.com/opscode/chef-rfc/blob/master/rfc014-universe-endpoint.md.
- Make file uploading a little more forgiving.
- Make validating some cookbook metadata more forgiving, to bring goiardi's
  validations in line with erchef.
- Added some functions to make listing all cookbooks and recipes on the
  server faster, and moved that logic into the cookbook package.
- Breaking DB change: with both MySQL and Postgres, the way data structures
  for cookbooks, nodes, etc. are stored has changed from gob encoding to JSON.
  This obviously breaks existing items in the database, so users of either SQL
  backend for data storage must follow these steps:
  - Export the goiardi server's data with the `-x` flag.
  - Either revert all changes to the db with sqitch and redeploy, or drop
    the database manually and recreate it from either the sqitch patches or
    the full table dump of the release (provided starting with 0.7.0).
  - Reload the goiardi data with the `-m` flag.

  See the README or the godocs for more information.
For the first time, if you don't feel like mucking about with sqitch, there are schema dumps for both MySQL and Postgres provided in the sql-files/ directory.
I think this release is worth the breaking change, but please, please, if you are using MySQL or Postgres as a data store, read the notes carefully before upgrading so stuff doesn't break all over you.
Once again, binaries for select architectures are provided for your convenience. They come with no guarantees, but are believed to work. If you don't see your preferred platform here, you'll need to build it yourself. If you want to run goiardi on an architecture that isn't supported by the native Go compiler, you may possibly have success with gccgo.
v0.7.0-pre1 - Please, please read the notes for this prerelease
Adding the universe endpoint ended up being a waaaaay bigger thing than I expected. In order to get universe to work in a timely fashion when all of the cookbooks in supermarket were loaded in, I had to change the encoding used in the database to store complex structures like attributes, metadata, and so forth for things like cookbooks and nodes from gob to JSON. Gob is usually faster than JSON, but in this specific case it isn't. The extreme pain is at least limited to anyone using an SQL backend for this. The README has more information on this change, but the salient part is in this quote from the CHANGELOG:
Breaking DB change: with both MySQL and Postgres, the way data structures
for cookbooks, nodes, etc. has changed from gob encoding to using JSON. This
obviously breaks existing items in the database, so the following steps must
be followed by users using either SQL backend for data storage:
* Export their goiardi server's data with the `-x` flag.
* Either revert all changes to the db with sqitch, then redeploy, or drop
the database manually and recreate it from either the sqitch patches or
the full table dump of the release (provided starting with 0.7.0).
* Reload the goiardi data with the `-m` flag.
See the README or the godocs for more information.
That said, it does run better now after this change. Postgres gets more benefits than MySQL does, because it has that JSON type and handy functions for working with JSON, but running goiardi with MySQL gets a boost as well.
v0.6.1-pre1
The first pre-release of goiardi v0.6.1. This update adds the /universe
berks-api endpoint to goiardi that's being discussed for chef-server right now over at chef-boneyard/chef-rfc#19. Assuming all goes well, a formal release should follow soon.
A selection of precompiled binaries are also provided for this release. If you don't see your platform there, you'll need to build it yourself.
v0.6.0 - Order of the Elephant
There's a lot of stuff in this release. The biggest is Postgres joining MySQL as a supported backend database, but there's a lot of other exciting bug fixes and features here. Note that if you use the in-memory data store and save the data store to disk, this update will break save file compatibility. Export your data before upgrading, then re-import it. You may wish to back up the data and index files before upgrading as well.
- Postgres support.
- Fix rebuilding indexes with an SQL backend.
- Fix a bug where, in MySQL mode, events were being logged twice.
- Fix an annoying chef-pedant error with data bags.
- Event logging methods that are not allowed now return Method Not Allowed
  rather than Bad Request.
- Switch the logger to a fork that can be built and used on Windows, which
  excludes syslog when building for Windows.
- Add basic syslog support.
- Authentication protocol version 1.2 now supported.
- Add a 'status' param to reporting, so a list of reports returned by 'knife
  runs' can be narrowed by the status of the chef run (started, success, and
  failure).
- Fix an action-at-a-distance problem with in-memory mode objects. If the old
  behavior is still desirable (it seems to be slightly faster than the new
  way), it can be turned back on with the --use-unsafe-mem-store flag. This
  change DEFINITELY breaks in-mem data file compatibility. If upgrading,
  export your data, upgrade goiardi, and reload your data.
- Add several new searchable parameters for logged events.
- Add organization_id to all MySQL tables that might need it someday. Orgs are
  not used at all, so only the default value of 1 currently makes it to the
  database.
- Finally ran 'go fmt' on goiardi. It didn't even mess up the long comment
  blocks, which was what I was afraid it would do. I also ran golint against
  goiardi and took its recommendations where they made sense, which was most
  areas except for some involving generated parser code, comments on
  GobEncode/Decode, commenting a bunch of identical functions on an interface
  in search, and a couple of cases involving make and slices. All in all,
  though, the reformatting, linting, and light refactoring has done it good.
Binaries for select architectures are provided for your convenience. They come with no guarantees, but are believed to work. If you don't see your preferred platform here, you'll need to build it yourself. If you want to run goiardi on an architecture that isn't supported by the native Go compiler, you may possibly have success with gccgo.
v0.6.0-pre1
See the CHANGELOG at https://github.com/ctdk/goiardi/blob/0.6.0-prerelease/CHANGELOG for a list of new features and fixes that are coming in the 0.6.0 release. This pre-release includes a selection of preview binaries as well, although not all for all the platforms that goiardi can be built for. They come with no particular guarantees, but should work.
v0.5.2 - Block of Dirt
An incremental release, but with some good stuff:
- Add import/export of goiardi data through a JSON dump.
- Add configuration options to specify the max sizes for objects uploaded to the filestore and for JSON requests from the client.
Also, by popular request, precompiled binaries are being provided for this release. If interested, pick the binary that looks most appropriate for your architecture and download it. At this time Linux, Mac OS X, FreeBSD, and Illumos binaries are provided. They were built on Debian wheezy, Mavericks, FreeBSD 9.2, and r151008j (i386) respectively, but may work on other similar platforms as well.
v0.5.2-pre3: Forgot toml tags for config file for the max size options, and updated the sample config file.
v0.5.2-pre2
Update docs with new obj/req size options