KAFKA-3851: Automate release notes and include links to upgrade notes for release and most recent docs to forward users of older releases to newest docs. #1670

Closed · wants to merge 2 commits
Changes from 1 commit
67 changes: 67 additions & 0 deletions release_notes.py
@@ -0,0 +1,67 @@
#!/usr/bin/env python

from jira import JIRA
import itertools, sys

Contributor:

Could we add some comments on what the tool does?

Contributor Author:

Added a docstring to the top of the script.
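For reference, a minimal sketch of what such a top-of-file docstring might say, based on the script's usage message and behavior (the exact wording added in the second commit isn't shown in this diff):

```python
"""
Generate HTML release notes for a Kafka release by querying the Apache JIRA
instance for every issue with the given fix version.

Usage: release_notes.py <version>

The HTML is written to stdout; errors (no issues found for the fix version,
unresolved issues) are reported on stderr.
"""
```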

if len(sys.argv) < 2:
    print >>sys.stderr, "Usage: release_notes.py <version>"
    sys.exit(1)

version = sys.argv[1]
minor_version_dotless = "".join(version.split(".")[:3]) # e.g., "0100" if version == "0.10.0.1" (the 0.10.0 minor version without dots)

JIRA_BASE_URL = 'https://issues.apache.org/jira'
MAX_RESULTS = 100 # Page size; cloud JIRA instances cap this, so we pin it to a value that works everywhere

def get_issues(jira, query, **kwargs):
    """
    Get all issues matching the JQL query from the JIRA instance. This handles expanding paginated
    results for you. Any additional keyword arguments are forwarded to the JIRA.search_issues call.
    """
    results = []
    startAt = 0
    new_results = None
    while new_results == None or len(new_results) == MAX_RESULTS:
        new_results = jira.search_issues(query, startAt=startAt, maxResults=MAX_RESULTS, **kwargs)
        results += new_results
        startAt += len(new_results)
    return results

def issue_link(issue):
    return "%s/browse/%s" % (JIRA_BASE_URL, issue.key)


if __name__ == "__main__":
    apache = JIRA(JIRA_BASE_URL)
    issues = get_issues(apache, 'project=KAFKA and fixVersion=%s' % version)
    if not issues:
        print >>sys.stderr, "Didn't find any issues for the target fix version"
        sys.exit(1)

    unresolved_issues = [issue for issue in issues if issue.fields.resolution is None]
    if unresolved_issues:
        for issue in unresolved_issues:
Contributor:

I think it would be good to include a message giving context before we start listing unresolved issues.

Contributor Author:

Added
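A sketch of the kind of context line that could go in front of the per-issue listing; the actual wording lands in the second commit and isn't shown in this diff:

```python
# Hypothetical wording for the heads-up printed before the unresolved issues are listed
print >>sys.stderr, "Found %d unresolved issues targeted for release %s:" % (len(unresolved_issues), version)
```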

print >>sys.stderr, "Unresolved issue: %s %s" % (issue.key, issue_link(issue))
sys.exit(1)

    # Get list of (issue type, [issues]) tuples sorted by the issue type's ID, with each subset of issues
    # sorted by their key so they are in increasing order of bug #
Contributor:

Would it make sense to list features and improvements before the rest?

Contributor Author:

To be honest, I considered removing the categories altogether. I actually can't tell if the ordering from JIRA is meaningful. It doesn't correspond to alphabetical, issue type ID, or anything else I could figure out. And unfortunately since JIRA is super configurable, you can't rely on the exact set of issue types.

I've changed the sorting so we'll prioritize the two you mentioned and the rest go in the order of their issue type IDs. We can add further customization if we want to refine the ordering of other types too.
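A sketch of one way to express that ordering, assuming the JIRA issue type names "New Feature" and "Improvement" (the exact sort key used in the follow-up commit isn't shown in this diff):

```python
def issue_type_sort_key(issue):
    # Hypothetical helper: list New Feature and Improvement first,
    # then everything else in order of its issue type ID.
    priority = {"New Feature": 0, "Improvement": 1}
    itype = issue.fields.issuetype
    return (priority.get(itype.name, 2), itype.id)

# itertools.groupby only merges adjacent items, so the issues must be
# pre-sorted by the same key before grouping.
sorted_issues = sorted(issues, key=issue_type_sort_key)
```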

    by_group = [(k, sorted(g, key=lambda issue: issue.id))
                for k, g in itertools.groupby(sorted(issues, key=lambda issue: issue.fields.issuetype.id),
                                              lambda issue: issue.fields.issuetype.name)]

print "<h1>Release Notes - Kafka - Version %s</h1>" % version
print """<p>Below is a summary of the JIRA issues addressed in the %(version)s release of Kafka. For full documentation of the
release, a guide to get started, and information about the project, see the <a href="http://kafka.apache.org/">Kafka
project site</a>.</p>

<p><b>Note about upgrades:</b> Please carefully review the
<a href="http://kafka.apache.org/%(minor)s/documentation.html#upgrade">upgrade documentation</a> for this release thoroughly
before upgrading your cluster. The upgrade notes discuss any critical information about incompatibilities and breaking
changes, performance changes, and any other changes that might impact your production deployment of Kafka.</p>

<p>The documentation for the most recent release can be found at
<a href="http://kafka.apache.org/documentation.html">http://kafka.apache.org/documentation.html</a>.</p>""" % { 'version': version, 'minor': minor_version_dotless }
for itype, issues in by_group:
print "<h2>%s</h2>" % itype
print "<ul>"
for issue in issues:
print '<li>[<a href="%(link)s">%(key)s</a>] - %(summary)s</li>' % {'key': issue.key, 'link': issue_link(issue), 'summary': issue.fields.summary}
print "</ul>"