Releases: broadinstitute/sparklespray

Release 4.2.1

07 Feb 19:47 · @pgm
  • Bug fix: "preemptible=n" was being ignored

Release 4.1.0

30 Oct 13:51 · @pgm

Two noteworthy improvements:

  1. Previously, submitting a job that required many files to be localized gave users the impression that something was broken, even though everything was working as it should. Specifically, if many files needed to be downloaded before the job started, you'd see a periodic error message like the following:
[09:30:37] [starting tail of log sample.1]
2023-10-30 09:30:42,842 Got error polling log. shutting down log watch: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNKNOWN
	details = "unknown task: sample.1 (Known tasks: )"
	debug_error_string = "{"created":"@1698672642.841831440","description":"Error received from peer ipv4:35.196.161.124:6032","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"unknown task: sample.1 (Known tasks: )","grpc_status":2}"

This message would repeat until the downloads completed and the job started in earnest. While the error looks concerning, it's actually a harmless warning. Also, users had no visibility into why the job was taking so long to start, because they got no feedback other than this warning.

In this release, the error message is gone; instead, users are shown a message like the following while the downloads proceed:

[09:33:16] [starting tail of log sample-job.1]
[09:33:18] sparkles: Downloading 212 files...
  2. When submitting a job with multiple tasks (via --params or --seq), I've often wished for an estimate of how long it will take to complete. Sparkles now estimates the completion rate once it has seen at least 10 tasks finish, and uses that rate to estimate how long the remaining tasks will take (see the sketch below). Note: this estimate is only reasonable when tasks are queued up (i.e. many more tasks than workers), but I thought a crude estimate was better than nothing.
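
The following is a minimal sketch of the heuristic described in item 2, assuming a simple observed-rate calculation; the function name, threshold constant, and signature are illustrative and are not sparkles' actual code.

# Sketch of the ETA heuristic: once at least 10 tasks have completed,
# project the remaining time from the observed completion rate.
import time
from typing import Optional

MIN_COMPLETED_FOR_ESTIMATE = 10  # threshold taken from the release note

def estimate_remaining_seconds(start_time: float, completed: int, total: int) -> Optional[float]:
    # Too few completions to make a reasonable guess.
    if completed < MIN_COMPLETED_FOR_ESTIMATE:
        return None
    elapsed = time.time() - start_time
    if elapsed <= 0:
        return None
    rate = completed / elapsed        # tasks completed per second so far
    remaining = total - completed
    # Only sensible when tasks are queued up (many more tasks than workers),
    # since the observed rate then reflects sustained worker throughput.
    return remaining / rate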

Download version 4.1.0

Release 4.0.4

18 Oct 20:49 · @pgm

Bug fixes to sparkles kill stemming from the switch to the Life Sciences API.

Also silenced warnings coming from the datastore library.

Release 4.0.3

18 Sep 02:58 · @pgm

Bug fix for sparkles setup: it was still granting the service account access to the old Genomics API instead of the new Life Sciences API.

Release 4.0.2

16 Aug 19:46 · @pgm

Bug fix for a Google error about the projectID in the payload.

Release 4.0.0

11 Jul 13:49 · @pgm
Pre-release
  • Sparkles now uses Google's Life Sciences API (the previous Genomics Pipeline API is deprecated and I've heard it will be shut off "soon")
  • A large amount of cleanup and reorganization
  • The type and size of the data volume can now be configured

Download Release 4.0.0

Release 3.18.0

19 Apr 16:40 · @pgm
  • Adds new config parameter "max_preemptable_attempts_scale" (see the config sketch below)
  • Reworked the "too many node failures" exception to be more robust
  • Added support for using persistent disks instead of local-ssd (WIP)
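
As a rough illustration, here is how the new parameter might appear in a sparkles config file. This is a sketch under assumptions: the key=value style matches the preemptible=y/preemptible=n settings mentioned elsewhere in these notes, but the [config] section header is an assumption and the value shown is purely illustrative.

# Hypothetical config sketch; only max_preemptable_attempts_scale is the
# parameter added in this release, and its value here is arbitrary.
[config]
preemptible=y
max_preemptable_attempts_scale=2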
    

Release 3.16.0

22 Nov 18:34 · @pgm

Changed the default for new instances from preemptible=n to preemptible=y

Release 3.15.0

26 Jul 13:47 · @pgm

Added new command "grant" for granting rights on other projects to the service account used by sparkles

Release 3.14.0

21 Jul 17:29 · @pgm

Added new command "sparkles grant" for granting additional rights to sparkles service account