Geom bar weight #11662


Merged · 3 commits merged into hail-is:main on Mar 25, 2022

Conversation

@ammekk (Contributor) commented on Mar 24, 2022

Added support for a `weight` aesthetic in `geom_bar`. Also made a small fix to `geom_tile`'s color.
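
For context, a minimal sketch of how the new aesthetic might be used (the table and field names are made up, and the ggplot2-style `weight` keyword on `aes` is an assumption based on the PR title):

    import hail as hl
    from hail.ggplot import ggplot, aes, geom_bar

    # Hypothetical table: three groups, each row carrying a weight.
    ht = hl.utils.range_table(6)
    ht = ht.annotate(group=hl.str(ht.idx % 3), w=hl.float64(ht.idx + 1))

    # With a weight aesthetic, each bar sums `w` over its group
    # instead of counting rows.
    fig = ggplot(ht, aes(x=ht.group, weight=ht.w)) + geom_bar()
    fig.show()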

@danking merged commit 07bb22c into hail-is:main on Mar 25, 2022
danking pushed a commit to danking/hail that referenced this pull request on Mar 26, 2022
* WIP

* small fix to geom tile and added test for weights

* documentation
danking added a commit that referenced this pull request on Mar 26, 2022
* :

* Revert ":"

This reverts commit cd7e115.

* splitQuoted function

* dealing with parameters

* WIP

* regexFullMatch complete

* WIP

* WIP

* passing test

* benchmark

* deleted texttablereader

* took out stupid slow stuff I was doing

* WIP

* different handling of missingness

* WIP

* WIP

* Implement lines splitting in staged code

* WIP

* WIP

* cleanup

* WIP

* file_per_partition added to import_lines

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* basic structure done

* WIP

* WIP

* partially working

* one test passing

* most tests passing

* all tests passing

* fixed up style

* fixed up style

* handled error for mismatched lengths of fields

* fixed test

* merge fixes

* merge fixes

* fixed errors

* style fixes

* error fixes and new test

* fix minor bug in tests

* address lint issues

* add some fails, remove some fails

* canonicalize paths

* restore fails to test_error_with_context

* fix

* fix error message

* parameterize width and fail 3072

* fix

* fix minor bug in tests

* restore docs

* test passes and docs warning

* revert cruft

* Remove TextMatrixReader

dead code

* remove xfails

* try to reduce critical path test latency

* Bump minimist from 1.2.5 to 1.2.6 in /batch2/react-batch (#11653)

Bumps [minimist](https://github.com/substack/minimist) from 1.2.5 to 1.2.6.
- [Release notes](https://github.com/substack/minimist/releases)
- [Commits](https://github.com/substack/minimist/compare/1.2.5...1.2.6)

---
updated-dependencies:
- dependency-name: minimist
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump pytest from 6.2.5 to 7.1.1 in /hail/python/dev (#11619)

Bumps [pytest](https://github.com/pytest-dev/pytest) from 6.2.5 to 7.1.1.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/6.2.5...7.1.1)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* [k8s] set HPA target to 2500% (#11654)

I calculated this:

    targetAverageUtilization = limits.cpu / requests.cpu / 2
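
As a worked instance of that formula (the CPU values below are illustrative assumptions, not the cluster's actual requests and limits):

    # Hypothetical pod resources: a 1-core request allowed to burst to 50 cores.
    requests_cpu = 1.0   # assumed requests.cpu
    limits_cpu = 50.0    # assumed limits.cpu

    # HPA utilization targets are expressed relative to the request, so
    # limits/requests/2 targets half of the pod's burst headroom.
    target = limits_cpu / requests_cpu / 2
    print(f'targetAverageUtilization = {target:.0%}')  # -> 2500%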

* [query] Pin jinja2 (#11659)

* [query/service] skip test_spectral_moments_i for now (#11657)

* [query/fs] fix LocalFS when given a single file to list (#11629)

* [batch] Time mark job creating and close batch (#11557)

* [batch] Time mark job creating and close batch

* lint

* [query] Actually allow the service to use the shuffler (#11642)

* Quadruple the branching factor

* Let someone specify the branching factor. Also let them specify how many partitions they'd like to target

* Let's try 64

* Enable ability to turn on hail service shuffle

* Seems to be working

* Killed some debug info

* Less prints

* Fixed problem with intervalAlignAndZipPartitions

* No more calling get on optional range

* Handle empty rows or no partitions

* Put the defaults in HailFeatureFlags, pick less aggressive defaults for testing purposes

* Fix RType use in TableReader

* Addressed Tim's comments

* [ci] Retry image builds (#11666)

* [ci] Retry image builds

* use RETRY_FUNCTION_SCRIPT

* lint

* [docker] fix curl options, use curl in more places (#11649)

We have configured curl to retry so we should always prefer it to wget. I also
fixed that long-standing mistake I made when I added retry-all-errors before
it was supported in our version of curl.

* [infra] Update k8s to 8-core nodes (#11636)

* [infra] Update Azure to use 8-core node pools

* [infra] Move GCP over to 8-core node pools

* [infra] Update dev docs for migrating node pools

* [infra] Upgrade the kubernetes provider

* appease azurerm provider

* make H3 for manual section

* Geom bar weight (#11662)

* WIP

* small fix to geom tile and added test for weights

* documentation

* Print external URL in batch submission log (#11658)

* Check that the pip wheel is less than 100MB, the pypi upload size limit.  (#11592)

* Added wheel size check logic

* python -> python3

* Add du to print out file size

* [batch] Use nginx reverse proxy to terminate TLS for batch-driver (#11638)

* [batch] Add metric for number of scheduling loop invocations

* [batch] Add metric counter for number of database transactions

* [batch] Monitor heavily used queries in scheduling loop

* [utils] Make run_if_changed have a minimum wait of 0.5 seconds

* [internal-gateway] Change batch-driver rate limit to 180rps

* [batch] Use nginx reverse proxy to terminate TLS for batch-driver

* lint

* add pool name label to scheduling loop counter

* add clarifying note

* better clarification

* [query/service] automatically determine JAR url based on SHA (#11645)

* [query/service] automatically determine JAR url based on SHA

* fix

* fix

* blacken

* kill revision argument

* debugging

* include hail_revision in package data

* address comments

* ignore hail_revision

* remove debug

* assert jar url acceptability in front_end

* fix

* revert to command

* Update deployment.yaml

* Update validate.py

* fix assertions on worker and driver

* Update deployment.yaml

* restore use of "command" in driver as well

* [batch] Refactor worker Container class into Image and Container (#11396)

* [batch] Refactor worker Container class into Image, Container, and Task

* address comments

* [query/service] use error id to raise user-friendly errors (#11624)

* [query/service] use error id to raise user-friendly errors

The main change is to the communication protocol between the client and
the driver and between the driver and the worker.

In main, both the driver and the client send messages back to the client
and driver (respectively) by writing to a file in cloud storage. In both
cases, the file (in main) has one of these two structures:

    0x00                  # is_success (False)
    UTF-8 encoded string  # the stack trace

    0x01                  # is_success (True)
    UTF-8 encoded string  # JSON message to send back to the client or driver

In this PR, the success case does not change. The failure case becomes:

    0x00                  # is_success (False)
    UTF-8 encoded string  # short message
    UTF-8 encoded string  # expanded message
    4-byte signed integer # error id

The service backend in Python changes to read this and raise the right error
if an error id is present.
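
As a minimal sketch of decoding that failure payload (the length-prefixed string framing and big-endian byte order are assumptions; the commit message does not spell out the wire encoding):

    import struct

    def _read_str(buf: bytes, off: int):
        # Assumption: each string is prefixed by a 4-byte signed length.
        (n,) = struct.unpack_from('>i', buf, off)
        off += 4
        return buf[off:off + n].decode('utf-8'), off + n

    def parse_message(buf: bytes):
        if buf[0] != 0x00:  # is_success
            return {'json': buf[1:].decode('utf-8')}
        short_message, off = _read_str(buf, 1)
        expanded_message, off = _read_str(buf, off)
        (error_id,) = struct.unpack_from('>i', buf, off)
        return {'short_message': short_message,
                'expanded_message': expanded_message,
                'error_id': error_id}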

I also uncovered three unrelated problems that are fixed in this PR:
1. PlinkVariant needs to be serializable because it is broadcasted.
2. We open an input stream in LoadPlink which ought to be closed, but there is no mechanism to do so in the ServiceBackend. I just ignore it for now. cc: @tpoterba, I'm not sure what the right answer is here.
3. Two uses of the broadcasted file system that should use the ExecuteContext's file system.

* fix issues

* new test passes

* another passing test!

* [hail/fs] fix use of Semaphore in router fs (#11628)

I do not think this ever worked, my bad!

* [query/local] ensure init preserves the backend (#11669)

* [query/local] ensure init preserves the backend

`test_hail_python_local_backend_5` has been testing the spark backend because
`test_init_hail_context_twice` was replacing the LocalBackend with a SparkBackend. This
issue does not affect the service backend.
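
A sketch of the failure mode being guarded against (the `backend=` keyword on `hl.init` is assumed from the current Python API):

    import hail as hl

    hl.init(backend='local')  # start on the LocalBackend

    # Before this fix, a redundant second init (which defaults to Spark)
    # silently replaced the running LocalBackend with a SparkBackend, so
    # later 'local backend' tests were really exercising Spark.
    hl.init()  # after the fix: warns and keeps the existing backend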

* fix

* Remove extra argument to init_local

* Fixed init_spark

* Fixed stop for LocalBackend

* Delete fixme comment

* Added appropriate fails_local_backends that were hidden by this init issue

* fix matrix plink reader

* Correctly mark test_ld_score as failing

* fix

* test_naive_coalesce also fails

* try to reduce critical path test latency

* lots more local fails

* fails both

Co-authored-by: John Compitello <j301600@gmail.com>

* [compiler] Simplify SRNGState (#11668)

* 1.5 hours for service backend tests

* one more passing test!

Co-authored-by: ammekk <emma.kelminson001@umb.edu>
Co-authored-by: Tim Poterba <tpoterba@gmail.com>
Co-authored-by: Chris Vittal <chris@vittal.dev>
Co-authored-by: Christopher Vittal <cvittal@broadinstitute.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Daniel Goldstein <danielgold95@gmail.com>
Co-authored-by: John Compitello <j301600@gmail.com>
Co-authored-by: ammekk <74081901+ammekk@users.noreply.github.com>
Co-authored-by: Leonhard Gruenschloss <leonhard.gruenschloss@populationgenomics.org.au>
Co-authored-by: jigold <jigold@users.noreply.github.com>
Co-authored-by: Patrick Schultz <pschultz@broadinstitute.org>