
small fix for changelog and version #993

Merged: 74 commits into master on Nov 15, 2022

Conversation

@Eugene-hu (Contributor) commented Nov 15, 2022

The version and changelog did not update properly; this PR fixes them manually.

camfairchild and others added 30 commits September 5, 2022 18:31
* add external axon changes

* add defaults for new axon flags

* fix args to axon

* default to internal ip and port if not specified

* add new args and to defaults

* add axon unit tests

* add description for subtensor integration test

* move test to unit test

* create new test file
add/update copyright notices

* don't default to internal ip

* add tests for setting the full_address

* add tests for subtensor.serve w/external axon info

* allow external port config to be None

* switch to mock instead of patch

* fix test mocks

* change mock config create

* fix/add default config

* change asserts, add message

* fix check call args

* fix mock config set

* only call once

* fix help wording

* should be True
* add equality to None to the balance class

* add tests for the None case
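A minimal sketch of what "equality to None" for a balance class might look like. The class and field names here are illustrative, not the actual bittensor implementation:

```python
class Balance:
    """Toy balance wrapper; `rao` is an illustrative integer unit."""
    def __init__(self, rao: int):
        self.rao = rao

    def __eq__(self, other) -> bool:
        # Comparing against None should return False instead of raising.
        if other is None:
            return False
        if isinstance(other, Balance):
            return self.rao == other.rao
        return NotImplemented

print(Balance(10) == None)         # False, no exception raised
print(Balance(10) == Balance(10))  # True
```

Returning `False` for `None` (rather than letting an attribute access raise) is the behavior the tests in this commit would exercise.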
…un (#893)

* added cuda solver

* boost versions to fix pip error

* allow choosing device id

* fix solution check to use keccak

* adds params for cuda and dev_id to register

* list devices by name during selection

* add block number logging

* fix calculation of hashrate

* fix update interval default

* add --TPB arg to register

* add update_interval flag

* switch back to old looping/work structure

* change typing

* device count is a function

* stop early if wallet registered

* add update interval and num proc flag

* add better number output

* optimize multiproc cpu reg
keeping proc until solution

* fix test

* change import to cubit

* fix import and default

* up default
should have default in CLI call

* add comments about params

* fix config var access

* add cubit as extra

* handle stale pow differently
check registration after failure

* restrict number of processes for integration test

* fix stale check

* use wallet.is_registered instead

* attempt to fix test issue

* fix my test

* oops typo

* typo again ugh

* remove print out

* fix partly reg test

* fix if solution None

* fix test?

* fix patch

* add args for cuda to subtensor

* add cuda args to reregister call

* add to wallet register the cuda args

* fix refs and tests

* add for val test also

* fix tests with rereg

* fix patch for tests

* add mock_register to subtensor passed instead

* move register under the check for isregistered

* use patch obj instead

* fix patch object

* fix prompt

* remove unneeded if

* modify POW submit to use rolling submit again

* add backoff to block get from network

* add test for backoff get block
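The commits above mention adding backoff to block retrieval from the network. A generic retry-with-exponential-backoff sketch; the function names and the `fetch` callable are placeholders, not the actual subtensor API:

```python
import time

def get_block_with_retry(fetch, retries: int = 3, base_delay: float = 0.5):
    """Call fetch() until it succeeds, sleeping base_delay * 2**attempt between tries."""
    for attempt in range(retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Illustrative usage: a fetch that fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("node unreachable")
    return {"block_number": 123}

print(get_block_with_retry(flaky_fetch, base_delay=0.01))  # {'block_number': 123}
```

The test added in this commit would assert both the success path and that the exception propagates once retries are exhausted.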

* suppress the dev id flag if not set

* remove dest so it uses first arg

* fix pow submit loop

* move registration status with

* fix max attempts check

* remove status in subtensor.register

* add submit status

* change to neuron get instead

* fix count

* try to patch live display

* fix patch

* .

* separate test cases

* add POWNotStale and tests
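The POWNotStale check can be sketched roughly as: a solution is usable only if the block it was computed against is still recent. This is an illustrative version; the real bittensor logic and its exact recency window may differ:

```python
def pow_not_stale(current_block: int, solution_block: int, window: int = 3) -> bool:
    """A PoW solution is fresh if it was computed against a block no more
    than `window` blocks behind the current chain head."""
    return current_block - solution_block < window

print(pow_not_stale(current_block=100, solution_block=99))  # True: 1 block old
print(pow_not_stale(current_block=100, solution_block=95))  # False: too old
```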

* add more test cases for block get with retry

* fix return to None

* fix arg order

* fix indent

* add test to verify solution is submitted

* fix mock call

* patch hex bytes instead

* typo :/

* fix print out for unstake

* fix indexing into mock call

* call indexing

* access dict not with dot

* fix other indent

* add CUDAException for cubit

* up cubit version

* [Feature] ask cuda during btcli run (#890)

* add ask for cuda reg config in btcli run

* suppress unset arg

* [Feature] [cuda solver] multi gpu (#891)

* change diff display out

* remove logging

* check cubit support in the check config

* allow 1 or more devices in flag

* cuda flag should be suppress

* modify how cpu count is found

* make a solver base class

* add a solverbase for CUDA

* use multi process kernel launching, one per GPU

* move check under dot get accessor

* Feature/cuda solver multi gpu (#892)

* change diff display out

* remove logging

* check cubit support in the check config

* allow 1 or more devices in flag

* cuda flag should be suppress

* modify how cpu count is found

* make a solver base class

* add a solverbase for CUDA

* use multi process kernel launching, one per GPU

* move check under dot get accessor

* add All gpus specification

* continue trying reg after Stale

* catch for OSX

* dont use qsize

* add test for continue after being stale

* patch get_nowait instead of qsize
…tom link (#915)

* Update old docs link to new one

This change deletes the old gitbooks documentation link and replaces it with the new one.

* fix discord links

Co-authored-by: Mac Thrasher <95183714+quac88@users.noreply.github.com>
prevents downloading from huggingface
* add seed option to regen hotkey

* make seed optional and fix docstring

* add tests for both coldkey and hotkey regen w/seed

* oops, make seed optional

* fix old test, add config.seed
Asserts that randomly instantiated compact_topk encodings can be correctly decoded to recover the original topk_tensor.
Replaces .tensor_split() with block indexing to avoid extra copy operations.
…speed_up_v2' into feature/BIT-574/deserialization_speed_up_v2
* circle ci version update and fix

* Test clean up

* uncomment test and remove specific test

* remove loguru and fix flaky tests

* fix syncing

* removing tokenizer equivalence + some bug fixes

* moving old dataset test
…on_speed_up_v2

[BIT-574] Deserialization speed up (v2)
…ravel-unittest

[BIT-587] Deactivate unravel unit test
…l_topk_token_phrases

[BIT-588] Create topk_tensor on origin device
* local train bug fix

* normalization update

* fix tests

* remove test

* updated normalization

* Naming changes, bug fixes

* subtensor update for max clip

* max weight to a million

* Fixes for ordering and comments

* additional tests

* string fix

* numerical stability and testing updates

* minor update for division by zero

* Naming and spacing fixes

* epsilon update

* small fix
…version (#918)

BIT-582 Adding development workflow documentation and script for bumping the version
* removed ws assumption

* removing check

* never registered

* Fixed sched_getaffinity for mac osx

* Started adding parachain support

* [hot-fix] fix indent again. add test (#907)

fix indent again. add test

* Fixed registration check and first time registration

* Removed old entrypoint list structure

* Fixed unit tests

Co-authored-by: Eugene <etesting007@gmail.com>
Co-authored-by: Ala Shaabana <ala@bittensor.com>
Co-authored-by: Cameron Fairchild <cameron.fairchild@mail.utoronto.ca>
* set allowed receptor to be 0 in validator to not store any receptor

* max_active receptor to 0

* fix
* BIT-582 Adding development workflow documentation and script for bumping the version

* BIT-579 Adding prometheus_client==0.14.1 to requirements

* BIT-579 Removing wandb defaults from sample_configs

* Revert "BIT-579 Removing wandb defaults from sample_configs"

This reverts commit 2940cc7.

* BIT-579 Starting prometheus code. Adding metric_exporter concept/element and its MetricsExporterFactory

* BIT-579 Adding prometheus_client==0.14.1 to requirements

* BIT-579 Removing wandb defaults from sample_configs

* Revert "BIT-579 Removing wandb defaults from sample_configs"

This reverts commit 2940cc7.

* BIT-579 Starting prometheus code. Adding metric_exporter concept/element and its MetricsExporterFactory

* Revert "BIT-579 Starting prometheus code. Adding metric_exporter concept/element and its MetricsExporterFactory"

This reverts commit 8742d7f.

* BIT-579 Adding _prometheus to bittensor

* BIT-579 Adding prometheus code to bittensor/_neuron/text/core_*

* BIT-579 Adding prometheus code to bittensor/_config/config_impl.py. Sends the config to the inprocess prometheus server if it exists.

* BIT-579 Adding prometheus code to bittensor/_axon/*

* BIT-579 Adding prometheus code to bittensor/_dendrite/*

* BIT-579 Fixing syntax error

* BIT-579 Fixing missing import: time

* BIT-579 fixing typo

* BIT-579 fixing test: unit_tests/bittensor_tests/test_neuron.py

Co-authored-by: Unconst <32490803+unconst@users.noreply.github.com>
* adds generate to dendrite

* vune fixes

* extend readme

Co-authored-by: unconst <jake@bittensor.com>
unconst and others added 22 commits October 31, 2022 16:27
* initial commit

* fix manager server no return

* Moving to release

Co-authored-by: unconst <jake@bittensor.com>
Decrease validator moving average window from 20 (alpha=0.05) to 10 (alpha=0.1) steps. This parameter could probably eventually be set to alpha=0.2.

The current 20-step window means that a server model change will take 20 steps * ~250 blocks/epoch * 12 sec = approx. 17 hours to reach full score in the validator neuron stats, because of the moving average slowly weighing in new model performance. 17 hours is probably too long, and it is also likely affecting registration immunity.
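The 17-hour figure follows directly from the window arithmetic; a quick sketch of the calculation, using the parameters quoted in the description above:

```python
# Time for a server model change to be fully weighed into validator stats.
steps = 20               # old moving-average window (alpha = 0.05)
blocks_per_epoch = 250   # approximate blocks per validator epoch
seconds_per_block = 12

hours = steps * blocks_per_epoch * seconds_per_block / 3600
print(f"{hours:.1f} hours")      # 16.7 hours, i.e. ~17 hours

# Halving the window to 10 steps (alpha = 0.1) halves this to ~8.3 hours.
print(f"{hours / 2:.1f} hours")
```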
* remove test_receptor test

* fix tests

Co-authored-by: unconst <jake@bittensor.com>
* no version checking

* fix integration tests

* remove print

Co-authored-by: Thebes <jake@bittensor.com>
* initial commit

* promo change to axon and dendrite

Co-authored-by: Thebes <jake@bittensor.com>
* remove test_receptor test

* fix tests

* fix validator exit

Co-authored-by: unconst <jake@bittensor.com>
…dator_moving_average_window

[BIT-594] Decrease validator moving average window
* Format AuthInterceptor using black

* Parse request metadata as key value pairs

* Use request method to black list calls

* Fix request type provided on backward

* Add type hints

* Refactor signature parsing
* clone the repo to install instead

* no cd

Co-authored-by: Ala Shaabana <shaabana@gmail.com>
(cherry picked from commit 43110cf)
Response serialization/deserialization introduces precision errors that may cause probability sums to exceed permissible boundaries. Now checks to see if precision errors are within established absolute tolerance (atol = 1e-6 currently).

(cherry picked from commit d96b625)
(cherry picked from commit 6dd06f9)
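The tolerance check described above can be sketched as follows. This is an illustrative, framework-free version; the real code operates on serialized tensors, and only the atol = 1e-6 value comes from the description:

```python
def probs_within_tolerance(probs, atol: float = 1e-6) -> bool:
    """Accept a probability vector whose sum drifted from 1.0 by at most atol,
    e.g. due to serialization/deserialization precision loss."""
    return abs(sum(probs) - 1.0) <= atol

print(probs_within_tolerance([0.5, 0.5 + 1e-7]))  # True: drift within atol
print(probs_within_tolerance([0.5, 0.51]))        # False: sum is 1.01
```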
…-nobunaga

[Hotfix] Synapse security update (nobunaga)
@eduardogr (Contributor) left a comment

Seems good to me

Remember to merge back into nobunaga as well. Do that in a different PR.

Thanks @Eugene-hu !

@Eugene-hu Eugene-hu merged commit 88ca9ed into master Nov 15, 2022
@Eugene-hu Eugene-hu deleted the version_changelog_fix branch December 14, 2022 16:52

9 participants