Release 3.6.0 #1023

Merged
112 commits merged from release/3.6.0 into master on Dec 13, 2022
Conversation


@eduardogr (Contributor) commented on Dec 6, 2022

Release branch for bittensor version 3.6.0

Here we will be adding the squashed commits to be released. They will be added alongside their corresponding PRs.

Changes between this branch and master: https://github.com/opentensor/bittensor/compare/master..release/3.6.0

Since the last release was merged by squashing the release branch's commits, the three-dot diff shown here is not the same as the two-dot diff. Please use the two-dot diff linked above to review this release.

PRs included:

camfairchild and others added 30 commits September 5, 2022 18:31
* add external axon changes

* add defaults for new axon flags

* fix args to axon

* default to internal ip and port if not specified

* add new args and to defaults

* add axon unit tests

* add description for subtensor integration test

* move test to unit test

* create new test file
add/update copyright notices

* don't default to internal ip

* add tests for setting the full_address

* add tests for subtensor.serve w/external axon info

* allow external port config to be None

* switch to mock instead of patch

* fix test mocks

* change mock config create

* fix/add default config

* change asserts, add message

* fix check call args

* fix mock config set

* only call once

* fix help wording

* should be True
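
The axon bullets above describe letting an axon advertise an external IP and port that can differ from the address it actually binds, with the external port allowed to be unset. A minimal sketch of that kind of fallback, using hypothetical names (`AxonInfo`, `resolve_external`) rather than the real bittensor API; note that the commits later decide not to fall back for the IP, only for the port:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AxonInfo:
    """Hypothetical container for the address an axon advertises on-chain."""
    ip: str
    port: int


def resolve_external(
    external_ip: str,
    internal_port: int,
    external_port: Optional[int] = None,
) -> AxonInfo:
    # The external IP must be given explicitly ("don't default to internal ip"),
    # while an unset external port falls back to the internal bind port
    # ("allow external port config to be None").
    port = external_port if external_port is not None else internal_port
    return AxonInfo(ip=external_ip, port=port)


assert resolve_external("1.2.3.4", 8091).port == 8091
assert resolve_external("1.2.3.4", 8091, external_port=443).port == 443
```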
* add equality to None to the balance class

* add tests for the None case
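
"Add equality to None to the balance class" amounts to making comparisons against None safe instead of raising. A minimal sketch of how such an `__eq__` can be written; this `Balance` is a simplified stand-in, not the real bittensor Balance class:

```python
class Balance:
    """Simplified stand-in for a token balance held as integer RAO units."""

    def __init__(self, rao: int):
        self.rao = rao

    def __eq__(self, other) -> bool:
        # Comparing against None should simply be False rather than an error.
        if other is None:
            return False
        if isinstance(other, Balance):
            return self.rao == other.rao
        if isinstance(other, int):
            return self.rao == other
        return NotImplemented


# The "None case" tests reduce to assertions like these:
assert Balance(100) != None
assert not (Balance(100) == None)
```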
…un (#893)

* added cuda solver

* boost versions to fix pip error

* allow choosing device id

* fix solution check to use keccak

* adds params for cuda and dev_id to register

* list devices by name during selection

* add block number logging

* fix calculation of hashrate

* fix update interval default

* add --TPB arg to register

* add update_interval flag

* switch back to old looping/work structure

* change typing

* device count is a function

* stop early if wallet registered

* add update interval and num proc flag

* add better number output

* optimize multiproc cpu reg
keeping proc until solution

* fix test

* change import to cubit

* fix import and default

* up default
should have default in CLI call

* add comments about params

* fix config var access

* add cubit as extra

* handle stale pow differently
check registration after failure

* restrict number of processes for integration test

* fix stale check

* use wallet.is_registered instead

* attempt to fix test issue

* fix my test

* oops typo

* typo again ugh

* remove print out

* fix partly reg test

* fix if solution None

* fix test?

* fix patch

* add args for cuda to subtensor

* add cuda args to reregister call

* add to wallet register the cuda args

* fix refs and tests

* add for val test also

* fix tests with rereg

* fix patch for tests

* add mock_register to subtensor passed instead

* move register under the check for isregistered

* use patch obj instead

* fix patch object

* fix prompt

* remove unneeded if

* modify POW submit to use rolling submit again

* add backoff to block get from network

* add test for backoff get block

* suppress the dev id flag if not set

* remove dest so it uses first arg

* fix pow submit loop

* move registration status with

* fix max attempts check

* remove status in subtensor.register

* add submit status

* change to neuron get instead

* fix count

* try to patch live display

* fix patch

* .

* separate test cases

* add POWNotStale and tests

* add more test cases for block get with retry

* fix return to None

* fix arg order

* fix indent

* add test to verify solution is submitted

* fix mock call

* patch hex bytes instead

* typo :/

* fix print out for unstake

* fix indexing into mock call

* call indexing

* access dict not with dot

* fix other indent

* add CUDAException for cubit

* up cubit version

* [Feature] ask cuda during btcli run (#890)

* add ask for cuda reg config in btcli run

* suppress unset arg

* [Feature] [cuda solver] multi gpu (#891)

* change diff display out

* remove logging

* check cubit support in the check config

* allow 1 or more devices in flag

* cuda flag should be suppress

* modify how cpu count is found

* make a solver base class

* add a solverbase for CUDA

* use multi-process kernel launching, one per GPU

* move check under dot get accessor

* Feature/cuda solver multi gpu (#892)

* change diff display out

* remove logging

* check cubit support in the check config

* allow 1 or more devices in flag

* cuda flag should be suppress

* modify how cpu count is found

* make a solver base class

* add a solverbase for CUDA

* use multi-process kernel launching, one per GPU

* move check under dot get accessor

* add All gpus specification

* continue trying reg after Stale

* catch for OSX

* dont use qsize

* add test for continue after being stale

* patch get_nowait instead of qsize
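
Several of the commits above (a solver base class, "use multi-process kernel launching, one per GPU", "stop early if wallet registered", "continue trying reg after Stale") describe a registration PoW solver that spreads work across devices and stops as soon as any worker finds a valid solution. A rough, hedged outline of that process layout in plain Python multiprocessing; the difficulty check and worker body are toy placeholders, not the actual bittensor/cubit kernels:

```python
import hashlib
import multiprocessing as mp


def meets_difficulty(nonce: int, difficulty_prefix: bytes = b"\x00") -> bool:
    # Toy stand-in for the real keccak/difficulty check used during registration.
    digest = hashlib.sha256(nonce.to_bytes(8, "little")).digest()
    return digest.startswith(difficulty_prefix)


class SolverBase(mp.Process):
    """Base worker: scans nonce batches until a solution is found or it is told to stop."""

    def __init__(self, solution_queue, stop_event, start_nonce: int, stride: int):
        super().__init__(daemon=True)
        self.solution_queue = solution_queue
        self.stop_event = stop_event
        self.nonce = start_nonce   # each worker starts at a different offset
        self.stride = stride       # and steps by the total worker count

    def run(self) -> None:
        while not self.stop_event.is_set():
            if meets_difficulty(self.nonce):
                self.solution_queue.put(self.nonce)
                return
            self.nonce += self.stride


def solve(num_workers: int = 2) -> int:
    # One process per device ("one per GPU" in the CUDA variant); the first
    # solution wins, then every other worker is signalled to stop.
    stop_event = mp.Event()
    solutions = mp.Queue()
    workers = [SolverBase(solutions, stop_event, start_nonce=i, stride=num_workers)
               for i in range(num_workers)]
    for w in workers:
        w.start()
    solution = solutions.get()     # blocks until some worker succeeds
    stop_event.set()
    for w in workers:
        w.join(timeout=1.0)
    return solution


if __name__ == "__main__":
    print("found nonce:", solve())
```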
…tom link (#915)

* Update old docs link to new one

This change deletes the old gitbooks documentation link and replaces it with the new one.

* fix discord links

Co-authored-by: Mac Thrasher <95183714+quac88@users.noreply.github.com>
prevents downloading from huggingface
* add seed option to regen hotkey

* make seed optional and fix docstring

* add tests for both coldkey and hotkey regen w/seed

* oops, make seed optional

* fix old test, add config.seed
Asserts that randomly instantiated compact_topk encodings can be correctly decoded to recover the original topk_tensor.
Replaces .tensor_split() with block indexing to avoid extra copy operations.
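
The deserialization speed-up above replaces `.tensor_split()` with direct block indexing so the decoder works on views of the compact tensor rather than triggering extra copies downstream. A small hedged illustration of the difference; the shapes and names here are made up, and the real compact_topk layout is more involved:

```python
import torch

compact = torch.arange(12, dtype=torch.float32)  # pretend this is a flat compact encoding
block_size = 4

# tensor_split-based decoding: produces a list of pieces, and downstream code
# that stacks or concatenates those pieces tends to trigger additional copies.
chunks = torch.tensor_split(compact, compact.numel() // block_size)

# Block-indexing decoding: reshape to (num_blocks, block_size) and index rows,
# which stays a view of the original storage.
blocks = compact.view(-1, block_size)

assert all(torch.equal(c, b) for c, b in zip(chunks, blocks))
assert blocks.data_ptr() == compact.data_ptr()  # the view shares memory with `compact`
```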
…speed_up_v2' into feature/BIT-574/deserialization_speed_up_v2
* circle ci version update and fix

* Test clean up

* uncomment test and remove specific test

* remove loguru and fix flaky tests

* fix syncing

* removing tokenizer equivalence + some bug fixes

* moving old dataset test
…on_speed_up_v2

[BIT-574] Deserialization speed up (v2)
…ravel-unittest

[BIT-587] Deactivate unravel unit test
…l_topk_token_phrases

[BIT-588] Create topk_tensor on origin device
* local train bug fix

* normalization update

* fix tests

* remove test

* updated normalization

* Naming changes, bug fixes

* subtensor update for max clip

* max weight to a million

* Fixes for ordering and comments

* additional tests

* string fix

* numerical stability and testing updates

* minor update for division by zero

* Naming and spacing fixes

* epsilon update

* small fix
…version (#918)

BIT-582 Adding development workflow documentation and script for bumping the version
* removed ws assumption

* removing check

* never registered

* Fixed sched_getaffinity for mac osx

* Started adding parachain support

* [hot-fix] fix indent again. add test (#907)

fix indent again. add test

* Fixed registration check and first time registration

* Removed old entrypoint list structure

* Fixed unit tests

Co-authored-by: Eugene <etesting007@gmail.com>
Co-authored-by: Ala Shaabana <ala@bittensor.com>
Co-authored-by: Cameron Fairchild <cameron.fairchild@mail.utoronto.ca>
* set allowed receptors to 0 in the validator so no receptors are stored

* max_active receptors to 0

* fix
* BIT-582 Adding development workflow documentation and script for bumping the version

* BIT-579 Adding prometheus_client==0.14.1 to requirements

* BIT-579 Removing wandb defaults from sample_configs

* Revert "BIT-579 Removing wandb defaults from sample_configs"

This reverts commit 2940cc7.

* BIT-579 Starting prometheus code. Adding metric_exporter concept/element and its MetricsExporterFactory

* BIT-579 Adding prometheus_client==0.14.1 to requirements

* BIT-579 Removing wandb defaults from sample_configs

* Revert "BIT-579 Removing wandb defaults from sample_configs"

This reverts commit 2940cc7.

* BIT-579 Starting prometheus code. Adding metric_exporter concept/element and its MetricsExporterFactory

* Revert "BIT-579 Starting prometheus code. Adding metric_exporter concept/element and its MetricsExporterFactory"

This reverts commit 8742d7f.

* BIT-579 Adding _prometheus to bittensor

* BIT-579 Adding prometheus code to bittensor/_neuron/text/core_*

* BIT-579 Adding prometheus code to bittensor/_config/config_impl.py. Sends the config to the in-process prometheus server if it exists (see the sketch after this commit list).

* BIT-579 Adding prometheus code to bittensor/_axon/*

* BIT-579 Adding prometheus code to bittensor/_dendrite/*

* BIT-579 Fixing syntax error

* BIT-579 Fixing missing import: time

* BIT-579 fixing typo

* BIT-579 fixing test: unit_tests/bittensor_tests/test_neuron.py

Co-authored-by: Unconst <32490803+unconst@users.noreply.github.com>
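
The BIT-579 commits above wire prometheus_client 0.14.1 into the axon, dendrite, and neuron code paths through a metrics-exporter concept. A minimal, hedged sketch of what exporting a couple of metrics with that library looks like; the metric names and the `start` helper are invented for illustration and are not the actual bittensor exporter:

```python
from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical metrics, named loosely after things an axon or neuron might track.
forward_requests = Counter("axon_forward_requests_total", "Forward calls handled by the axon")
last_block = Gauge("subtensor_last_block", "Last block number seen by the neuron")


def start(port: int = 8000) -> None:
    # Serves metrics on http://localhost:<port>/metrics (on a daemon thread)
    # for a Prometheus server to scrape.
    start_http_server(port)


if __name__ == "__main__":
    start()
    forward_requests.inc()
    last_block.set(1_234_567)
```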
* adds generate to dendrite

* vune fixes

* extend readme

Co-authored-by: unconst <jake@bittensor.com>
eduardogr and others added 3 commits November 25, 2022 22:34
* Modifying Dockerfile to build bittensor from repository version and not from github (#1016)

* Updating version to 3.5.1

* CircleCI check to check that version was updated
* Adding 3.5.0 CHANGELOG and little fix for release script

* [hotfix] pin scalecodec lower (#1013)

Co-authored-by: Cameron Fairchild <cameron@opentensor.ai>
@eduardogr eduardogr changed the base branch from master to nobunaga December 6, 2022 17:39
@eduardogr eduardogr changed the base branch from nobunaga to master December 6, 2022 17:40
@eduardogr eduardogr changed the base branch from master to nobunaga December 6, 2022 18:14
@eduardogr eduardogr changed the base branch from nobunaga to master December 6, 2022 18:17
Removing github workflow .github/workflows/docker_image_push.yml
camfairchild and others added 3 commits December 6, 2022 14:58
* fix max stake for a single key

* use kwarg

* use tuple unpack

* add tests for max stake fixes

* fix test mock

* change testcase name

* undo rename oops

* grab amount kwarg instead

* add comment/assert about single hk

* dont remove synapse all test

* fix accessor

Co-authored-by: Unconst <32490803+unconst@users.noreply.github.com>
mention balance if not no prompt

Co-authored-by: Unconst <32490803+unconst@users.noreply.github.com>
* Drop static header check

* Add v2 signature format

* Enforce nonce monotonicity

* Store only wallet hotkey

* Simplify signature check

* Update neuron parameters on mismatch

* Add receptor signature format test
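
The receptor/axon signature changes above ("Add v2 signature format", "Enforce nonce monotonicity") amount to rejecting any request whose nonce is not strictly greater than the last nonce seen from that hotkey, which blocks replayed requests. A hedged sketch of that check; the state handling and names are illustrative, and the real v2 format and key verification live in the axon code:

```python
from typing import Dict


class NonceError(Exception):
    """Raised when a request reuses or rolls back a nonce."""


# Last accepted nonce per caller hotkey (ss58 address string).
_last_nonce: Dict[str, int] = {}


def check_nonce(hotkey: str, nonce: int) -> None:
    # Monotonicity: each new request from a hotkey must carry a strictly
    # larger nonce than the previous one.
    previous = _last_nonce.get(hotkey)
    if previous is not None and nonce <= previous:
        raise NonceError(f"nonce {nonce} <= last seen {previous} for {hotkey}")
    _last_nonce[hotkey] = nonce


check_nonce("5F3sExampleHotkey", 10)
check_nonce("5F3sExampleHotkey", 11)
try:
    check_nonce("5F3sExampleHotkey", 11)   # replay: rejected
except NonceError:
    pass
```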
* Pinning requirements versions and creating make commands for requirements

* Extracting development requirements into a separate file, adding a cubit requirements file, and providing more make commands. Requirements are now placed within a new directory 'requirements'

* changing clean-venv make command

* Fixing circleci and github workflow

* Using pandas==1.3.5

* Using numpy==1.21.6

* Pinning requirements versions in requirements/prod.txt

* Using torch==1.12
opentaco and others added 3 commits December 9, 2022 20:48
* Compute scaling law on EMA loss

The neural language model scaling law is typically meant to be computed on a loss averaged over the entire training sample. Currently it is computed within-batch only, which frequently sees losses below 1.69, the natural entropy of text.

Here we now compute the scaling law, and the resultant effective number of model parameters, on the exponentially moving average loss for a server, which should greatly improve the definition of the result (see the sketch after this commit list).

* Convert to tensor for calcs

* Ascending sort loss tables

* Add top and bottom weights to validator table

* Add top and bottom weights to validator table

* Add top and bottom weights to validator table

* Change mark uids in weights table

* Update scaling law powers each epoch

* Fix neuron.ip_version
Update power from subtensor always

Since self.config.nucleus.scaling_law_power is updated from its default of -1 at nucleus init, the condition here at epoch start needs to be removed so that the power is always updated from subtensor.
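
The scaling-law commits above move the effective-parameter estimate from a within-batch loss to an exponentially moving average (EMA) of each server's loss, so momentary losses below 1.69 nats no longer distort the estimate, and refresh the scaling-law power from subtensor each epoch. A hedged sketch of the two pieces using a generic scaling-law inversion; the constants, the default power, and the reference parameter count are made up and will differ from bittensor's actual validator code:

```python
NATURAL_ENTROPY = 1.69   # approximate irreducible entropy of text, in nats


def ema_update(prev_ema: float, new_loss: float, alpha: float = 0.1) -> float:
    # Standard exponential moving average: the smaller alpha is, the more the
    # estimate reflects history rather than the latest batch.
    return (1.0 - alpha) * prev_ema + alpha * new_loss


def effective_params(ema_loss: float, scaling_law_power: float = 0.5,
                     n0: float = 1.0e6) -> float:
    # Generic inversion of a scaling law L = L_inf + (N0 / N)**power:
    # clamp at the natural entropy so sub-1.69 batch noise cannot blow up N.
    excess = max(ema_loss - NATURAL_ENTROPY, 1e-9)
    return n0 * excess ** (-1.0 / scaling_law_power)


# Example: a single noisy batch loss below 1.69 barely moves the EMA-based estimate.
ema = 2.30
for batch_loss in (2.25, 1.60, 2.28):
    ema = ema_update(ema, batch_loss)
print(f"EMA loss {ema:.3f} -> effective params {effective_params(ema):.3e}")
```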
@camfairchild (Collaborator) left a comment:

See comment on setup.py @eduardogr

@eduardogr eduardogr merged commit 0232ede into master Dec 13, 2022
@unconst unconst deleted the release/3.6.0 branch March 27, 2024 20:05