
Unlimited spending limit #384


Merged

Conversation

AeonSw4n
Contributor

No description provided.

AeonSw4n and others added 9 commits June 30, 2022 07:49
* protect concurrent access to mempool db

* move lock to open db method

* add lock to loadtxns function

* what happens if we just don't dump on the timer

* disable periodic mempool dumper in tests

* manually load txns

* manually set mempool dir

* Update lib/block_view_bitcoin_test.go

Co-authored-by: Lazy Nina <81658138+lazynina@users.noreply.github.com>
@AeonSw4n AeonSw4n marked this pull request as ready for review July 13, 2022 09:17
@AeonSw4n AeonSw4n requested a review from a team as a code owner July 13, 2022 09:17
@@ -167,7 +167,7 @@ func (bav *UtxoView) _connectBitcoinExchange(
if len(txn.PublicKey) != 0 {
return 0, 0, nil, RuleErrorBitcoinExchangeShouldNotHavePublicKey
}
if txn.Signature != nil {
if txn.Signature.Sign != nil {
Contributor

nit: unrelated to postgres, but maybe safer to leave the initial nil check like txn.Signature != nil && txn.Signature.Sign != nil.

Contributor Author

I think we're gucci with just txn.Signature.Sign != nil since txn.Signature is not a pointer. Actually, the compiler screams at me for trying txn.Signature != nil.
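For illustration, a minimal sketch of the compiler behavior described here, using stand-in types rather than the actual DeSo definitions: the signature is a struct value on the transaction, so only its inner pointer field can be compared to nil.

package main

import "fmt"

// Hypothetical stand-ins for the discussion above, not the real DeSo types.
type Signature struct {
	Sign *string // only this inner field is a pointer
}

type Txn struct {
	Signature Signature // value field, not a pointer
}

func main() {
	txn := Txn{}
	// if txn.Signature != nil { ... }   // does not compile: a struct value can't be compared to nil
	if txn.Signature.Sign != nil { // the inner pointer is the only nil-able part
		fmt.Println("signed")
	}
}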

func AssembleAccessBytesWithMetamaskStrings(derivedPublicKey []byte, expirationBlock uint64,
transactionSpendingLimit *TransactionSpendingLimit, params *DeSoParams) []byte {

encodingString := "DECENTRALIZED SOCIAL\n\n"
Contributor

nit: another one unrelated to postgres, maybe there's no other option, but it feels weird to be doing string-interpolation UI in core. more natural would be to pass a struct and do string-interpolation in a client/front-end. maybe not possible w/ metamask, in which case this is fine.

Contributor Author

Ah good observation! So what this does is it actually creates an alternative access bytes format that's equivalent to the traditional access bytes schema of [derivedPublicKey, expirationBlock, SpendingLimit]. The idea is to have metamask-friendly access bytes so that we can display them like this in the Metamask UI https://imgur.com/a/3iU2rmI. When the user then signs this message through Metamask, core will validate the signature on this string.
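For illustration, a hedged sketch of that idea: a human-readable rendering of the same [derivedPublicKey, expirationBlock, SpendingLimit] triple that Metamask can display and the user can sign. The helper name, field labels, and layout below are assumptions, not the exact format produced by AssembleAccessBytesWithMetamaskStrings.

package lib

import (
	"encoding/hex"
	"fmt"
)

// assembleMetamaskAccessString is a hypothetical, simplified stand-in for the real
// assembly function: it renders the access-bytes triple as text so the wallet UI
// can show it, and core can later verify the user's signature over these exact bytes.
func assembleMetamaskAccessString(derivedPublicKey []byte, expirationBlock uint64,
	spendingLimitText string) []byte {

	encodingString := "DECENTRALIZED SOCIAL\n\n"
	encodingString += fmt.Sprintf("Derived public key: %s\n", hex.EncodeToString(derivedPublicKey))
	encodingString += fmt.Sprintf("Expiration block: %d\n\n", expirationBlock)
	encodingString += fmt.Sprintf("Spending limit:\n%s\n", spendingLimitText)
	return []byte(encodingString)
}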

@@ -489,10 +498,17 @@ func TestBasicTransfer(t *testing.T) {
_ = assert
_ = require

chain, params, db := NewLowDifficultyBlockchain()
postgres := InitializeTestPostgresInstance(t)
Contributor

IMO this logic should be refactored into some test_utils. Individual tests shouldn't have to know the specifics of a postgres instance initialization. Also ideally we would have one test and not two sister tests like _testBasicTransfer and _testBasicTransferWithPostgres.
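A hedged sketch of the refactor being suggested, assuming a helper lives in some test_utils file; runWithAllBackends is a hypothetical name, and InitializeTestPostgresInstance is the helper already introduced in this PR.

package lib

import "testing"

// runWithAllBackends runs the same test body once against badger (nil postgres)
// and once against a freshly initialized postgres instance, so tests don't need
// sister _testFooWithPostgres variants or any knowledge of postgres setup.
func runWithAllBackends(t *testing.T, testFn func(t *testing.T, postgres *Postgres)) {
	t.Run("badger", func(t *testing.T) {
		testFn(t, nil)
	})
	t.Run("postgres", func(t *testing.T) {
		testFn(t, InitializeTestPostgresInstance(t))
	})
}

TestBasicTransfer would then collapse to a one-liner calling runWithAllBackends(t, _testBasicTransfer).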

@@ -10,6 +10,14 @@ type DbAdapter struct {
snapshot *Snapshot
}

func NewDbAdapter(chain *Blockchain) *DbAdapter {
Contributor

To match below, maybe this makes more sense w/ a signature like func (chain *Blockchain) GetDbAdapter() *DbAdapter {.

Member

agreed with mf ^
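A sketch of the method form being proposed; the DbAdapter field names other than snapshot are assumptions based on the surrounding diff.

func (chain *Blockchain) GetDbAdapter() *DbAdapter {
	return &DbAdapter{
		badgerDb:   chain.db,       // assumed field names
		postgresDb: chain.postgres,
		snapshot:   chain.snapshot,
	}
}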


// Config variables for spawning the postgres container.
const CONTAINER_NAME = "test_postgresql"
const LOCAL_PORT = "5433"
Contributor

This is interesting. I assume to get around the fact that the user may be running a local postgres on the default 5432. I personally don't love spinning up a docker postgres from go code like this. Feels like action at a distance. A more common pattern may be to assume the user already has a running postgres instance (see make postgres-start + env vars from before). Or to use embedded postgres which does a lot of this for us for free.
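For reference, a hedged sketch of the embedded-postgres approach (using github.com/fergusstrange/embedded-postgres); the port, credentials, and cleanup details are illustrative.

package lib

import (
	"testing"

	embeddedpostgres "github.com/fergusstrange/embedded-postgres"
)

// startEmbeddedPostgres spins up an in-process postgres for the duration of a test
// and tears it down automatically; no docker daemon or container cleanup needed.
func startEmbeddedPostgres(t *testing.T) *embeddedpostgres.EmbeddedPostgres {
	pg := embeddedpostgres.NewDatabase(
		embeddedpostgres.DefaultConfig().
			Port(5433). // keep clear of a locally running postgres on 5432
			Username("postgres").
			Password("postgres").
			Database("testdb"))
	if err := pg.Start(); err != nil {
		t.Fatalf("failed to start embedded postgres: %v", err)
	}
	t.Cleanup(func() {
		if err := pg.Stop(); err != nil {
			t.Errorf("failed to stop embedded postgres: %v", err)
		}
	})
	return pg
}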

}()
testFunction(t, nil)
<-postgresChannel
testFunction(t, postgres)
Contributor

IMO running badger + postgres in parallel like this is of limited value. More useful IMO is to run badger tests in a single CI runner and postgres tests in a parallel runner. I.e. you run the entire suite (instead of each test) over badger or postgres in parallel.

const LOCAL_PORT = "5433"
const REMOTE_PORT = "5432"
const POSTGRES_USER = "postgres"
const POSTGRES_PASSWORD = "postgres"
Contributor

I also think we lose a lot by defining the postgres params in consts like this and dropping support for the PG_URI env var, which is a common way to pass in pg params. The idea may be to encapsulate all postgres logic away from the user so they don't even need to know which params are being used, but what if, for example, I'm already running a secondary postgres docker container on port 5433 and need to be able to configure the port?
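A minimal sketch of keeping that escape hatch, assuming a small helper in the test tooling; the helper name is hypothetical and the constants are the ones defined in this PR.

package lib

import (
	"fmt"
	"os"
)

// testPostgresURI lets a caller-provided POSTGRES_URI win, falling back to the
// container defaults only when it is unset.
func testPostgresURI() string {
	if uri := os.Getenv("POSTGRES_URI"); uri != "" {
		return uri // lets a user point tests at their own instance, port, and credentials
	}
	return fmt.Sprintf("postgres://%s:%s@localhost:%s/postgres",
		POSTGRES_USER, POSTGRES_PASSWORD, LOCAL_PORT)
}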


// startTestPostgresContainerAndConnect will spawn a postgres container with a volume mapped to some random directory,
// and port-forwarded 0.0.0.0 : LOCAL_PORT <-> remote : REMOTE_PORT where REMOTE_PORT coincides with the postgres daemon.
func startTestPostgresContainerAndConnect(t *testing.T) *Postgres {
Contributor

I think an embedded postgres solution will solve these problems. Ideally IMO we should decouple the PR changes for an updated postgres testing setup from the changes for derived key unlimited spending limits.

Member

^^yes

}

// killAllPostgresContainers removes all containers that have the name CONTAINER_NAME.
func killAllPostgresContainers(t *testing.T) {
Contributor

You shouldn't have to kill containers between tests and respawn. You should only have to migrate down and back up. Embedded postgres takes care of this for us.

Member

@lazynina lazynina left a comment

The unlimited spending limits updates here seem ok, but the postgres testing changes should be broken into a separate PR. It makes this far harder to review, and I don't think we necessarily want all of the postgres testing changes; we also don't want to hold up this PR while we debate them.

Comment on lines 186 to 190
//var postgresDb *Postgres
//
//if len(os.Getenv("POSTGRES_URI")) > 0 {
// postgresDb = NewPostgres(pg.Connect(ParsePostgresURI(os.Getenv("POSTGRES_URI"))))
//}
Member

I agree with @mattfoley8 - not really a fan of this change. Can you provide some add'l explanation as to why this is needed?

@@ -10,6 +10,14 @@ type DbAdapter struct {
snapshot *Snapshot
}

func NewDbAdapter(chain *Blockchain) *DbAdapter {
Member

agreed with mf ^

@@ -4772,9 +4916,173 @@ type TransactionSpendingLimit struct {
// BuyingCreatorPKID || SellingCreatorPKID to number of
// transactions
DAOCoinLimitOrderLimitMap map[DAOCoinLimitOrderLimitKey]uint64

// ===== ENCODER MIGRATION UnlimitedDerivedKeysMigration =====
// IsUnlimited field determines whether this derived key has no spending limit.
Member

Suggested change
// IsUnlimited field determines whether this derived key has no spending limit.
// IsUnlimited field determines whether this derived key can perform any transaction on behalf of its owner and is not subject to a spending limit.

let's be more clear about what IsUnlimited means

lib/postgres.go Outdated
Comment on lines 26 to 29

// Only applies to a docker postgres db.
directory string
containerId string
Member

I don't think we really need this for a postgres db?

lib/postgres.go Outdated
@@ -2916,3 +2944,50 @@ func (postgres *Postgres) GetNotifications(publicKey string) ([]*PGNotification,

return notifications, nil
}

//
// Postgres Test tooling
Member

agreed with MF^

lib/postgres.go Outdated
//

// Drop all tables in the postgres Db.
func (postgres *Postgres) resetDatabase() error {
Member

agreed with MF^

lib/postgres.go Outdated
return errors.Wrapf(err, "forceKillContainer: Problem removing postgres container")
}

if postgres.directory == "" {
Member

agreed^


// startTestPostgresContainerAndConnect will spawn a postgres container with a volume mapped to some random directory,
// and port-forwarded 0.0.0.0 : LOCAL_PORT <-> remote : REMOTE_PORT where REMOTE_PORT coincides with the postgres daemon.
func startTestPostgresContainerAndConnect(t *testing.T) *Postgres {
Member

^^yes

@AeonSw4n AeonSw4n changed the base branch from main to p/public-key-recovery-in-signature-verification August 3, 2022 09:20
@AeonSw4n AeonSw4n changed the base branch from p/public-key-recovery-in-signature-verification to p/spending-limits-metamask-string August 3, 2022 09:24
* Adding Pearl to the list of nodes

* Update nodes.go
@lazynina lazynina changed the base branch from p/spending-limits-metamask-string to p/public-key-recovery-in-signature-verification August 5, 2022 15:27
Comment on lines 200 to 251
if transactionSpendingLimit.IsUnlimited {
	if transactionSpendingLimit.GlobalDESOLimit > 0 ||
		len(transactionSpendingLimit.TransactionCountLimitMap) > 0 ||
		len(transactionSpendingLimit.CreatorCoinOperationLimitMap) > 0 ||
		len(transactionSpendingLimit.DAOCoinOperationLimitMap) > 0 ||
		len(transactionSpendingLimit.NFTOperationLimitMap) > 0 ||
		len(transactionSpendingLimit.DAOCoinLimitOrderLimitMap) > 0 {

		return 0, 0, nil, RuleErrorUnlimitedDerivedKeyNonEmptySpendingLimits
	}
	newTransactionSpendingLimit.IsUnlimited = true
} else {
	// TODO: how can we serialize this in a way that we don't have to specify it everytime
	// Always overwrite the global DESO limit...
	newTransactionSpendingLimit.GlobalDESOLimit = transactionSpendingLimit.GlobalDESOLimit
	// Iterate over transaction types and update the counts. Delete keys if the transaction count is zero.
	for txnType, transactionCount := range transactionSpendingLimit.TransactionCountLimitMap {
		if transactionCount == 0 {
			delete(newTransactionSpendingLimit.TransactionCountLimitMap, txnType)
		} else {
			newTransactionSpendingLimit.TransactionCountLimitMap[txnType] = transactionCount
		}
	}
	for ccLimitKey, transactionCount := range transactionSpendingLimit.CreatorCoinOperationLimitMap {
		if transactionCount == 0 {
			delete(newTransactionSpendingLimit.CreatorCoinOperationLimitMap, ccLimitKey)
		} else {
			newTransactionSpendingLimit.CreatorCoinOperationLimitMap[ccLimitKey] = transactionCount
		}
	}
	for daoCoinLimitKey, transactionCount := range transactionSpendingLimit.DAOCoinOperationLimitMap {
		if transactionCount == 0 {
			delete(newTransactionSpendingLimit.DAOCoinOperationLimitMap, daoCoinLimitKey)
		} else {
			newTransactionSpendingLimit.DAOCoinOperationLimitMap[daoCoinLimitKey] = transactionCount
		}
	}
	for nftLimitKey, transactionCount := range transactionSpendingLimit.NFTOperationLimitMap {
		if transactionCount == 0 {
			delete(newTransactionSpendingLimit.NFTOperationLimitMap, nftLimitKey)
		} else {
			newTransactionSpendingLimit.NFTOperationLimitMap[nftLimitKey] = transactionCount
		}
	}
	for daoCoinLimitOrderLimitKey, transactionCount := range transactionSpendingLimit.DAOCoinLimitOrderLimitMap {
		if transactionCount == 0 {
			delete(newTransactionSpendingLimit.DAOCoinLimitOrderLimitMap, daoCoinLimitOrderLimitKey)
		} else {
			newTransactionSpendingLimit.DAOCoinLimitOrderLimitMap[daoCoinLimitOrderLimitKey] = transactionCount
		}
	}
	newTransactionSpendingLimit.IsUnlimited = false
}
Member

don't love how deeply nested this section is. Can we break into a separate function to handle this logic? we're like 3 IFs deep and then doing loops inside it.
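A hedged sketch of the extraction being suggested: pull the per-map copy logic into a single helper so the connect logic stays flat. The helper name and the use of Go generics (requires Go 1.18+) are assumptions, not part of this PR.

// copyNonZeroCounts copies the requested counts into the new spending limit,
// deleting any keys whose requested count is zero.
func copyNonZeroCounts[K comparable](dst, src map[K]uint64) {
	for key, count := range src {
		if count == 0 {
			delete(dst, key)
		} else {
			dst[key] = count
		}
	}
}

The connect logic would then collapse to one call per map, e.g. copyNonZeroCounts(newTransactionSpendingLimit.TransactionCountLimitMap, transactionSpendingLimit.TransactionCountLimitMap).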

Comment on lines 88 to 111

// Verify that expiration block was persisted in the db or is in mempool utxoView
if mempool == nil {
derivedKeyEntry := NewDbAdapter(chain).GetOwnerToDerivedKeyMapping(*NewPublicKey(senderPkBytes), *NewPublicKey(derivedPublicKey))
// If we removed the derivedKeyEntry from utxoView altogether, it'll be nil.
// To pass the tests, we initialize it to a default struct.
if derivedKeyEntry == nil || derivedKeyEntry.isDeleted {
derivedKeyEntry = &DerivedKeyEntry{
*NewPublicKey(senderPkBytes), *NewPublicKey(derivedPublicKey), 0, AuthorizeDerivedKeyOperationValid, nil, transactionSpendingLimit, nil, false}
}
require.Equal(derivedKeyEntry.ExpirationBlock, expirationBlockExpected)
require.Equal(derivedKeyEntry.OperationType, operationTypeExpected)
} else {
utxoView, err := mempool.GetAugmentedUniversalView()
require.NoError(err)
derivedKeyEntry := utxoView._getDerivedKeyMappingForOwner(senderPkBytes, derivedPublicKey)
// If we removed the derivedKeyEntry from utxoView altogether, it'll be nil.
// To pass the tests, we initialize it to a default struct.
if derivedKeyEntry == nil || derivedKeyEntry.isDeleted {
derivedKeyEntry = &DerivedKeyEntry{*NewPublicKey(senderPkBytes), *NewPublicKey(derivedPublicKey), 0, AuthorizeDerivedKeyOperationValid, nil, transactionSpendingLimit, nil, false}
}
require.Equal(derivedKeyEntry.ExpirationBlock, expirationBlockExpected)
require.Equal(derivedKeyEntry.OperationType, operationTypeExpected)
}
Member

Suggested change
// Verify that expiration block was persisted in the db or is in mempool utxoView
if mempool == nil {
derivedKeyEntry := NewDbAdapter(chain).GetOwnerToDerivedKeyMapping(*NewPublicKey(senderPkBytes), *NewPublicKey(derivedPublicKey))
// If we removed the derivedKeyEntry from utxoView altogether, it'll be nil.
// To pass the tests, we initialize it to a default struct.
if derivedKeyEntry == nil || derivedKeyEntry.isDeleted {
derivedKeyEntry = &DerivedKeyEntry{
*NewPublicKey(senderPkBytes), *NewPublicKey(derivedPublicKey), 0, AuthorizeDerivedKeyOperationValid, nil, transactionSpendingLimit, nil, false}
}
require.Equal(derivedKeyEntry.ExpirationBlock, expirationBlockExpected)
require.Equal(derivedKeyEntry.OperationType, operationTypeExpected)
} else {
utxoView, err := mempool.GetAugmentedUniversalView()
require.NoError(err)
derivedKeyEntry := utxoView._getDerivedKeyMappingForOwner(senderPkBytes, derivedPublicKey)
// If we removed the derivedKeyEntry from utxoView altogether, it'll be nil.
// To pass the tests, we initialize it to a default struct.
if derivedKeyEntry == nil || derivedKeyEntry.isDeleted {
derivedKeyEntry = &DerivedKeyEntry{*NewPublicKey(senderPkBytes), *NewPublicKey(derivedPublicKey), 0, AuthorizeDerivedKeyOperationValid, nil, transactionSpendingLimit, nil, false}
}
require.Equal(derivedKeyEntry.ExpirationBlock, expirationBlockExpected)
require.Equal(derivedKeyEntry.OperationType, operationTypeExpected)
}
var derivedKeyEntry *DerivedKeyEntry
// Verify that expiration block was persisted in the db or is in mempool utxoView
if mempool == nil {
derivedKeyEntry = NewDbAdapter(chain).GetOwnerToDerivedKeyMapping(*NewPublicKey(senderPkBytes), *NewPublicKey(derivedPublicKey))
} else {
utxoView, err := mempool.GetAugmentedUniversalView()
require.NoError(err)
derivedKeyEntry = utxoView._getDerivedKeyMappingForOwner(senderPkBytes, derivedPublicKey)
}
// If we removed the derivedKeyEntry from utxoView altogether, it'll be nil.
// To pass the tests, we initialize it to a default struct.
if derivedKeyEntry == nil || derivedKeyEntry.isDeleted {
derivedKeyEntry = &DerivedKeyEntry{*NewPublicKey(senderPkBytes), *NewPublicKey(derivedPublicKey), 0, AuthorizeDerivedKeyOperationValid, nil, transactionSpendingLimit, nil, false}
}
require.Equal(derivedKeyEntry.ExpirationBlock, expirationBlockExpected)
require.Equal(derivedKeyEntry.OperationType, operationTypeExpected)

we can de-dupe this code here.

@@ -508,7 +667,7 @@ func _getAuthorizeDerivedKeyMetadataWithTransactionSpendingLimitAndDerivedPrivat
expirationBlockByte := EncodeUint64(expirationBlock)
accessBytes := append(derivedPublicKey, expirationBlockByte[:]...)

transactionSpendingLimitBytes, err := transactionSpendingLimit.ToBytes()
transactionSpendingLimitBytes, err := transactionSpendingLimit.ToBytes(0)
Member

hmm we should probably be able to test before and after the block height. we can modify the block height at which the fork occurs by updating params.ForkHeights, so we may need to pass a height in here?
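A hedged sketch of that approach, assuming the fork is controlled by a field on params.ForkHeights (the exact field name below is an assumption) and that ToBytes now takes the block height:

// Move the fork to a known height in the test params (field name is assumed).
params.ForkHeights.UnlimitedDerivedKeysBlockHeight = 10

// Encode below the fork height (legacy encoding, no IsUnlimited field)...
preForkBytes, err := transactionSpendingLimit.ToBytes(5)
require.NoError(err)

// ...and at/above the fork height (new encoding that includes IsUnlimited).
postForkBytes, err := transactionSpendingLimit.ToBytes(10)
require.NoError(err)

// The two encodings should differ once the migration adds the IsUnlimited field.
require.NotEqual(preForkBytes, postForkBytes)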

Contributor Author

I did a very hardcore before/at/after block height test, lmk what you think

Comment on lines +3103 to +3110
transactionSpendingLimit = &TransactionSpendingLimit{
GlobalDESOLimit: 0,
TransactionCountLimitMap: make(map[TxnType]uint64),
CreatorCoinOperationLimitMap: make(map[CreatorCoinOperationLimitKey]uint64),
DAOCoinOperationLimitMap: make(map[DAOCoinOperationLimitKey]uint64),
NFTOperationLimitMap: make(map[NFTOperationLimitKey]uint64),
IsUnlimited: true,
}
Member

define a var for this unlimited spending limit struct?

Contributor Author

We only use this like twice in the entire file, I personally prefer it to be more descriptive.

Comment on lines +5070 to +5072
if tsl.IsUnlimited {
str += "FULL ACCESS"
}
Member

If it's full access, should we even include the spending limit object? seems a little unnecessary to have both

Contributor Author

Feels like if we don't indicate that it's full access, people might not realize how much permission this key has.

Member

It should certainly say full access, but we should leave out the other pieces of the spending limit object like the maps and the global DESO limit.
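A hedged sketch of what that could look like; the method name and the surrounding body are assumptions based on the tsl.IsUnlimited snippet above, not the PR's actual implementation.

func (tsl *TransactionSpendingLimit) ToReadableString() string {
	if tsl.IsUnlimited {
		// Full access: skip the global DESO limit and the per-operation maps entirely.
		return "FULL ACCESS\n"
	}
	str := fmt.Sprintf("Global DESO limit: %d nanos\n", tsl.GlobalDESOLimit)
	// ... append the per-transaction-type and per-operation maps here ...
	return str
}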

@AeonSw4n AeonSw4n changed the title Postgres testing framework & Unlimited spending limit Unlimited spending limit Aug 12, 2022
Base automatically changed from p/public-key-recovery-in-signature-verification to p/spending-limits-metamask-string September 6, 2022 00:16
* added MuteList to MessagingGroupEntry struct

* added TxnType validation in _connectMessagingGroup()

* added MuteList to MessagingGroupEntry (for txns adding new members)

* added muting and unmuting mechanism to _connectMessagingGroup()

* added MuteList to RawEncodeWithoutMetadata

* added MuteList to RawDecodeWithoutMetadata

* moved MuteList to end of MessagingGroupEntry for backwards compatibility

* added clarifying comment for MuteList

* corrected typo

* added MuteList to memberGroupEntry HACK

* changed iii to ii

* added RuleErrorMessagingMemberMuted

* major muting code added (needs cleanup)

* cleanup comments

* fixed for loop error

* deleted unused inline func

* added TODO for making MuteList retrieval more efficient

* fixed test typo

* commented out MuteList from hacked memberGroupEntry for now

* go.mod random change

* fixed bug

* fixed all pre-testing bugs

* FIXED ALL BUGS AND ADDED TESTS

* cleaned up comments

* 33rd waking hour and counting...

* added helpful comment

* fixed unmuting bug

* added unmuting tests and all successful

* code cleanup

* added MessagingGroupOperationMute and MessagingGroupOperationUnmute constants

* replaced more constants

* replaced more constants

* fixed deepEqual to compare byte slices and NOT PublicKeys

* fixed deepEqual to compare byte slices and NOT PublicKeys AGAIN

* added gated condition to have sender and recipient in ExtraData

* added comment

* removed code from _disconnectMessagingGroup

* added blockheight gating for messages muting

* fixed existingEntry.MuteList deep copy bug

* added encoder migration for DeSoV3MessagesMutingMigration

* fixed HUGE testnet bug and migration bug

* fixed muting code positioning

* fixed deep copy bug

* fixed extradata operationtype bug

* fixed redundant if condition

* made constant for MessagingGroupOperationType

* moved contains()

* throwing errors when muting already muted member or unmuting already unmuted member

* made concise

* removed comment

* added super helpful comment

* temporarily changed migration version to pass tests

* FIXED MAJOR ENCODE DECODE BUG

* added hacked entry optimization; fixed txn.PublicKey bug

* removed comment

* changed optimization comment

* added prefix deprecation and replacement code

* added more Deprecation code

* refactored db_utils funcs and created new OptimizedMessagingGroupEntry using better prefix key structure

* fixed refactoring bug; added more tests for muting while blockheight below threshold

* fixed new prefix name

* fixed 2 nits

* cleaned up 'contains' code

* added test; fixed deep equal bug

* added additional unmute test

* fixed deep equal nit

* fixed problematic loop; added test; added RuleError

* added code for groupowner not allowed to mute/unmute herself

* fixed conditional dup; added extra data merging

* deduplicated utxoOpsForTxn

* changed comment

* fixed comment grammar

* added enlightening comments

* added groupowner sender to ganggang in tests

* [stable] Release 2.2.6

* Fix IsNodeArchival flag to include SyncTypeBlockSync

This was causing nodes to reject other nodes as sync peers when they
have --sync-type=blocksync but --hypersync=false even though these nodes
are valid sync peers.

* Simplify connect logic; start making hacked member prefix more user-friendly

* Testing

* More thorough testing

* Temporary fix for newly-added state prefix

* Another fix

* fix encoding

* One more pass

* small rename

* another pass

* Fix txindex and gofmt

* Rename fork height

* Nina review round

* Fix nil utxoview fetch

Co-authored-by: Keshav Maheshwari <km02@bu.edu>
Co-authored-by: lazynina <lazynina84@gmail.com>
Co-authored-by: diamondhands <diamondhands@bitcloutdev.com>
@AeonSw4n AeonSw4n merged commit d0fb5c1 into p/spending-limits-metamask-string Sep 6, 2022
@AeonSw4n AeonSw4n deleted the p/unlimited-spending-limit branch September 6, 2022 00:33