From 4193c2b53b970313c18ba25ee8d8c3de9944a8d0 Mon Sep 17 00:00:00 2001
From: Xhens Basha
Date: Mon, 1 Jun 2020 17:16:51 +0200
Subject: [PATCH] [ASHF] Syncing Fork (#29)

* Address more concerns highlighted by linters

These changes remove dead code, add error checks, and assign unused
variables to the unnamed variable `_`.

Signed-off-by: Matthew Sykes

* Fixed write_first_app.rst typo

Signed-off-by: pratikpatil024

* FAB-17840 Ch.Part.API: Join channel REST handler (#1305)

* FAB-17840 Ch.Part.API: Join channel REST handler

Implement a handler for a POST request to join a channel. Here we
support requests carrying application/json; support for additional
Content-Types will be added later.

Signed-off-by: Yoav Tock
Change-Id: I8d09e3cba09842f2adc47fb60161eee814e33d31

* Review: simplify code

Signed-off-by: Yoav Tock
Change-Id: Iee1e8b66cb77f64b9762dee8f85304958081e0fe

* Review comments: simplify code

- extract error handling to a separate method
- extract channelID extraction to a separate method, reuse it
- test Accept header allowed range
- better comments

Signed-off-by: Yoav Tock
Change-Id: I11639a3f159694019e521e180e7fd5fadb42cb4f

* Review comments: support for multipart/form-data

Signed-off-by: Yoav Tock
Change-Id: Ic5a0307f56be5561f45910a02892dc7e7b9554d1

* Review comments: remove support for application/json

Signed-off-by: Yoav Tock
Change-Id: I38bb3564a0e6482b7bf9dca25bd8424e6db2ac95

* Spelling: s/chainocde/chaincode/g

Signed-off-by: Matthew Sykes

* Retire dormant Fabric maintainers

Retire maintainers that have been inactive for the last 3 months:

  Jonathan Levi
  Srinivasan Muralidharan

Note: "A maintainer removed for inactivity should be restored following
a sustained resumption of contributions and reviews (a month or more)
demonstrating a renewed commitment to the project."

Signed-off-by: David Enyeart

* add NOTICE file recommended per ASF and LF policy

Signed-off-by: Brett Logan

* Add NOTICE to license ignore list

Signed-off-by: Brett Logan

* Fix some typos in docs

This CR fixes the following typos in the docs:

- chainocode -> chaincode
- chainode -> chaincode
- lifecyle -> lifecycle
- Hyperlegder -> Hyperledger
- chanel -> channel
- scructured -> structured
- demostrate -> demonstrate
- certficates -> certificates
- how how -> how
- the the -> the
- a a -> a
- thefollowing -> the following

Signed-off-by: Nao Nishijima

* Remove txmgr interface

This commit removes the interface TxMgr, as there is a single
implementation (i.e., LockbasedTxmgr). All the dependent packages are
able to use this single implementation directly. The only exception is
the validation package, where doing so would cause a circular
dependency. The validation package uses only a single function from
this interface (NewTxSimulator), which gives it access to the
TxSimulator in order to allow the execution of a post-order transaction
(only the channel config transaction in the current code). To meet this
need, this commit defines a local interface with that single function
in the validation package itself.

Signed-off-by: manish

* FAB-17841: Ch.Part.API: Remove channel REST handler (#1330)

Handle a DELETE request to remove a channel.

- Resolve the removeStorage flag from config & query.
- If the system channel exists - reject with StatusMethodNotAllowed and
  the Allow header set to GET.
- If the channel does not exist - reject with StatusNotFound.
- On success - respond with StatusNoContent.
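A minimal sketch of the dispatch just listed (hypothetical names
throughout: the registrar interface and handler below are illustrative,
not the actual orderer participation API):

    package participation // hypothetical package, for illustration only

    import "net/http"

    // registrar is a hypothetical stand-in for the orderer's channel registrar.
    type registrar interface {
    	SystemChannelExists() bool
    	ChannelExists(channelID string) bool
    	RemoveChannel(channelID string, removeStorage bool) error
    }

    // serveRemove applies the DELETE rules listed above.
    func serveRemove(reg registrar, channelID string, removeStorage bool, w http.ResponseWriter) {
    	switch {
    	case reg.SystemChannelExists():
    		// Channel management goes through the system channel; only GET is allowed.
    		w.Header().Set("Allow", http.MethodGet)
    		w.WriteHeader(http.StatusMethodNotAllowed)
    	case !reg.ChannelExists(channelID):
    		w.WriteHeader(http.StatusNotFound)
    	default:
    		if err := reg.RemoveChannel(channelID, removeStorage); err != nil {
    			http.Error(w, err.Error(), http.StatusInternalServerError)
    			return
    		}
    		w.WriteHeader(http.StatusNoContent)
    	}
    }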
Signed-off-by: Yoav Tock
Change-Id: I78581df3c7f0cb99007edddc83eb7a6dca5eba07

* Update commercial paper doc to use enrollUser.js

Signed-off-by: NIKHIL E GUPTA

* Use protolator from fabric-config

Signed-off-by: Matthew Sykes

* Revert "Bump viper version to the last working commit"

This reverts commit 5ad0a4f79c448539e722e9b94da78c543a274bf8.

Signed-off-by: Brett Logan

* _lifecycle ignore previous build failure during install (#1280)

When a chaincode build previously failed while installing the
chaincode, _lifecycle should ignore the cached error and attempt to
rebuild the chaincode. This is because we should assume the end user
knows something about why the build may succeed on retry if they're
reattempting an install.

Also, update integration tests to not care about the exit status of
chaincode installs (since reinstalls now error).

FAB-17907

Signed-off-by: Will Lahti

* [FAB-17927] Add AllChaincodesInfo to DeployedChaincodeInfoProvider (#1331)

Implement AllChaincodesInfo to query the ledger and return chaincode
info for all the deployed chaincodes (both new lifecycle and legacy
chaincodes) on a channel. This function is needed for ledger
checkpointing and deletion of channel-specific databases in
statecouchdb.

Signed-off-by: Wenjian Qiao

* [FAB-17471] Fix OrdererType key to correct line

The SampleInsecureKafka profile tries to change the consensus type to
kafka using OrdererType, but the key was written on the line above
"<<: *OrdererDefaults", so the merge overwrote the consensus type
again. This CR moves OrdererType to the correct line so that a genesis
block with the correct orderer type is generated.

Signed-off-by: Nao Nishijima

* [FAB-17059] Add missing mspID when initializing MembershipProvider.

Signed-off-by: Hongbin Mao

* fsync and do not generate empty files in snapshots (#1345)

This commit fixes the following issues:

- Performs fsync on the snapshot files
- Does not generate empty files
- Uses filepath instead of the path package

Signed-off-by: manish

* fix infinite loop during full range query (#1347)

We use a single scanner to serve both paginated and non-paginated range
queries, governed by internalQueryLimit and pageSize. For each
_all_docs?startKey="XXX"&endKey="YYY" REST API call to CouchDB, we
fetch at most internalQueryLimit records by appending
limit=internalQueryLimit. When the requested pageSize, or the total
number of records present in the given range, is higher than the
internalQueryLimit, the iterator executes the query again once the
records fetched in the first cycle are consumed, and so on. To do that,
after each execution of the REST API call, it updates the initially
passed startKey to the nextStartKey; if there is no nextStartKey, it is
set to the endKey.

Currently, when the nextStartKey and the endKey are the same, we still
run one REST API call, which is actually not needed as we always set
inclusive_end=false. However, this causes an infinite loop in a
particular case: to retrieve all the records, we pass an empty string
as both the startKey and the endKey. When the startKey is an empty
string, the REST API call becomes _all_docs?endKey="YYY"; when both are
empty, it becomes _all_docs. Given that we set startKey to endKey when
there is no nextStartKey and still execute one REST API call, the
scanner fetches all records again and again.
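The guard that breaks this loop can be sketched as follows, pared down
from the statecouchdb.go hunk in this patch (the scanner types are
reduced to the fields that matter here):

    package statecouchdb // sketch of the change, not the real file

    type queryDefinition struct {
    	startKey, endKey string
    }

    type queryScanner struct {
    	queryDefinition *queryDefinition
    	exhausted       bool
    }

    // advance records the bookmark returned by CouchDB after one fetch cycle.
    func (scanner *queryScanner) advance(nextStartKey string) {
    	if scanner.queryDefinition.endKey == nextStartKey {
    		// inclusive_end=false means the endKey itself is never returned,
    		// so reaching it leaves nothing to fetch. Without this guard, a
    		// full range query (startKey == endKey == "") degenerates into
    		// repeated _all_docs calls, i.e., the infinite loop.
    		scanner.exhausted = true
    	}
    	// startKey is still updated because it doubles as the bookmark.
    	scanner.queryDefinition.startKey = nextStartKey
    }

On the next Next() call, the exhausted flag short-circuits the re-query
(see the statecouchdb.go hunk below).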
We avoid this infinite loop by setting scanner.exhausted = true when
the startKey and the endKey become the same, i.e., when there is no
nextStartKey.

Signed-off-by: senthil

* Remove unnecessary extension of osn (#1351)

In the integration test touched by this patch, a cert file is read in
order to remove and add a node. This can be achieved by reading the
file bytes directly, without extending the network to grab an
intermediate object.

Signed-off-by: Jay Guo

* [FAB-17935] Change unnecessary warning log line to debug in gossip
privdata (#1350)

Signed-off-by: Danny Cao

* [FAB-17900] Update BCCSP.PKCS11.Pin in examples

- The Pin value must be quoted when specified in the YAML

Signed-off-by: Tiffany Harris

* [FAB-17900] Fixes numeric env variable override bug

Signed-off-by: Tiffany Harris

* Remove s390x, powerpc64le from RELEASE_PLATFORMS

- Release automation only creates amd64 binaries for darwin, linux,
  and windows.
- Continuous integration no longer runs on powerpc64le or s390x.

Also remove stale build tags related to plugins and race detection for
old versions of go, s390x, and ppc64le.

Signed-off-by: Matthew Sykes

* Backfill test for BCCSP environment overrides...

... and consistently use SW as the key for `SwOpts` in the
configuration structures. Right now the tags for mapstructure and JSON
do not match the tags for YAML, yet our sample configuration documents
(in YAML) use `SW`.

Signed-off-by: Matthew Sykes

* Updates in master for v2.1.1 release

Update master doc and bootstrap script for v2.1.1 release.

Signed-off-by: David Enyeart

Co-authored-by: Matthew Sykes
Co-authored-by: pratikpatil024
Co-authored-by: Yoav Tock
Co-authored-by: Matthew Sykes
Co-authored-by: David Enyeart
Co-authored-by: Christopher Ferris
Co-authored-by: Brett Logan
Co-authored-by: Nao Nishijima
Co-authored-by: manish
Co-authored-by: NIKHIL E GUPTA
Co-authored-by: Will Lahti
Co-authored-by: Wenjian Qiao
Co-authored-by: Hongbin Mao
Co-authored-by: Senthil Nathan N
Co-authored-by: Jay Guo
Co-authored-by: Danny Cao
Co-authored-by: Tiffany Harris
---
 Makefile | 2 +-
 bccsp/factory/nopkcs11.go | 2 +-
 bccsp/factory/pkcs11.go | 2 +-
 bccsp/pkcs11/impl.go | 8 +-
 common/ledger/blkstorage/blockindex.go | 24 +--
 common/ledger/blkstorage/blockindex_test.go | 33 ++--
 common/ledger/snapshot/file.go | 3 +
 common/viperutil/config_test.go | 59 ++++++--
 common/viperutil/config_util.go | 3 +-
 core/chaincode/platforms/golang/platform.go | 1 -
 core/common/privdata/membershipinfo.go | 2 +-
 core/common/privdata/membershipinfo_test.go | 19 ++-
 core/handlers/library/race_test.go | 2 -
 core/handlers/library/registry_plugin_test.go | 3 -
 core/ledger/confighistory/mgr.go | 26 ++--
 core/ledger/confighistory/mgr_test.go | 22 ++-
 .../txmgmt/privacyenabledstate/snapshot.go | 83 +++++-----
 .../privacyenabledstate/snapshot_test.go | 60 +++++---
 .../statedb/statecouchdb/statecouchdb.go | 14 +-
 .../statedb/statecouchdb/statecouchdb_test.go | 142 ++++++++++++++++++
 docs/source/hsm.md | 4 +-
 docs/source/install.rst | 4 +-
 docs/source/whatsnew.rst | 1 +
 gossip/privdata/coordinator.go | 2 +-
 integration/raft/config_test.go | 18 +--
 internal/peer/common/common.go | 2 +-
 sampleconfig/configtx.yaml | 2 +-
 scripts/bootstrap.sh | 6 +-
 scripts/run-unit-tests.sh | 5 -
 29 files changed, 398 insertions(+), 156 deletions(-)

diff --git a/Makefile b/Makefile
index a3b60f3fc65..8b83aa3cd2e 100644
--- a/Makefile
+++ b/Makefile
@@ -81,7 +81,7 @@ GO_TAGS ?=
 RELEASE_EXES = orderer $(TOOLS_EXES)
 RELEASE_IMAGES = baseos ccenv orderer peer tools
-RELEASE_PLATFORMS = darwin-amd64 linux-amd64 linux-ppc64le
linux-s390x windows-amd64 +RELEASE_PLATFORMS = darwin-amd64 linux-amd64 windows-amd64 TOOLS_EXES = configtxgen configtxlator cryptogen discover idemixgen peer pkgmap.configtxgen := $(PKGNAME)/cmd/configtxgen diff --git a/bccsp/factory/nopkcs11.go b/bccsp/factory/nopkcs11.go index 09f52278e15..7896cf2bb0d 100644 --- a/bccsp/factory/nopkcs11.go +++ b/bccsp/factory/nopkcs11.go @@ -18,7 +18,7 @@ const pkcs11Enabled = false // FactoryOpts holds configuration information used to initialize factory implementations type FactoryOpts struct { ProviderName string `mapstructure:"default" json:"default" yaml:"Default"` - SwOpts *SwOpts `mapstructure:"SW,omitempty" json:"SW,omitempty" yaml:"SwOpts"` + SwOpts *SwOpts `mapstructure:"SW,omitempty" json:"SW,omitempty" yaml:"SW,omitempty"` } // InitFactories must be called before using factory interfaces diff --git a/bccsp/factory/pkcs11.go b/bccsp/factory/pkcs11.go index 01ec38827cd..a0a1932dac2 100644 --- a/bccsp/factory/pkcs11.go +++ b/bccsp/factory/pkcs11.go @@ -19,7 +19,7 @@ const pkcs11Enabled = false // FactoryOpts holds configuration information used to initialize factory implementations type FactoryOpts struct { ProviderName string `mapstructure:"default" json:"default" yaml:"Default"` - SwOpts *SwOpts `mapstructure:"SW,omitempty" json:"SW,omitempty" yaml:"SwOpts"` + SwOpts *SwOpts `mapstructure:"SW,omitempty" json:"SW,omitempty" yaml:"SW,omitempty"` Pkcs11Opts *pkcs11.PKCS11Opts `mapstructure:"PKCS11,omitempty" json:"PKCS11,omitempty" yaml:"PKCS11"` } diff --git a/bccsp/pkcs11/impl.go b/bccsp/pkcs11/impl.go index 519f20429c0..11a9c222276 100644 --- a/bccsp/pkcs11/impl.go +++ b/bccsp/pkcs11/impl.go @@ -226,17 +226,13 @@ func (csp *impl) Decrypt(k bccsp.Key, ciphertext []byte, opts bccsp.DecrypterOpt // This is a convenience function. 
Useful to self-configure, for tests where usual configuration is not // available func FindPKCS11Lib() (lib, pin, label string) { - //FIXME: Till we workout the configuration piece, look for the libraries in the familiar places lib = os.Getenv("PKCS11_LIB") if lib == "" { pin = "98765432" label = "ForFabric" possibilities := []string{ - "/usr/lib/softhsm/libsofthsm2.so", //Debian - "/usr/lib/x86_64-linux-gnu/softhsm/libsofthsm2.so", //Ubuntu - "/usr/lib/s390x-linux-gnu/softhsm/libsofthsm2.so", //Ubuntu - "/usr/lib/powerpc64le-linux-gnu/softhsm/libsofthsm2.so", //Power - "/usr/local/Cellar/softhsm/2.5.0/lib/softhsm/libsofthsm2.so", //MacOS + "/usr/lib/softhsm/libsofthsm2.so", //Debian + "/usr/lib/x86_64-linux-gnu/softhsm/libsofthsm2.so", //Ubuntu } for _, path := range possibilities { if _, err := os.Stat(path); !os.IsNotExist(err) { diff --git a/common/ledger/blkstorage/blockindex.go b/common/ledger/blkstorage/blockindex.go index 1cec440200b..2658544e721 100644 --- a/common/ledger/blkstorage/blockindex.go +++ b/common/ledger/blkstorage/blockindex.go @@ -9,7 +9,7 @@ package blkstorage import ( "bytes" "fmt" - "path" + "path/filepath" "unicode/utf8" "github.com/golang/protobuf/proto" @@ -260,13 +260,6 @@ func (index *blockIndex) exportUniqueTxIDs(dir string, newHashFunc snapshot.NewH return nil, ErrAttrNotIndexed } - // create the data file - dataFile, err := snapshot.CreateFile(path.Join(dir, snapshotDataFileName), snapshotFileFormat, newHashFunc) - if err != nil { - return nil, err - } - defer dataFile.Close() - dbItr := index.db.GetIterator([]byte{txIDIdxKeyPrefix}, []byte{txIDIdxKeyPrefix + 1}) defer dbItr.Release() if err := dbItr.Error(); err != nil { @@ -275,6 +268,8 @@ func (index *blockIndex) exportUniqueTxIDs(dir string, newHashFunc snapshot.NewH var previousTxID string var numTxIDs uint64 = 0 + var dataFile *snapshot.FileWriter + var err error for dbItr.Next() { if err := dbItr.Error(); err != nil { return nil, errors.Wrap(err, "internal leveldb error while iterating for txids") @@ -288,19 +283,30 @@ func (index *blockIndex) exportUniqueTxIDs(dir string, newHashFunc snapshot.NewH continue } previousTxID = txID + if numTxIDs == 0 { // first iteration, create the data file + dataFile, err = snapshot.CreateFile(filepath.Join(dir, snapshotDataFileName), snapshotFileFormat, newHashFunc) + if err != nil { + return nil, err + } + defer dataFile.Close() + } if err := dataFile.EncodeString(txID); err != nil { return nil, err } numTxIDs++ } + if dataFile == nil { + return nil, nil + } + dataHash, err := dataFile.Done() if err != nil { return nil, err } // create the metadata file - metadataFile, err := snapshot.CreateFile(path.Join(dir, snapshotMetadataFileName), snapshotFileFormat, newHashFunc) + metadataFile, err := snapshot.CreateFile(filepath.Join(dir, snapshotMetadataFileName), snapshotFileFormat, newHashFunc) if err != nil { return nil, err } diff --git a/common/ledger/blkstorage/blockindex_test.go b/common/ledger/blkstorage/blockindex_test.go index 18098b5dc6b..8cfa73d2ca8 100644 --- a/common/ledger/blkstorage/blockindex_test.go +++ b/common/ledger/blkstorage/blockindex_test.go @@ -12,7 +12,7 @@ import ( "hash" "io/ioutil" "os" - "path" + "path/filepath" "testing" "github.com/hyperledger/fabric-protos-go/common" @@ -270,20 +270,27 @@ func TestExportUniqueTxIDs(t *testing.T) { defer blkfileMgrWrapper.close() blkfileMgr := blkfileMgrWrapper.blockfileMgr - bg, gb := testutil.NewBlockGenerator(t, "myChannel", false) - blkfileMgr.addBlock(gb) - testSnapshotDir := testPath() defer 
os.RemoveAll(testSnapshotDir) + // empty store generates no output + fileHashes, err := blkfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc) + require.NoError(t, err) + require.Empty(t, fileHashes) + files, err := ioutil.ReadDir(testSnapshotDir) + require.NoError(t, err) + require.Len(t, files, 0) + // add genesis block and test the exported bytes + bg, gb := testutil.NewBlockGenerator(t, "myChannel", false) + blkfileMgr.addBlock(gb) configTxID, err := protoutil.GetOrComputeTxIDFromEnvelope(gb.Data.Data[0]) require.NoError(t, err) - fileHashes, err := blkfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc) + fileHashes, err = blkfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc) require.NoError(t, err) verifyExportedTxIDs(t, testSnapshotDir, fileHashes, configTxID) - os.Remove(path.Join(testSnapshotDir, snapshotDataFileName)) - os.Remove(path.Join(testSnapshotDir, snapshotMetadataFileName)) + os.Remove(filepath.Join(testSnapshotDir, snapshotDataFileName)) + os.Remove(filepath.Join(testSnapshotDir, snapshotMetadataFileName)) // add block-1 and test the exported bytes block1 := bg.NextBlockWithTxid( @@ -300,8 +307,8 @@ func TestExportUniqueTxIDs(t *testing.T) { fileHashes, err = blkfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc) require.NoError(t, err) verifyExportedTxIDs(t, testSnapshotDir, fileHashes, "txid-1", "txid-2", "txid-3", configTxID) //"txid-1" appears once, Txids appear in radix sort order - os.Remove(path.Join(testSnapshotDir, snapshotDataFileName)) - os.Remove(path.Join(testSnapshotDir, snapshotMetadataFileName)) + os.Remove(filepath.Join(testSnapshotDir, snapshotDataFileName)) + os.Remove(filepath.Join(testSnapshotDir, snapshotMetadataFileName)) // add block-2 and test the exported bytes block2 := bg.NextBlockWithTxid( @@ -351,7 +358,7 @@ func TestExportUniqueTxIDsErrorCases(t *testing.T) { defer os.RemoveAll(testSnapshotDir) // error during data file creation - dataFilePath := path.Join(testSnapshotDir, snapshotDataFileName) + dataFilePath := filepath.Join(testSnapshotDir, snapshotDataFileName) _, err := os.Create(dataFilePath) require.NoError(t, err) _, err = blkfileMgrWrapper.blockfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc) @@ -361,7 +368,7 @@ func TestExportUniqueTxIDsErrorCases(t *testing.T) { // error during metadata file creation fmt.Printf("testSnapshotDir=%s", testSnapshotDir) require.NoError(t, os.MkdirAll(testSnapshotDir, 0700)) - metadataFilePath := path.Join(testSnapshotDir, snapshotMetadataFileName) + metadataFilePath := filepath.Join(testSnapshotDir, snapshotMetadataFileName) _, err = os.Create(metadataFilePath) require.NoError(t, err) _, err = blkfileMgrWrapper.blockfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc) @@ -388,13 +395,13 @@ func verifyExportedTxIDs(t *testing.T, dir string, fileHashes map[string][]byte, require.Contains(t, fileHashes, snapshotDataFileName) require.Contains(t, fileHashes, snapshotMetadataFileName) - dataFile := path.Join(dir, snapshotDataFileName) + dataFile := filepath.Join(dir, snapshotDataFileName) dataFileContent, err := ioutil.ReadFile(dataFile) require.NoError(t, err) dataFileHash := sha256.Sum256(dataFileContent) require.Equal(t, dataFileHash[:], fileHashes[snapshotDataFileName]) - metadataFile := path.Join(dir, snapshotMetadataFileName) + metadataFile := filepath.Join(dir, snapshotMetadataFileName) metadataFileContent, err := ioutil.ReadFile(metadataFile) require.NoError(t, err) metadataFileHash := 
sha256.Sum256(metadataFileContent) diff --git a/common/ledger/snapshot/file.go b/common/ledger/snapshot/file.go index 4cd16a3f4fc..322e2398961 100644 --- a/common/ledger/snapshot/file.go +++ b/common/ledger/snapshot/file.go @@ -98,6 +98,9 @@ func (c *FileWriter) Done() ([]byte, error) { if err := c.bufWriter.Flush(); err != nil { return nil, errors.Wrapf(err, "error while flushing to the snapshot file: %s ", c.file.Name()) } + if err := c.file.Sync(); err != nil { + return nil, err + } if err := c.file.Close(); err != nil { return nil, errors.Wrapf(err, "error while closing the snapshot file: %s ", c.file.Name()) } diff --git a/common/viperutil/config_test.go b/common/viperutil/config_test.go index 1deea18638f..3cad53c2b13 100644 --- a/common/viperutil/config_test.go +++ b/common/viperutil/config_test.go @@ -16,19 +16,20 @@ import ( "testing" "github.com/Shopify/sarama" + "github.com/hyperledger/fabric/bccsp/factory" "github.com/hyperledger/fabric/orderer/mocks/util" "github.com/spf13/viper" ) const Prefix = "VIPERUTIL" -type testSlice struct { - Inner struct { - Slice []string +func TestEnvSlice(t *testing.T) { + type testSlice struct { + Inner struct { + Slice []string + } } -} -func TestEnvSlice(t *testing.T) { envVar := "VIPERUTIL_INNER_SLICE" envVal := "[a, b, c]" os.Setenv(envVar, envVal) @@ -49,9 +50,7 @@ func TestEnvSlice(t *testing.T) { } var uconf testSlice - - err = EnhancedExactUnmarshal(config, &uconf) - if err != nil { + if err := EnhancedExactUnmarshal(config, &uconf); err != nil { t.Fatalf("Failed to unmarshal with: %s", err) } @@ -62,7 +61,6 @@ func TestEnvSlice(t *testing.T) { } func TestKafkaVersionDecode(t *testing.T) { - type testKafkaVersion struct { Inner struct { Version sarama.KafkaVersion @@ -405,7 +403,6 @@ func TestStringFromFileEnv(t *testing.T) { }{ {"Override", "---\nInner:\n Single:\n File: wrong_file"}, {"NoFileElement", "---\nInner:\n Single:\n"}, - // {"NoElementAtAll", "---\nInner:\n"}, test case for another time } for _, tc := range testCases { @@ -439,7 +436,6 @@ func TestStringFromFileEnv(t *testing.T) { } }) } - } func TestDecodeOpaqueField(t *testing.T) { @@ -458,10 +454,49 @@ Hello: Hello struct{ World int } } if err := EnhancedExactUnmarshal(config, &conf); err != nil { - t.Fatalf("Error unmashalling: %s", err) + t.Fatalf("Error unmarshalling: %s", err) } if conf.Foo != "bar" || conf.Hello.World != 42 { t.Fatalf("Incorrect decoding") } } + +func TestBCCSPDecodeHookOverride(t *testing.T) { + type testConfig struct { + BCCSP *factory.FactoryOpts + } + yaml := ` +BCCSP: + Default: default-provider + SW: + Security: 999 +` + + config := viper.New() + config.SetEnvPrefix("VIPERUTIL") + config.AutomaticEnv() + replacer := strings.NewReplacer(".", "_") + config.SetEnvKeyReplacer(replacer) + config.SetConfigType("yaml") + + overrideVar := "VIPERUTIL_BCCSP_SW_SECURITY" + os.Setenv(overrideVar, "1111") + defer os.Unsetenv(overrideVar) + if err := config.ReadConfig(strings.NewReader(yaml)); err != nil { + t.Fatalf("Error reading config: %s", err) + } + + var tc testConfig + if err := EnhancedExactUnmarshal(config, &tc); err != nil { + t.Fatalf("Error unmarshaling: %s", err) + } + + if tc.BCCSP == nil || tc.BCCSP.SwOpts == nil { + t.Fatalf("expected BCCSP.SW to be non-nil: %#v", tc) + } + + if tc.BCCSP.SwOpts.SecLevel != 1111 { + t.Fatalf("expected BCCSP.SW.SecLevel to equal 1111 but was %v\n", tc.BCCSP.SwOpts.SecLevel) + } +} diff --git a/common/viperutil/config_util.go b/common/viperutil/config_util.go index 6c6c02109d0..64326fff0b0 100644 --- 
a/common/viperutil/config_util.go +++ b/common/viperutil/config_util.go @@ -93,6 +93,7 @@ func getKeysRecursively(base string, getKey viperGetter, nodeKeys map[string]int func unmarshalJSON(val interface{}) (map[string]string, bool) { mp := map[string]string{} + s, ok := val.(string) if !ok { logger.Debugf("Unmarshal JSON: value is not a string: %v", val) @@ -303,7 +304,7 @@ func bccspHook(f reflect.Type, t reflect.Type, data interface{}) (interface{}, e config := factory.GetDefaultOpts() - err := mapstructure.Decode(data, config) + err := mapstructure.WeakDecode(data, config) if err != nil { return nil, errors.Wrap(err, "could not decode bcssp type") } diff --git a/core/chaincode/platforms/golang/platform.go b/core/chaincode/platforms/golang/platform.go index d3812f624f1..de9f417bdc5 100644 --- a/core/chaincode/platforms/golang/platform.go +++ b/core/chaincode/platforms/golang/platform.go @@ -490,7 +490,6 @@ func distributions() []dist { // pre-populate linux architecutures dists := map[dist]bool{ {goos: "linux", goarch: "amd64"}: true, - {goos: "linux", goarch: "s390x"}: true, } // add local OS and ARCH diff --git a/core/common/privdata/membershipinfo.go b/core/common/privdata/membershipinfo.go index 12ce4bf3954..6f5aa294ae2 100644 --- a/core/common/privdata/membershipinfo.go +++ b/core/common/privdata/membershipinfo.go @@ -24,7 +24,7 @@ type MembershipProvider struct { // NewMembershipInfoProvider returns MembershipProvider func NewMembershipInfoProvider(mspID string, selfSignedData protoutil.SignedData, identityDeserializerFunc func(chainID string) msp.IdentityDeserializer) *MembershipProvider { - return &MembershipProvider{selfSignedData: selfSignedData, IdentityDeserializerFactory: identityDeserializerFunc} + return &MembershipProvider{mspID: mspID, selfSignedData: selfSignedData, IdentityDeserializerFactory: identityDeserializerFunc} } // AmMemberOf checks whether the current peer is a member of the given collection config. 
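One note on the mapstructure.WeakDecode swap above (in config_util.go;
the same change appears in internal/peer/common/common.go further
down): environment-variable overrides reach viper as strings, so strict
decoding rejects them for numeric fields and only weak decoding
performs the conversion. A standalone sketch of the difference, not
Fabric code:

    package main

    import (
    	"fmt"

    	"github.com/mitchellh/mapstructure"
    )

    // swOpts mimics a numeric config field such as BCCSP.SW.Security.
    type swOpts struct {
    	SecLevel int `mapstructure:"security"`
    }

    func main() {
    	// Viper surfaces VIPERUTIL_BCCSP_SW_SECURITY=1111 as the string "1111".
    	input := map[string]interface{}{"security": "1111"}

    	var strict swOpts
    	fmt.Println(mapstructure.Decode(input, &strict)) // error: cannot decode string into int

    	var weak swOpts
    	if err := mapstructure.WeakDecode(input, &weak); err == nil {
    		fmt.Println(weak.SecLevel) // prints 1111
    	}
    }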
diff --git a/core/common/privdata/membershipinfo_test.go b/core/common/privdata/membershipinfo_test.go index fa8355070d0..6f6a3f57ffa 100644 --- a/core/common/privdata/membershipinfo_test.go +++ b/core/common/privdata/membershipinfo_test.go @@ -23,18 +23,20 @@ func TestMembershipInfoProvider(t *testing.T) { Signature: []byte{1, 2, 3}, Data: []byte{4, 5, 6}, } + emptyPeerSelfSignedData := protoutil.SignedData{} identityDeserializer := func(chainID string) msp.IdentityDeserializer { return &mockDeserializer{} } - // verify membership provider returns true - membershipProvider := NewMembershipInfoProvider(mspID, peerSelfSignedData, identityDeserializer) + // verify membership provider pass simple check returns true + membershipProvider := NewMembershipInfoProvider(mspID, emptyPeerSelfSignedData, identityDeserializer) res, err := membershipProvider.AmMemberOf("test1", getAccessPolicy([]string{"peer0", "peer1"})) assert.True(t, res) assert.Nil(t, err) - // verify membership provider returns false + // verify membership provider fall back to default access policy evaluation returns false + membershipProvider = NewMembershipInfoProvider(mspID, peerSelfSignedData, identityDeserializer) res, err = membershipProvider.AmMemberOf("test1", getAccessPolicy([]string{"peer2", "peer3"})) assert.False(t, res) assert.Nil(t, err) @@ -48,6 +50,17 @@ func TestMembershipInfoProvider(t *testing.T) { res, err = membershipProvider.AmMemberOf("test1", getBadAccessPolicy([]string{"signer0"}, 1)) assert.False(t, res) assert.Nil(t, err) + + // verify membership provider with empty mspID and fall back to default access policy evaluation returns true + membershipProvider = NewMembershipInfoProvider("", peerSelfSignedData, identityDeserializer) + res, err = membershipProvider.AmMemberOf("test1", getAccessPolicy([]string{"peer0", "peer1"})) + assert.True(t, res) + assert.Nil(t, err) + + // verify membership provider with empty mspID and fall back to default access policy evaluation returns false + res, err = membershipProvider.AmMemberOf("test1", getAccessPolicy([]string{"peer2", "peer3"})) + assert.False(t, res) + assert.Nil(t, err) } func getAccessPolicy(signers []string) *peer.CollectionPolicyConfig { diff --git a/core/handlers/library/race_test.go b/core/handlers/library/race_test.go index dcaf8421286..dd12f0eab19 100644 --- a/core/handlers/library/race_test.go +++ b/core/handlers/library/race_test.go @@ -1,6 +1,4 @@ // +build race -// +build go1.9,linux,cgo go1.10,darwin,cgo -// +build !ppc64le /* Copyright IBM Corp. All Rights Reserved. diff --git a/core/handlers/library/registry_plugin_test.go b/core/handlers/library/registry_plugin_test.go index 50f27502784..802ed440719 100644 --- a/core/handlers/library/registry_plugin_test.go +++ b/core/handlers/library/registry_plugin_test.go @@ -1,6 +1,3 @@ -// +build go1.9,linux,cgo go1.10,darwin,cgo -// +build !ppc64le - /* Copyright SecureKey Technologies Inc. All Rights Reserved. diff --git a/core/ledger/confighistory/mgr.go b/core/ledger/confighistory/mgr.go index f32c53f4ec2..11bfa831906 100644 --- a/core/ledger/confighistory/mgr.go +++ b/core/ledger/confighistory/mgr.go @@ -8,7 +8,7 @@ package confighistory import ( "fmt" - "path" + "path/filepath" "github.com/golang/protobuf/proto" "github.com/hyperledger/fabric-protos-go/common" @@ -181,23 +181,27 @@ func (r *Retriever) CollectionConfigAt(blockNum uint64, chaincodeName string) (* // extra bytes. Further, the collection config namespace is not expected to have // millions of entries. 
func (r *Retriever) ExportConfigHistory(dir string, newHashFunc snapshot.NewHashFunc) (map[string][]byte, error) { - dataFileWriter, err := snapshot.CreateFile(path.Join(dir, snapshotDataFileName), snapshotFileFormat, newHashFunc) - if err != nil { - return nil, err - } - defer dataFileWriter.Close() - nsItr := r.dbHandle.getNamespaceIterator(collectionConfigNamespace) if err := nsItr.Error(); err != nil { return nil, errors.Wrap(err, "internal leveldb error while obtaining db iterator") } defer nsItr.Release() + var numCollectionConfigs uint64 = 0 + var dataFileWriter *snapshot.FileWriter + var err error for nsItr.Next() { if err := nsItr.Error(); err != nil { return nil, errors.Wrap(err, "internal leveldb error while iterating for collection config history") } + if numCollectionConfigs == 0 { // first iteration, create the data file + dataFileWriter, err = snapshot.CreateFile(filepath.Join(dir, snapshotDataFileName), snapshotFileFormat, newHashFunc) + if err != nil { + return nil, err + } + defer dataFileWriter.Close() + } if err := dataFileWriter.EncodeBytes(nsItr.Key()); err != nil { return nil, err } @@ -206,12 +210,16 @@ func (r *Retriever) ExportConfigHistory(dir string, newHashFunc snapshot.NewHash } numCollectionConfigs++ } + + if dataFileWriter == nil { + return nil, nil + } + dataHash, err := dataFileWriter.Done() if err != nil { return nil, err } - - metadataFileWriter, err := snapshot.CreateFile(path.Join(dir, snapshotMetadataFileName), snapshotFileFormat, newHashFunc) + metadataFileWriter, err := snapshot.CreateFile(filepath.Join(dir, snapshotMetadataFileName), snapshotFileFormat, newHashFunc) if err != nil { return nil, err } diff --git a/core/ledger/confighistory/mgr_test.go b/core/ledger/confighistory/mgr_test.go index 214aa4df7bc..c7b46e04f92 100644 --- a/core/ledger/confighistory/mgr_test.go +++ b/core/ledger/confighistory/mgr_test.go @@ -308,9 +308,10 @@ func TestExportConfigHistory(t *testing.T) { // config history database is empty fileHashes, err := env.retriever.ExportConfigHistory(env.testSnapshotDir, testNewHashFunc) require.NoError(t, err) - verifyExportedConfigHistory(t, env.testSnapshotDir, fileHashes, nil) - os.Remove(path.Join(env.testSnapshotDir, snapshotDataFileName)) - os.Remove(path.Join(env.testSnapshotDir, snapshotMetadataFileName)) + require.Empty(t, fileHashes) + files, err := ioutil.ReadDir(env.testSnapshotDir) + require.NoError(t, err) + require.Len(t, files, 0) // config history database has 3 chaincodes each with 1 collection config entry in the // collectionConfigNamespace @@ -426,10 +427,23 @@ func verifyExportedConfigHistory(t *testing.T, dir string, fileHashes map[string func TestExportConfigHistoryErrorCase(t *testing.T) { env := newTestEnvForSnapshot(t) defer env.cleanup() + + dbHandle := env.mgr.dbProvider.getDB("ledger1") + cc1collConfigPackage := testutilCreateCollConfigPkg([]string{"Explicit-cc1-coll-1", "Explicit-cc1-coll-2"}) + batch, err := prepareDBBatch( + map[string]*peer.CollectionConfigPackage{ + "chaincode1": cc1collConfigPackage, + }, + 50, + ) + assert.NoError(t, err) + assert.NoError(t, dbHandle.writeBatch(batch, true)) + // error during data file creation dataFilePath := path.Join(env.testSnapshotDir, snapshotDataFileName) - _, err := os.Create(dataFilePath) + _, err = os.Create(dataFilePath) require.NoError(t, err) + _, err = env.retriever.ExportConfigHistory(env.testSnapshotDir, testNewHashFunc) require.Contains(t, err.Error(), "error while creating the snapshot file: "+dataFilePath) os.RemoveAll(env.testSnapshotDir) diff 
--git a/core/ledger/kvledger/txmgmt/privacyenabledstate/snapshot.go b/core/ledger/kvledger/txmgmt/privacyenabledstate/snapshot.go index 2575b729e2b..f14386cd412 100644 --- a/core/ledger/kvledger/txmgmt/privacyenabledstate/snapshot.go +++ b/core/ledger/kvledger/txmgmt/privacyenabledstate/snapshot.go @@ -8,7 +8,7 @@ package privacyenabledstate import ( "hash" - "path" + "path/filepath" "github.com/hyperledger/fabric/common/ledger/snapshot" "github.com/hyperledger/fabric/core/ledger/kvledger/txmgmt/statedb" @@ -33,28 +33,8 @@ func (s *DB) ExportPubStateAndPvtStateHashes(dir string, newHashFunc snapshot.Ne } defer itr.Close() - pubStateWriter, err := newSnapshotWriter( - path.Join(dir, pubStateDataFileName), - path.Join(dir, pubStateMetadataFileName), - dbValueFormat, - newHashFunc, - ) - if err != nil { - return nil, err - } - defer pubStateWriter.close() - - pvtStateHashesWriter, err := newSnapshotWriter( - path.Join(dir, pvtStateHashesFileName), - path.Join(dir, pvtStateHashesMetadataFileName), - dbValueFormat, - newHashFunc, - ) - if err != nil { - return nil, err - } - defer pvtStateHashesWriter.close() - + var pubStateWriter *snapshotWriter + var pvtStateHashesWriter *snapshotWriter for { compositeKey, dbValue, err := itr.Next() if err != nil { @@ -65,30 +45,61 @@ func (s *DB) ExportPubStateAndPvtStateHashes(dir string, newHashFunc snapshot.Ne } switch { case isHashedDataNs(compositeKey.Namespace): + if pvtStateHashesWriter == nil { // encountered first time the pvt state hash element + pvtStateHashesWriter, err = newSnapshotWriter( + filepath.Join(dir, pvtStateHashesFileName), + filepath.Join(dir, pvtStateHashesMetadataFileName), + dbValueFormat, + newHashFunc, + ) + if err != nil { + return nil, err + } + defer pvtStateHashesWriter.close() + } if err := pvtStateHashesWriter.addData(compositeKey, dbValue); err != nil { return nil, err } default: + if pubStateWriter == nil { // encountered first time the pub state element + pubStateWriter, err = newSnapshotWriter( + filepath.Join(dir, pubStateDataFileName), + filepath.Join(dir, pubStateMetadataFileName), + dbValueFormat, + newHashFunc, + ) + if err != nil { + return nil, err + } + defer pubStateWriter.close() + } if err := pubStateWriter.addData(compositeKey, dbValue); err != nil { return nil, err } } } - pubStateDataHash, pubStateMetadataHash, err := pubStateWriter.done() - if err != nil { - return nil, err + + snapshotFilesInfo := map[string][]byte{} + + if pubStateWriter != nil { + pubStateDataHash, pubStateMetadataHash, err := pubStateWriter.done() + if err != nil { + return nil, err + } + snapshotFilesInfo[pubStateDataFileName] = pubStateDataHash + snapshotFilesInfo[pubStateMetadataFileName] = pubStateMetadataHash } - pvtStateHahshesDataHash, pvtStateHashesMetadataHash, err := pvtStateHashesWriter.done() - if err != nil { - return nil, err + + if pvtStateHashesWriter != nil { + pvtStateHahshesDataHash, pvtStateHashesMetadataHash, err := pvtStateHashesWriter.done() + if err != nil { + return nil, err + } + snapshotFilesInfo[pvtStateHashesFileName] = pvtStateHahshesDataHash + snapshotFilesInfo[pvtStateHashesMetadataFileName] = pvtStateHashesMetadataHash } - return map[string][]byte{ - pubStateDataFileName: pubStateDataHash, - pubStateMetadataFileName: pubStateMetadataHash, - pvtStateHashesFileName: pvtStateHahshesDataHash, - pvtStateHashesMetadataFileName: pvtStateHashesMetadataHash, - }, - nil + + return snapshotFilesInfo, nil } // snapshotWriter generates two files, a data file and a metadata file. 
The datafile contains a series of tuples diff --git a/core/ledger/kvledger/txmgmt/privacyenabledstate/snapshot_test.go b/core/ledger/kvledger/txmgmt/privacyenabledstate/snapshot_test.go index 52b5fb0d0cb..78719a84434 100644 --- a/core/ledger/kvledger/txmgmt/privacyenabledstate/snapshot_test.go +++ b/core/ledger/kvledger/txmgmt/privacyenabledstate/snapshot_test.go @@ -12,7 +12,7 @@ import ( "hash" "io/ioutil" "os" - "path" + "path/filepath" "strings" "testing" @@ -84,7 +84,9 @@ func testSanpshot(t *testing.T, env TestEnv) { derivePvtDataNs("ns3", "coll1"), ) + testSnapshotWithSampleData(t, env, nil, nil, nil) // no data testSnapshotWithSampleData(t, env, samplePublicState, nil, nil) // test with only public data + testSnapshotWithSampleData(t, env, nil, samplePvtStateHashes, nil) // test with only pvtdata hashes testSnapshotWithSampleData(t, env, samplePublicState, samplePvtStateHashes, nil) // test with public data and pvtdata hashes testSnapshotWithSampleData(t, env, samplePublicState, samplePvtStateHashes, samplePvtState) // test with public data, pvtdata hashes, and pvt data } @@ -127,32 +129,41 @@ func testSnapshotWithSampleData(t *testing.T, env TestEnv, filesAndHashes, err := db.ExportPubStateAndPvtStateHashes(snapshotDir, testNewHashFunc) require.NoError(t, err) - require.Len(t, filesAndHashes, 4) - require.Contains(t, filesAndHashes, pubStateDataFileName) - require.Contains(t, filesAndHashes, pubStateMetadataFileName) - require.Contains(t, filesAndHashes, pvtStateHashesFileName) - require.Contains(t, filesAndHashes, pvtStateHashesMetadataFileName) for f, h := range filesAndHashes { - expectedFile := path.Join(snapshotDir, f) + expectedFile := filepath.Join(snapshotDir, f) require.FileExists(t, expectedFile) require.Equal(t, sha256ForFileForTest(t, expectedFile), h) } - // verify snapshot files contents - pubStateFromSnapshot := loadSnapshotDataForTest(t, - env, - path.Join(snapshotDir, pubStateDataFileName), - path.Join(snapshotDir, pubStateMetadataFileName), - ) + numFilesExpected := 0 + if len(publicState) != 0 { + numFilesExpected += 2 + require.Contains(t, filesAndHashes, pubStateDataFileName) + require.Contains(t, filesAndHashes, pubStateMetadataFileName) + // verify snapshot files contents + pubStateFromSnapshot := loadSnapshotDataForTest(t, + env, + filepath.Join(snapshotDir, pubStateDataFileName), + filepath.Join(snapshotDir, pubStateMetadataFileName), + ) + require.Equal(t, publicState, pubStateFromSnapshot) + } - pvtStateHashesFromSnapshot := loadSnapshotDataForTest(t, - env, - path.Join(snapshotDir, pvtStateHashesFileName), - path.Join(snapshotDir, pvtStateHashesMetadataFileName), - ) - require.Equal(t, publicState, pubStateFromSnapshot) - require.Equal(t, pvtStateHashes, pvtStateHashesFromSnapshot) + if len(pvtStateHashes) != 0 { + numFilesExpected += 2 + require.Contains(t, filesAndHashes, pvtStateHashesFileName) + require.Contains(t, filesAndHashes, pvtStateHashesMetadataFileName) + // verify snapshot files contents + pvtStateHashesFromSnapshot := loadSnapshotDataForTest(t, + env, + filepath.Join(snapshotDir, pvtStateHashesFileName), + filepath.Join(snapshotDir, pvtStateHashesMetadataFileName), + ) + + require.Equal(t, pvtStateHashes, pvtStateHashesFromSnapshot) + } + require.Len(t, filesAndHashes, numFilesExpected) } func sha256ForFileForTest(t *testing.T, file string) []byte { @@ -217,6 +228,7 @@ func TestSnapshotErrorPropagation(t *testing.T) { db = dbEnv.GetDBHandle(generateLedgerID(t)) updateBatch := NewUpdateBatch() updateBatch.PubUpdates.Put("ns1", "key1", 
[]byte("value1"), version.NewHeight(1, 1)) + updateBatch.HashUpdates.Put("ns1", "coll1", []byte("key1"), []byte("value1"), version.NewHeight(1, 1)) db.ApplyPrivacyAwareUpdates(updateBatch, version.NewHeight(1, 1)) snapshotDir, err = ioutil.TempDir("", "testsnapshot") require.NoError(t, err) @@ -234,7 +246,7 @@ func TestSnapshotErrorPropagation(t *testing.T) { // pubStateDataFile already exists init() defer cleanup() - pubStateDataFilePath := path.Join(snapshotDir, pubStateDataFileName) + pubStateDataFilePath := filepath.Join(snapshotDir, pubStateDataFileName) _, err = os.Create(pubStateDataFilePath) require.NoError(t, err) _, err = db.ExportPubStateAndPvtStateHashes(snapshotDir, testNewHashFunc) @@ -242,7 +254,7 @@ func TestSnapshotErrorPropagation(t *testing.T) { // pubStateMetadataFile already exists reinit() - pubStateMetadataFilePath := path.Join(snapshotDir, pubStateMetadataFileName) + pubStateMetadataFilePath := filepath.Join(snapshotDir, pubStateMetadataFileName) _, err = os.Create(pubStateMetadataFilePath) require.NoError(t, err) _, err = db.ExportPubStateAndPvtStateHashes(snapshotDir, testNewHashFunc) @@ -250,7 +262,7 @@ func TestSnapshotErrorPropagation(t *testing.T) { // pvtStateHashesDataFile already exists reinit() - pvtStateHashesDataFilePath := path.Join(snapshotDir, pvtStateHashesFileName) + pvtStateHashesDataFilePath := filepath.Join(snapshotDir, pvtStateHashesFileName) _, err = os.Create(pvtStateHashesDataFilePath) require.NoError(t, err) _, err = db.ExportPubStateAndPvtStateHashes(snapshotDir, testNewHashFunc) @@ -258,7 +270,7 @@ func TestSnapshotErrorPropagation(t *testing.T) { // pvtStateHashesMetadataFile already exists reinit() - pvtStateHashesMetadataFilePath := path.Join(snapshotDir, pvtStateHashesMetadataFileName) + pvtStateHashesMetadataFilePath := filepath.Join(snapshotDir, pvtStateHashesMetadataFileName) _, err = os.Create(pvtStateHashesMetadataFilePath) require.NoError(t, err) _, err = db.ExportPubStateAndPvtStateHashes(snapshotDir, testNewHashFunc) diff --git a/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb.go b/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb.go index a41ea6214f9..740a51b0de6 100644 --- a/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb.go +++ b/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb.go @@ -520,8 +520,14 @@ func (scanner *queryScanner) getNextStateRangeScanResults() error { return err } scanner.resultsInfo.results = queryResult - scanner.queryDefinition.startKey = nextStartKey scanner.paginationInfo.cursor = 0 + if scanner.queryDefinition.endKey == nextStartKey { + // as we always set inclusive_end=false to match the behavior of + // goleveldb iterator, it is safe to mark the scanner as exhausted + scanner.exhausted = true + // we still need to update the startKey as it is returned as bookmark + } + scanner.queryDefinition.startKey = nextStartKey return nil } @@ -877,6 +883,7 @@ type queryScanner struct { queryDefinition *queryDefinition paginationInfo *paginationInfo resultsInfo *resultsInfo + exhausted bool } type queryDefinition struct { @@ -899,7 +906,7 @@ type resultsInfo struct { func newQueryScanner(namespace string, db *couchDatabase, query string, internalQueryLimit, limit int32, bookmark, startKey, endKey string) (*queryScanner, error) { - scanner := &queryScanner{namespace, db, &queryDefinition{startKey, endKey, query, internalQueryLimit}, &paginationInfo{-1, limit, bookmark}, &resultsInfo{0, nil}} + scanner := &queryScanner{namespace, db, &queryDefinition{startKey, 
endKey, query, internalQueryLimit}, &paginationInfo{-1, limit, bookmark}, &resultsInfo{0, nil}, false} var err error // query is defined, then execute the query and return the records and bookmark if scanner.queryDefinition.query != "" { @@ -924,6 +931,9 @@ func (scanner *queryScanner) Next() (statedb.QueryResult, error) { // check to see if additional records are needed // requery if the cursor exceeds the internalQueryLimit if scanner.paginationInfo.cursor >= scanner.queryDefinition.internalQueryLimit { + if scanner.exhausted { + return nil, nil + } var err error // query is defined, then execute the query and return the records and bookmark if scanner.queryDefinition.query != "" { diff --git a/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb_test.go b/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb_test.go index 0ba9d6cb23d..57ebb9f7ad2 100644 --- a/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb_test.go +++ b/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb_test.go @@ -13,6 +13,7 @@ import ( "strings" "testing" "time" + "unicode/utf8" "github.com/hyperledger/fabric/common/flogging" "github.com/hyperledger/fabric/common/ledger/dataformat" @@ -1477,3 +1478,144 @@ func TestChannelMetadata_NegativeTests(t *testing.T) { require.Equal(t, expectedChannelMetadata, savedChannelMetadata) require.Equal(t, expectedChannelMetadata, vdb.channelMetadata) } + +func TestRangeQueryWithInternalLimitAndPageSize(t *testing.T) { + // generateSampleData returns a slice of KVs. The returned value contains 12 KVs for a namespace ns1 + generateSampleData := func() []*statedb.VersionedKV { + sampleData := []*statedb.VersionedKV{} + ver := version.NewHeight(1, 1) + sampleKV := &statedb.VersionedKV{ + CompositeKey: statedb.CompositeKey{Namespace: "ns1", Key: string('\u0000')}, + VersionedValue: statedb.VersionedValue{Value: []byte("v0"), Version: ver, Metadata: []byte("m0")}, + } + sampleData = append(sampleData, sampleKV) + for i := 0; i < 10; i++ { + sampleKV = &statedb.VersionedKV{ + CompositeKey: statedb.CompositeKey{ + Namespace: "ns1", + Key: fmt.Sprintf("key-%d", i), + }, + VersionedValue: statedb.VersionedValue{ + Value: []byte(fmt.Sprintf("value-for-key-%d-for-ns1", i)), + Version: ver, + Metadata: []byte(fmt.Sprintf("metadata-for-key-%d-for-ns1", i)), + }, + } + sampleData = append(sampleData, sampleKV) + } + sampleKV = &statedb.VersionedKV{ + CompositeKey: statedb.CompositeKey{Namespace: "ns1", Key: string(utf8.MaxRune)}, + VersionedValue: statedb.VersionedValue{Value: []byte("v1"), Version: ver, Metadata: []byte("m1")}, + } + sampleData = append(sampleData, sampleKV) + return sampleData + } + + vdbEnv.init(t, nil) + defer vdbEnv.cleanup() + channelName := "ch1" + vdb, err := vdbEnv.DBProvider.GetDBHandle(channelName) + require.NoError(t, err) + db := vdb.(*VersionedDB) + + sampleData := generateSampleData() + batch := statedb.NewUpdateBatch() + for _, d := range sampleData { + batch.PutValAndMetadata(d.Namespace, d.Key, d.Value, d.Metadata, d.Version) + } + db.ApplyUpdates(batch, version.NewHeight(1, 1)) + + defaultLimit := vdbEnv.config.InternalQueryLimit + + // Scenario 1: We try to fetch either 11 records or all 12 records. We pass various internalQueryLimits. 
+ // key utf8.MaxRune would not be included as inclusive_end is always set to false + testRangeQueryWithInternalLimit(t, "ns1", db, 2, string('\u0000'), string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithInternalLimit(t, "ns1", db, 5, string('\u0000'), string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithInternalLimit(t, "ns1", db, 2, string('\u0000'), "", sampleData) + testRangeQueryWithInternalLimit(t, "ns1", db, 5, string('\u0000'), "", sampleData) + testRangeQueryWithInternalLimit(t, "ns1", db, 2, "", string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithInternalLimit(t, "ns1", db, 5, "", string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithInternalLimit(t, "ns1", db, 2, "", "", sampleData) + testRangeQueryWithInternalLimit(t, "ns1", db, 5, "", "", sampleData) + + // Scenario 2: We try to fetch either 11 records or all 12 records using pagination. We pass various page sizes while + // keeping the internalQueryLimit as the default one, i.e., 1000. + vdbEnv.config.InternalQueryLimit = defaultLimit + testRangeQueryWithPageSize(t, "ns1", db, 2, string('\u0000'), string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithPageSize(t, "ns1", db, 15, string('\u0000'), string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithPageSize(t, "ns1", db, 2, string('\u0000'), "", sampleData) + testRangeQueryWithPageSize(t, "ns1", db, 15, string('\u0000'), "", sampleData) + testRangeQueryWithPageSize(t, "ns1", db, 2, "", string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithPageSize(t, "ns1", db, 15, "", string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithPageSize(t, "ns1", db, 2, "", "", sampleData) + testRangeQueryWithPageSize(t, "ns1", db, 15, "", "", sampleData) + + // Scenario 3: We try to fetch either 11 records or all 12 records using pagination. We pass various page sizes while + // keeping the internalQueryLimit to 1. 
+ vdbEnv.config.InternalQueryLimit = 1 + testRangeQueryWithPageSize(t, "ns1", db, 2, string('\u0000'), string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithPageSize(t, "ns1", db, 15, string('\u0000'), string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithPageSize(t, "ns1", db, 2, string('\u0000'), "", sampleData) + testRangeQueryWithPageSize(t, "ns1", db, 15, string('\u0000'), "", sampleData) + testRangeQueryWithPageSize(t, "ns1", db, 2, "", string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithPageSize(t, "ns1", db, 15, "", string(utf8.MaxRune), sampleData[:len(sampleData)-1]) + testRangeQueryWithPageSize(t, "ns1", db, 2, "", "", sampleData) + testRangeQueryWithPageSize(t, "ns1", db, 15, "", "", sampleData) +} + +func testRangeQueryWithInternalLimit( + t *testing.T, + ns string, + db *VersionedDB, + limit int, + startKey, endKey string, + expectedResults []*statedb.VersionedKV, +) { + vdbEnv.config.InternalQueryLimit = limit + require.Equal(t, int32(limit), db.couchInstance.internalQueryLimit()) + itr, err := db.GetStateRangeScanIterator(ns, startKey, endKey) + require.NoError(t, err) + require.Equal(t, int32(limit), itr.(*queryScanner).queryDefinition.internalQueryLimit) + results := []*statedb.VersionedKV{} + for { + result, err := itr.Next() + require.NoError(t, err) + if result == nil { + itr.Close() + break + } + kv := result.(*statedb.VersionedKV) + results = append(results, kv) + } + require.Equal(t, expectedResults, results) +} + +func testRangeQueryWithPageSize( + t *testing.T, + ns string, + db *VersionedDB, + pageSize int, + startKey, endKey string, + expectedResults []*statedb.VersionedKV, +) { + itr, err := db.GetStateRangeScanIteratorWithPagination(ns, startKey, endKey, int32(pageSize)) + require.NoError(t, err) + results := []*statedb.VersionedKV{} + for { + result, err := itr.Next() + require.NoError(t, err) + if result != nil { + kv := result.(*statedb.VersionedKV) + results = append(results, kv) + continue + } + nextStartKey := itr.GetBookmarkAndClose() + if nextStartKey == endKey { + break + } + itr, err = db.GetStateRangeScanIteratorWithPagination(ns, nextStartKey, endKey, int32(pageSize)) + require.NoError(t, err) + continue + } + require.Equal(t, expectedResults, results) +} diff --git a/docs/source/hsm.md b/docs/source/hsm.md index a7cec88c159..5a11d3dcff5 100644 --- a/docs/source/hsm.md +++ b/docs/source/hsm.md @@ -53,7 +53,7 @@ bccsp: default: PKCS11 pkcs11: Library: /etc/hyperledger/fabric/libsofthsm2.so - Pin: 71811222 + Pin: "71811222" Label: fabric hash: SHA2 security: 256 @@ -134,7 +134,7 @@ You can set up a Fabric CA to use an HSM by making the same edits to the CA serv default: PKCS11 pkcs11: Library: /etc/hyperledger/fabric/libsofthsm2.so - Pin: 71811222 + Pin: "71811222" Label: fabric hash: SHA2 security: 256 diff --git a/docs/source/install.rst b/docs/source/install.rst index 2a0dc98ea66..5f7c7269ce3 100644 --- a/docs/source/install.rst +++ b/docs/source/install.rst @@ -48,12 +48,12 @@ the binaries and images. .. note:: If you want a specific release, pass a version identifier for Fabric, Fabric-ca and thirdparty Docker images. The command below demonstrates how to download the latest production releases - - **Fabric v2.1.0** and **Fabric CA v1.4.7** + **Fabric v2.1.1** and **Fabric CA v1.4.7** .. code:: bash curl -sSL https://bit.ly/2ysbOFE | bash -s -- - curl -sSL https://bit.ly/2ysbOFE | bash -s -- 2.1.0 1.4.7 0.4.20 + curl -sSL https://bit.ly/2ysbOFE | bash -s -- 2.1.1 1.4.7 0.4.20 .. 
note:: If you get an error running the above curl command, you may have too old a version of curl that does not handle diff --git a/docs/source/whatsnew.rst b/docs/source/whatsnew.rst index 3218564ca8e..5215d174394 100644 --- a/docs/source/whatsnew.rst +++ b/docs/source/whatsnew.rst @@ -215,6 +215,7 @@ announced with the new Fabric v2.0 release, and the changes introduced in v2.1. * `Fabric v2.0.0 release notes `_. * `Fabric v2.0.1 release notes `_. * `Fabric v2.1.0 release notes `_. +* `Fabric v2.1.1 release notes `_. .. Licensed under Creative Commons Attribution 4.0 International License https://creativecommons.org/licenses/by/4.0/ diff --git a/gossip/privdata/coordinator.go b/gossip/privdata/coordinator.go index f6dd6d82392..f1fad875ab7 100644 --- a/gossip/privdata/coordinator.go +++ b/gossip/privdata/coordinator.go @@ -432,7 +432,7 @@ func getTxInfoFromTransactionBytes(envBytes []byte) (*txInfo, error) { if chdr.Type != int32(common.HeaderType_ENDORSER_TRANSACTION) { err := errors.New("header type is not an endorser transaction") - logger.Warningf("Invalid transaction type: %s", err) + logger.Debugf("Invalid transaction type: %s", err) return nil, err } diff --git a/integration/raft/config_test.go b/integration/raft/config_test.go index f79ebaa5e48..5e9c47958c5 100644 --- a/integration/raft/config_test.go +++ b/integration/raft/config_test.go @@ -11,6 +11,7 @@ import ( "fmt" "io/ioutil" "os" + "path" "path/filepath" "strings" "syscall" @@ -1107,16 +1108,9 @@ var _ = Describe("EndToEnd reconfiguration and onboarding", func() { }, []*nwo.Orderer{o2, o3}, peer, network) By("Removing the first orderer from an application channel") - extendNetwork(network) - certificatesOfOrderers := refreshOrdererPEMs(network) - removeConsenter(network, peer, o2, "testchannel", certificatesOfOrderers[0].oldCert) - - certPath := certificatesOfOrderers[0].dstFile - keyFile := strings.Replace(certPath, "server.crt", "server.key", -1) - err := ioutil.WriteFile(certPath, certificatesOfOrderers[0].oldCert, 0644) - Expect(err).To(Not(HaveOccurred())) - err = ioutil.WriteFile(keyFile, certificatesOfOrderers[0].oldKey, 0644) - Expect(err).To(Not(HaveOccurred())) + o1cert, err := ioutil.ReadFile(path.Join(network.OrdererLocalTLSDir(o1), "server.crt")) + Expect(err).ToNot(HaveOccurred()) + removeConsenter(network, peer, o2, "testchannel", o1cert) By("Starting the orderer again") ordererRunner := network.OrdererRunner(orderers[0]) @@ -1154,8 +1148,8 @@ var _ = Describe("EndToEnd reconfiguration and onboarding", func() { By("Adding the evicted orderer back to the application channel") addConsenter(network, peer, o2, "testchannel", etcdraft.Consenter{ - ServerTlsCert: certificatesOfOrderers[0].oldCert, - ClientTlsCert: certificatesOfOrderers[0].oldCert, + ServerTlsCert: o1cert, + ClientTlsCert: o1cert, Host: "127.0.0.1", Port: uint32(network.OrdererPort(orderers[0], nwo.ClusterPort)), }) diff --git a/internal/peer/common/common.go b/internal/peer/common/common.go index b7d0aada6a2..872a495a565 100644 --- a/internal/peer/common/common.go +++ b/internal/peer/common/common.go @@ -135,7 +135,7 @@ func InitCrypto(mspMgrConfigDir, localMSPID, localMSPType string) error { SetBCCSPKeystorePath() bccspConfig := factory.GetDefaultOpts() if config := viper.Get("peer.BCCSP"); config != nil { - err = mapstructure.Decode(config, bccspConfig) + err = mapstructure.WeakDecode(config, bccspConfig) if err != nil { return errors.WithMessage(err, "could not decode peer BCCSP configuration") } diff --git a/sampleconfig/configtx.yaml 
b/sampleconfig/configtx.yaml index 4ac1dcf81e4..71edeb288b7 100644 --- a/sampleconfig/configtx.yaml +++ b/sampleconfig/configtx.yaml @@ -485,8 +485,8 @@ Profiles: SampleInsecureKafka: <<: *ChannelDefaults Orderer: - OrdererType: kafka <<: *OrdererDefaults + OrdererType: kafka Consortiums: SampleConsortium: Organizations: diff --git a/scripts/bootstrap.sh b/scripts/bootstrap.sh index a39172b9ab6..4b8e5b00d93 100755 --- a/scripts/bootstrap.sh +++ b/scripts/bootstrap.sh @@ -6,7 +6,7 @@ # # if version not passed in, default to latest released version -VERSION=2.1.0 +VERSION=2.1.1 # if ca version not passed in, default to latest released version CA_VERSION=1.4.7 # current version of thirdparty images (couchdb, kafka and zookeeper) released @@ -23,8 +23,8 @@ printHelp() { echo "-s : bypass fabric-samples repo clone" echo "-b : bypass download of platform-specific binaries" echo - echo "e.g. bootstrap.sh 2.1.0 1.4.7 0.4.18 -s" - echo "would download docker images and binaries for Fabric v2.1.0 and Fabric CA v1.4.7" + echo "e.g. bootstrap.sh 2.1.1 1.4.7 0.4.20 -s" + echo "would download docker images and binaries for Fabric v2.1.1 and Fabric CA v1.4.7" } # dockerPull() pulls docker images from fabric and chaincode repositories diff --git a/scripts/run-unit-tests.sh b/scripts/run-unit-tests.sh index 2dba8795626..2f2f3eb1a4d 100755 --- a/scripts/run-unit-tests.sh +++ b/scripts/run-unit-tests.sh @@ -171,11 +171,6 @@ run_tests_with_coverage() { } main() { - # explicit exclusions for ppc and s390x - if [ "$(uname -m)" == "ppc64le" ] || [ "$(uname -m)" == "s390x" ]; then - excluded_packages+=("github.com/hyperledger/fabric/core/chaincode/platforms/java") - fi - # default behavior is to run all tests local -a package_spec=("${TEST_PKGS:-github.com/hyperledger/fabric/...}")