[ASHF] Syncing Fork (#29)

* Address more concerns highlighted by linters

These changes remove dead code, add error checks, and assign unused
variables to the blank identifier `_`.

Signed-off-by: Matthew Sykes <sykesmat@us.ibm.com>

* Fixed write_first_app.rst typo

Signed-off-by: pratikpatil024 <pratikspatil024@gmail.com>

* FAB-17840 Ch.Part.API: Join channel REST handler (hyperledger#1305)

* FAB-17840 Ch.Part.API: Join channel REST handler

Implement a handler for a POST request to join a channel.

Here we support requests carrying application/json; support
for additional Content-Types will be added later.

Signed-off-by: Yoav Tock <tock@il.ibm.com>
Change-Id: I8d09e3cba09842f2adc47fb60161eee814e33d31

* Review: simplify code

Signed-off-by: Yoav Tock <tock@il.ibm.com>
Change-Id: Iee1e8b66cb77f64b9762dee8f85304958081e0fe

* Review comments: simplify code

- extract error handling to separate method
- extract channelID extraction to separate method, reuse
- test Accept header allowed range
- better comments

Signed-off-by: Yoav Tock <tock@il.ibm.com>
Change-Id: I11639a3f159694019e521e180e7fd5fadb42cb4f

* Review comments: support for multipart/form-data

Signed-off-by: Yoav Tock <tock@il.ibm.com>
Change-Id: Ic5a0307f56be5561f45910a02892dc7e7b9554d1

* Review comments: remove support for application/json

Signed-off-by: Yoav Tock <tock@il.ibm.com>
Change-Id: I38bb3564a0e6482b7bf9dca25bd8424e6db2ac95

* Spelling: s/chainocde/chaincode/g

Signed-off-by: Matthew Sykes <matthew.sykes@gmail.com>

* Retire dormant Fabric maintainers

Retire maintainers that have been inactive for the last 3 months:
Jonathan Levi
Srinivasan Muralidharan

Note:
"A maintainer removed for inactivity should be restored following a
sustained resumption of contributions and reviews (a month or more)
demonstrating a renewed commitment to the project."

Signed-off-by: David Enyeart <enyeart@us.ibm.com>

* add NOTICE file

recommended per ASF and LF policy

Signed-off-by: Brett Logan <brett.t.logan@ibm.com>

* Add NOTICE to license ignore list

Signed-off-by: Brett Logan <brett.t.logan@ibm.com>

* Fix some typos in docs

This CR fixes the following typos in docs.
- chainocode -> chaincode
- chainode -> chaincode
- lifecyle -> lifecycle
- Hyperlegder -> Hyperledger
- chanel -> channel
- scructured -> structured
- demostrate -> demonstrate
- certficates -> certificates
- how how -> how
- the the -> the
- a a -> a
- thefollowing -> the following

Signed-off-by: Nao Nishijima <nao.nishijima.xt@hitachi.com>

* Remove txmgr interface

This commit removes the interface TxMgr, as there is a single
implementation (i.e., LockBasedTxMgr). All the dependent packages
are able to use this single implementation directly.
The only exception is the validation package, which would cause a circular
dependency. The validation package uses only a single function from this
interface (NewTxSimulator) for getting access to the TxSimulator, in order
to allow for the execution of a post-order transaction (only the channel
config transaction in the current code). To meet this need, this commit
now defines a local interface with a single function in the validation
package itself.

Signed-off-by: manish <manish.sethi@gmail.com>
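The narrow local interface described above can be sketched as follows. This is an illustrative sketch, not fabric's actual code: the names `simulator`, `txSimProvider`, and `fakeTxMgr` are invented here to show how a one-method interface breaks the circular dependency.

```go
package main

import "fmt"

// simulator stands in for ledger's TxSimulator; illustrative only.
type simulator interface {
	Done()
}

type fakeSim struct{}

func (fakeSim) Done() {}

// txSimProvider is the kind of single-method interface the validation
// package can declare locally: it names only the one capability it needs
// (NewTxSimulator), so any concrete tx manager satisfies it implicitly
// without the validation package importing the txmgr package.
type txSimProvider interface {
	NewTxSimulator(txid string) (simulator, error)
}

// fakeTxMgr plays the role of the single concrete implementation.
type fakeTxMgr struct{}

func (fakeTxMgr) NewTxSimulator(txid string) (simulator, error) {
	return fakeSim{}, nil
}

func main() {
	var p txSimProvider = fakeTxMgr{}
	sim, err := p.NewTxSimulator("tx1")
	fmt.Println(sim != nil, err == nil)
}
```

Because Go interfaces are satisfied structurally, the concrete manager needs no reference to the local interface, which is what removes the import cycle.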

* FAB-17841: Ch.Part.API: Remove channel REST handler (hyperledger#1330)

Handle DELETE request to remove a channel.

- Resolve the removeStorage flag from config & query.
- If the system channel exists - reject with StatusMethodNotAllowed and
  the Allow header set to GET.
- If the channel does not exist - reject with StatusNotFound.
- On success - respond with StatusNoContent.

Signed-off-by: Yoav Tock <tock@il.ibm.com>
Change-Id: I78581df3c7f0cb99007edddc83eb7a6dca5eba07

* Update commercial paper doc to use enrollUser.js

Signed-off-by: NIKHIL E GUPTA <negupta@us.ibm.com>

* Use protolator from fabric-config

Signed-off-by: Matthew Sykes <matthew.sykes@gmail.com>

* Revert "Bump viper version to the last working commit"

This reverts commit 5ad0a4f.

Signed-off-by: Brett Logan <brett.t.logan@ibm.com>

* _lifecycle ignore previous build failure during install (hyperledger#1280)

When a chaincode build previously failed while installing the
chaincode, _lifecycle should ignore the cached error and attempt
to rebuild the chaincode. This is because we should assume the
end user knows something about why the build may succeed on retry
if they're reattempting an install.

Also, update integration tests to not care about exit status of
chaincode installs (since reinstalls now error).

FAB-17907

Signed-off-by: Will Lahti <wtlahti@us.ibm.com>

* [FAB-17927] Add AllChaincodesInfo to DeployedChaincodeInfoProvider (hyperledger#1331)

Implement AllChaincodesInfo to query the ledger and return chaincode info
for all deployed chaincodes (both new lifecycle and legacy chaincodes)
on a channel. This function is needed for ledger checkpointing
and deletion of channel-specific databases in statecouchdb.

Signed-off-by: Wenjian Qiao <wenjianq@gmail.com>

* [FAB-17471] Fix OrdererType key to correct line

The SampleInsecureKafka profile tries to change the consensus type to
kafka using OrdererType, but the key was written on the line above
"<<: *OrdererDefaults". That causes the consensus type to be
overwritten by the defaults again. This CR moves OrdererType to the
correct line. As a result, this CR can generate a genesis block with
the correct orderer type.

Signed-off-by: Nao Nishijima <nao.nishijima.xt@hitachi.com>
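Based on the fix described above, the corrected profile shape looks roughly like this. This is a hedged sketch: the surrounding keys are abbreviated and the values are illustrative, not the exact configtx.yaml content.

```yaml
SampleInsecureKafka:
  Orderer:
    <<: *OrdererDefaults
    OrdererType: kafka   # placed after the merge line so it is not overwritten
```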

* [FAB-17059] Add missing mspID when init MembershipProvider.

Signed-off-by: Hongbin Mao <hello2mao@gmail.com>

* fsync and do not generate empty files in snapshots (hyperledger#1345)

This commit fixes the following issues:
- Performs fsync on the snapshot files
- Does not generate empty files
- Uses the filepath package instead of the path package

Signed-off-by: manish <manish.sethi@gmail.com>

* fix infinite loop during full range query (hyperledger#1347)

We use a single scanner for achieving both paginated and
non-paginated range queries.

We have internalQueryLimit and pageSize. For each
_all_docs?startKey="XXX"&endKey="YYY" REST API call to
CouchDB, we fetch at most internalQueryLimit records
by appending limit=internalQueryLimit.

When the requested pageSize is higher than the internalQueryLimit,
or the total number of records present in the given range
is higher than the internalQueryLimit, the iterator executes the
query again once the records fetched in the previous cycle are
consumed, and so on. To do that, after each execution of the REST API
call, it updates the initially passed startKey to the nextStartKey.
If there is no nextStartKey, it is set to the endKey.

Currently, when the nextStartKey and the endKey are the same, we still
run one REST API call, which is actually not needed as we always set
inclusive_end=false. However, this causes an infinite loop in a particular
case. When we want to retrieve all the records, we pass an empty
string as both the startKey and the endKey. When the startKey is an empty
string, the REST API call becomes _all_docs?endKey="YYY". When
both are empty, it becomes _all_docs.

Given that we set startKey to endKey when there is no nextStartKey and
still execute one REST API call, it gets into an infinite loop by fetching
all the records again and again.

We avoid this infinite loop by setting scanner.exhausted = true when
the startKey and the endKey become the same and there is no nextStartKey.

Signed-off-by: senthil <cendhu@gmail.com>
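The termination condition described above can be modeled with a tiny scanner. This is an illustrative model only — `queryScanner.advance` and its fields are invented names, not fabric's actual statecouchdb code.

```go
package main

import "fmt"

// queryScanner models the CouchDB range-query scanner described above.
type queryScanner struct {
	startKey, endKey string
	exhausted        bool
}

// advance moves the scanner to the next query cycle. When CouchDB reports
// no nextStartKey, startKey is set to endKey and the scanner is marked
// exhausted, so no further REST call is made. Without the exhausted flag,
// a full range query (startKey == endKey == "") would degenerate to
// _all_docs on every cycle and loop forever.
func (s *queryScanner) advance(nextStartKey string, hasNext bool) {
	if hasNext {
		s.startKey = nextStartKey
		return
	}
	s.startKey = s.endKey
	s.exhausted = true
}

func main() {
	s := &queryScanner{startKey: "", endKey: ""} // retrieve all records
	s.advance("", false)                         // first cycle consumed everything
	fmt.Println(s.exhausted)
}
```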

* Remove unnecessary extension of osn (hyperledger#1351)

In the integration test touched by this patch, the cert file
is read to remove and add a node. This can be easily achieved
by reading the file bytes directly, without extending the network
to grab the intermediate object.

Signed-off-by: Jay Guo <guojiannan1101@gmail.com>

* [FAB-17935] Change unnecessary warning log line to debug in gossip privdata (hyperledger#1350)

Signed-off-by: Danny Cao <dcao@us.ibm.com>

* [FAB-17900] Update BCCSP.PKCS11.Pin in examples

- The Pin value must be quoted when specified in the yaml

Signed-off-by: Tiffany Harris <tiffany.harris@ibm.com>

* [FAB-17900] Fixes numeric env variable override bug

Signed-off-by: Tiffany Harris <tiffany.harris@ibm.com>

* Remove s390x, powerpc64le from RELEASE_PLATFORMS

- Release automation only creates amd64 binaries for darwin, linux, and
  windows.
- Continuous integration no longer runs on powerpc64le or s390x

Also remove stale build tags related to plugins and race detection for
old versions of go, s390x, and ppc64le.

Signed-off-by: Matthew Sykes <sykesmat@us.ibm.com>

* Backfill test for BCCSP environment overrides...

... and consistently use SW as the key for `SwOpts` in the configuration
structures. Right now the tags for mapstructure and JSON do not match the
tags for YAML, yet our sample configuration documents (in YAML) use `SW`.

Signed-off-by: Matthew Sykes <matthew.sykes@gmail.com>

* Updates in master for v2.1.1 release

Update master doc and bootstrap script for v2.1.1 release.

Signed-off-by: David Enyeart <enyeart@us.ibm.com>

Co-authored-by: Matthew Sykes <sykesmat@us.ibm.com>
Co-authored-by: pratikpatil024 <pratikspatil024@gmail.com>
Co-authored-by: Yoav Tock <tock@il.ibm.com>
Co-authored-by: Matthew Sykes <matthew.sykes@gmail.com>
Co-authored-by: David Enyeart <enyeart@us.ibm.com>
Co-authored-by: Christopher Ferris <chrisfer@us.ibm.com>
Co-authored-by: Brett Logan <brett.t.logan@ibm.com>
Co-authored-by: Nao Nishijima <nao.nishijima.xt@hitachi.com>
Co-authored-by: manish <manish.sethi@gmail.com>
Co-authored-by: NIKHIL E GUPTA <negupta@us.ibm.com>
Co-authored-by: Will Lahti <wtlahti@us.ibm.com>
Co-authored-by: Wenjian Qiao <wenjianq@gmail.com>
Co-authored-by: Hongbin Mao <hello2mao@gmail.com>
Co-authored-by: Senthil Nathan N <cendhu@users.noreply.github.com>
Co-authored-by: Jay Guo <guojiannan1101@gmail.com>
Co-authored-by: Danny Cao <dcao@us.ibm.com>
Co-authored-by: Tiffany Harris <tiffany.harris@ibm.com>
18 people committed Jun 1, 2020
1 parent 245c88e commit 4193c2b
Showing 29 changed files with 398 additions and 156 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -81,7 +81,7 @@ GO_TAGS ?=

RELEASE_EXES = orderer $(TOOLS_EXES)
RELEASE_IMAGES = baseos ccenv orderer peer tools
RELEASE_PLATFORMS = darwin-amd64 linux-amd64 linux-ppc64le linux-s390x windows-amd64
RELEASE_PLATFORMS = darwin-amd64 linux-amd64 windows-amd64
TOOLS_EXES = configtxgen configtxlator cryptogen discover idemixgen peer

pkgmap.configtxgen := $(PKGNAME)/cmd/configtxgen
2 changes: 1 addition & 1 deletion bccsp/factory/nopkcs11.go
@@ -18,7 +18,7 @@ const pkcs11Enabled = false
// FactoryOpts holds configuration information used to initialize factory implementations
type FactoryOpts struct {
ProviderName string `mapstructure:"default" json:"default" yaml:"Default"`
SwOpts *SwOpts `mapstructure:"SW,omitempty" json:"SW,omitempty" yaml:"SwOpts"`
SwOpts *SwOpts `mapstructure:"SW,omitempty" json:"SW,omitempty" yaml:"SW,omitempty"`
}

// InitFactories must be called before using factory interfaces
2 changes: 1 addition & 1 deletion bccsp/factory/pkcs11.go
@@ -19,7 +19,7 @@ const pkcs11Enabled = false
// FactoryOpts holds configuration information used to initialize factory implementations
type FactoryOpts struct {
ProviderName string `mapstructure:"default" json:"default" yaml:"Default"`
SwOpts *SwOpts `mapstructure:"SW,omitempty" json:"SW,omitempty" yaml:"SwOpts"`
SwOpts *SwOpts `mapstructure:"SW,omitempty" json:"SW,omitempty" yaml:"SW,omitempty"`
Pkcs11Opts *pkcs11.PKCS11Opts `mapstructure:"PKCS11,omitempty" json:"PKCS11,omitempty" yaml:"PKCS11"`
}

8 changes: 2 additions & 6 deletions bccsp/pkcs11/impl.go
@@ -226,17 +226,13 @@ func (csp *impl) Decrypt(k bccsp.Key, ciphertext []byte, opts bccsp.DecrypterOpt
// This is a convenience function. Useful to self-configure, for tests where usual configuration is not
// available
func FindPKCS11Lib() (lib, pin, label string) {
//FIXME: Till we workout the configuration piece, look for the libraries in the familiar places
lib = os.Getenv("PKCS11_LIB")
if lib == "" {
pin = "98765432"
label = "ForFabric"
possibilities := []string{
"/usr/lib/softhsm/libsofthsm2.so", //Debian
"/usr/lib/x86_64-linux-gnu/softhsm/libsofthsm2.so", //Ubuntu
"/usr/lib/s390x-linux-gnu/softhsm/libsofthsm2.so", //Ubuntu
"/usr/lib/powerpc64le-linux-gnu/softhsm/libsofthsm2.so", //Power
"/usr/local/Cellar/softhsm/2.5.0/lib/softhsm/libsofthsm2.so", //MacOS
"/usr/lib/softhsm/libsofthsm2.so", //Debian
"/usr/lib/x86_64-linux-gnu/softhsm/libsofthsm2.so", //Ubuntu
}
for _, path := range possibilities {
if _, err := os.Stat(path); !os.IsNotExist(err) {
24 changes: 15 additions & 9 deletions common/ledger/blkstorage/blockindex.go
@@ -9,7 +9,7 @@ package blkstorage
import (
"bytes"
"fmt"
"path"
"path/filepath"
"unicode/utf8"

"github.com/golang/protobuf/proto"
@@ -260,13 +260,6 @@ func (index *blockIndex) exportUniqueTxIDs(dir string, newHashFunc snapshot.NewH
return nil, ErrAttrNotIndexed
}

// create the data file
dataFile, err := snapshot.CreateFile(path.Join(dir, snapshotDataFileName), snapshotFileFormat, newHashFunc)
if err != nil {
return nil, err
}
defer dataFile.Close()

dbItr := index.db.GetIterator([]byte{txIDIdxKeyPrefix}, []byte{txIDIdxKeyPrefix + 1})
defer dbItr.Release()
if err := dbItr.Error(); err != nil {
@@ -275,6 +268,8 @@

var previousTxID string
var numTxIDs uint64 = 0
var dataFile *snapshot.FileWriter
var err error
for dbItr.Next() {
if err := dbItr.Error(); err != nil {
return nil, errors.Wrap(err, "internal leveldb error while iterating for txids")
@@ -288,19 +283,30 @@ func (index *blockIndex) exportUniqueTxIDs(dir string, newHashFunc snapshot.NewH
continue
}
previousTxID = txID
if numTxIDs == 0 { // first iteration, create the data file
dataFile, err = snapshot.CreateFile(filepath.Join(dir, snapshotDataFileName), snapshotFileFormat, newHashFunc)
if err != nil {
return nil, err
}
defer dataFile.Close()
}
if err := dataFile.EncodeString(txID); err != nil {
return nil, err
}
numTxIDs++
}

if dataFile == nil {
return nil, nil
}

dataHash, err := dataFile.Done()
if err != nil {
return nil, err
}

// create the metadata file
metadataFile, err := snapshot.CreateFile(path.Join(dir, snapshotMetadataFileName), snapshotFileFormat, newHashFunc)
metadataFile, err := snapshot.CreateFile(filepath.Join(dir, snapshotMetadataFileName), snapshotFileFormat, newHashFunc)
if err != nil {
return nil, err
}
33 changes: 20 additions & 13 deletions common/ledger/blkstorage/blockindex_test.go
@@ -12,7 +12,7 @@ import (
"hash"
"io/ioutil"
"os"
"path"
"path/filepath"
"testing"

"github.com/hyperledger/fabric-protos-go/common"
@@ -270,20 +270,27 @@ func TestExportUniqueTxIDs(t *testing.T) {
defer blkfileMgrWrapper.close()
blkfileMgr := blkfileMgrWrapper.blockfileMgr

bg, gb := testutil.NewBlockGenerator(t, "myChannel", false)
blkfileMgr.addBlock(gb)

testSnapshotDir := testPath()
defer os.RemoveAll(testSnapshotDir)

// empty store generates no output
fileHashes, err := blkfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc)
require.NoError(t, err)
require.Empty(t, fileHashes)
files, err := ioutil.ReadDir(testSnapshotDir)
require.NoError(t, err)
require.Len(t, files, 0)

// add genesis block and test the exported bytes
bg, gb := testutil.NewBlockGenerator(t, "myChannel", false)
blkfileMgr.addBlock(gb)
configTxID, err := protoutil.GetOrComputeTxIDFromEnvelope(gb.Data.Data[0])
require.NoError(t, err)
fileHashes, err := blkfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc)
fileHashes, err = blkfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc)
require.NoError(t, err)
verifyExportedTxIDs(t, testSnapshotDir, fileHashes, configTxID)
os.Remove(path.Join(testSnapshotDir, snapshotDataFileName))
os.Remove(path.Join(testSnapshotDir, snapshotMetadataFileName))
os.Remove(filepath.Join(testSnapshotDir, snapshotDataFileName))
os.Remove(filepath.Join(testSnapshotDir, snapshotMetadataFileName))

// add block-1 and test the exported bytes
block1 := bg.NextBlockWithTxid(
@@ -300,8 +307,8 @@
fileHashes, err = blkfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc)
require.NoError(t, err)
verifyExportedTxIDs(t, testSnapshotDir, fileHashes, "txid-1", "txid-2", "txid-3", configTxID) //"txid-1" appears once, Txids appear in radix sort order
os.Remove(path.Join(testSnapshotDir, snapshotDataFileName))
os.Remove(path.Join(testSnapshotDir, snapshotMetadataFileName))
os.Remove(filepath.Join(testSnapshotDir, snapshotDataFileName))
os.Remove(filepath.Join(testSnapshotDir, snapshotMetadataFileName))

// add block-2 and test the exported bytes
block2 := bg.NextBlockWithTxid(
@@ -351,7 +358,7 @@ func TestExportUniqueTxIDsErrorCases(t *testing.T) {
defer os.RemoveAll(testSnapshotDir)

// error during data file creation
dataFilePath := path.Join(testSnapshotDir, snapshotDataFileName)
dataFilePath := filepath.Join(testSnapshotDir, snapshotDataFileName)
_, err := os.Create(dataFilePath)
require.NoError(t, err)
_, err = blkfileMgrWrapper.blockfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc)
@@ -361,7 +368,7 @@
// error during metadata file creation
fmt.Printf("testSnapshotDir=%s", testSnapshotDir)
require.NoError(t, os.MkdirAll(testSnapshotDir, 0700))
metadataFilePath := path.Join(testSnapshotDir, snapshotMetadataFileName)
metadataFilePath := filepath.Join(testSnapshotDir, snapshotMetadataFileName)
_, err = os.Create(metadataFilePath)
require.NoError(t, err)
_, err = blkfileMgrWrapper.blockfileMgr.index.exportUniqueTxIDs(testSnapshotDir, testNewHashFunc)
@@ -388,13 +395,13 @@ func verifyExportedTxIDs(t *testing.T, dir string, fileHashes map[string][]byte,
require.Contains(t, fileHashes, snapshotDataFileName)
require.Contains(t, fileHashes, snapshotMetadataFileName)

dataFile := path.Join(dir, snapshotDataFileName)
dataFile := filepath.Join(dir, snapshotDataFileName)
dataFileContent, err := ioutil.ReadFile(dataFile)
require.NoError(t, err)
dataFileHash := sha256.Sum256(dataFileContent)
require.Equal(t, dataFileHash[:], fileHashes[snapshotDataFileName])

metadataFile := path.Join(dir, snapshotMetadataFileName)
metadataFile := filepath.Join(dir, snapshotMetadataFileName)
metadataFileContent, err := ioutil.ReadFile(metadataFile)
require.NoError(t, err)
metadataFileHash := sha256.Sum256(metadataFileContent)
3 changes: 3 additions & 0 deletions common/ledger/snapshot/file.go
@@ -98,6 +98,9 @@ func (c *FileWriter) Done() ([]byte, error) {
if err := c.bufWriter.Flush(); err != nil {
return nil, errors.Wrapf(err, "error while flushing to the snapshot file: %s ", c.file.Name())
}
if err := c.file.Sync(); err != nil {
return nil, err
}
if err := c.file.Close(); err != nil {
return nil, errors.Wrapf(err, "error while closing the snapshot file: %s ", c.file.Name())
}
59 changes: 47 additions & 12 deletions common/viperutil/config_test.go
@@ -16,19 +16,20 @@ import (
"testing"

"github.com/Shopify/sarama"
"github.com/hyperledger/fabric/bccsp/factory"
"github.com/hyperledger/fabric/orderer/mocks/util"
"github.com/spf13/viper"
)

const Prefix = "VIPERUTIL"

type testSlice struct {
Inner struct {
Slice []string
func TestEnvSlice(t *testing.T) {
type testSlice struct {
Inner struct {
Slice []string
}
}
}

func TestEnvSlice(t *testing.T) {
envVar := "VIPERUTIL_INNER_SLICE"
envVal := "[a, b, c]"
os.Setenv(envVar, envVal)
@@ -49,9 +50,7 @@ func TestEnvSlice(t *testing.T) {
}

var uconf testSlice

err = EnhancedExactUnmarshal(config, &uconf)
if err != nil {
if err := EnhancedExactUnmarshal(config, &uconf); err != nil {
t.Fatalf("Failed to unmarshal with: %s", err)
}

@@ -62,7 +61,6 @@
}

func TestKafkaVersionDecode(t *testing.T) {

type testKafkaVersion struct {
Inner struct {
Version sarama.KafkaVersion
@@ -405,7 +403,6 @@ func TestStringFromFileEnv(t *testing.T) {
}{
{"Override", "---\nInner:\n Single:\n File: wrong_file"},
{"NoFileElement", "---\nInner:\n Single:\n"},
// {"NoElementAtAll", "---\nInner:\n"}, test case for another time
}

for _, tc := range testCases {
@@ -439,7 +436,6 @@
}
})
}

}

func TestDecodeOpaqueField(t *testing.T) {
@@ -458,10 +454,49 @@
Hello:
Hello struct{ World int }
}
if err := EnhancedExactUnmarshal(config, &conf); err != nil {
t.Fatalf("Error unmashalling: %s", err)
t.Fatalf("Error unmarshalling: %s", err)
}

if conf.Foo != "bar" || conf.Hello.World != 42 {
t.Fatalf("Incorrect decoding")
}
}

func TestBCCSPDecodeHookOverride(t *testing.T) {
type testConfig struct {
BCCSP *factory.FactoryOpts
}
yaml := `
BCCSP:
Default: default-provider
SW:
Security: 999
`

config := viper.New()
config.SetEnvPrefix("VIPERUTIL")
config.AutomaticEnv()
replacer := strings.NewReplacer(".", "_")
config.SetEnvKeyReplacer(replacer)
config.SetConfigType("yaml")

overrideVar := "VIPERUTIL_BCCSP_SW_SECURITY"
os.Setenv(overrideVar, "1111")
defer os.Unsetenv(overrideVar)
if err := config.ReadConfig(strings.NewReader(yaml)); err != nil {
t.Fatalf("Error reading config: %s", err)
}

var tc testConfig
if err := EnhancedExactUnmarshal(config, &tc); err != nil {
t.Fatalf("Error unmarshaling: %s", err)
}

if tc.BCCSP == nil || tc.BCCSP.SwOpts == nil {
t.Fatalf("expected BCCSP.SW to be non-nil: %#v", tc)
}

if tc.BCCSP.SwOpts.SecLevel != 1111 {
t.Fatalf("expected BCCSP.SW.SecLevel to equal 1111 but was %v\n", tc.BCCSP.SwOpts.SecLevel)
}
}
3 changes: 2 additions & 1 deletion common/viperutil/config_util.go
@@ -93,6 +93,7 @@ func getKeysRecursively(base string, getKey viperGetter, nodeKeys map[string]int

func unmarshalJSON(val interface{}) (map[string]string, bool) {
mp := map[string]string{}

s, ok := val.(string)
if !ok {
logger.Debugf("Unmarshal JSON: value is not a string: %v", val)
@@ -303,7 +304,7 @@ func bccspHook(f reflect.Type, t reflect.Type, data interface{}) (interface{}, e

config := factory.GetDefaultOpts()

err := mapstructure.Decode(data, config)
err := mapstructure.WeakDecode(data, config)
if err != nil {
return nil, errors.Wrap(err, "could not decode bcssp type")
}
1 change: 0 additions & 1 deletion core/chaincode/platforms/golang/platform.go
@@ -490,7 +490,6 @@ func distributions() []dist {
// pre-populate linux architecutures
dists := map[dist]bool{
{goos: "linux", goarch: "amd64"}: true,
{goos: "linux", goarch: "s390x"}: true,
}

// add local OS and ARCH
2 changes: 1 addition & 1 deletion core/common/privdata/membershipinfo.go
@@ -24,7 +24,7 @@ type MembershipProvider struct {

// NewMembershipInfoProvider returns MembershipProvider
func NewMembershipInfoProvider(mspID string, selfSignedData protoutil.SignedData, identityDeserializerFunc func(chainID string) msp.IdentityDeserializer) *MembershipProvider {
return &MembershipProvider{selfSignedData: selfSignedData, IdentityDeserializerFactory: identityDeserializerFunc}
return &MembershipProvider{mspID: mspID, selfSignedData: selfSignedData, IdentityDeserializerFactory: identityDeserializerFunc}
}

// AmMemberOf checks whether the current peer is a member of the given collection config.
19 changes: 16 additions & 3 deletions core/common/privdata/membershipinfo_test.go
@@ -23,18 +23,20 @@ func TestMembershipInfoProvider(t *testing.T) {
Signature: []byte{1, 2, 3},
Data: []byte{4, 5, 6},
}
emptyPeerSelfSignedData := protoutil.SignedData{}

identityDeserializer := func(chainID string) msp.IdentityDeserializer {
return &mockDeserializer{}
}

// verify membership provider returns true
membershipProvider := NewMembershipInfoProvider(mspID, peerSelfSignedData, identityDeserializer)
// verify membership provider pass simple check returns true
membershipProvider := NewMembershipInfoProvider(mspID, emptyPeerSelfSignedData, identityDeserializer)
res, err := membershipProvider.AmMemberOf("test1", getAccessPolicy([]string{"peer0", "peer1"}))
assert.True(t, res)
assert.Nil(t, err)

// verify membership provider returns false
// verify membership provider fall back to default access policy evaluation returns false
membershipProvider = NewMembershipInfoProvider(mspID, peerSelfSignedData, identityDeserializer)
res, err = membershipProvider.AmMemberOf("test1", getAccessPolicy([]string{"peer2", "peer3"}))
assert.False(t, res)
assert.Nil(t, err)
@@ -48,6 +50,17 @@
res, err = membershipProvider.AmMemberOf("test1", getBadAccessPolicy([]string{"signer0"}, 1))
assert.False(t, res)
assert.Nil(t, err)

// verify membership provider with empty mspID and fall back to default access policy evaluation returns true
membershipProvider = NewMembershipInfoProvider("", peerSelfSignedData, identityDeserializer)
res, err = membershipProvider.AmMemberOf("test1", getAccessPolicy([]string{"peer0", "peer1"}))
assert.True(t, res)
assert.Nil(t, err)

// verify membership provider with empty mspID and fall back to default access policy evaluation returns false
res, err = membershipProvider.AmMemberOf("test1", getAccessPolicy([]string{"peer2", "peer3"}))
assert.False(t, res)
assert.Nil(t, err)
}

func getAccessPolicy(signers []string) *peer.CollectionPolicyConfig {
