
Yahya/5005-fix-flakey-libp2p-test #99

Merged: 41 commits, Oct 30, 2020

Commits
c7f603a
fixes peer manager non-blocking start
yhassanzadeh13 Oct 27, 2020
ddc375f
fixes peer manager assertion issue
yhassanzadeh13 Oct 27, 2020
d1f92e9
adds RequireConcurrentCallsReturnBefore
yhassanzadeh13 Oct 27, 2020
236baf2
refactors logs at debug level
yhassanzadeh13 Oct 27, 2020
e807c5b
fixes wait group negative count issue
yhassanzadeh13 Oct 27, 2020
d8800f7
adds error level logs
yhassanzadeh13 Oct 28, 2020
c54dc5e
Merge remote-tracking branch 'origin/master' into yahya/5005-fix-flak…
yhassanzadeh13 Oct 28, 2020
3e74856
adds free port function
yhassanzadeh13 Oct 28, 2020
cf8f6db
fixes libp2p tests
yhassanzadeh13 Oct 28, 2020
e0d6e9b
refactors unnecessary suite nested calls
yhassanzadeh13 Oct 28, 2020
b3b9441
adds more unittest utils
yhassanzadeh13 Oct 28, 2020
3b4cb6d
simplifies some tests
yhassanzadeh13 Oct 28, 2020
c4a6de8
adjusts timeouts
yhassanzadeh13 Oct 28, 2020
af9e5d6
fixes TestConcurrentOnDemandPeerUpdate
yhassanzadeh13 Oct 28, 2020
b4fe909
Update network/gossip/libp2p/libp2pNode_test.go
yhassanzadeh13 Oct 28, 2020
fe48531
Update network/gossip/libp2p/libp2pNode_test.go
yhassanzadeh13 Oct 28, 2020
553b5be
Update network/gossip/libp2p/peerManager.go
yhassanzadeh13 Oct 28, 2020
57aa665
fixes lint issues
yhassanzadeh13 Oct 28, 2020
dbc419f
makes allocated ports concurrency safe
yhassanzadeh13 Oct 28, 2020
7a87076
Update network/gossip/libp2p/libp2pNode_test.go
yhassanzadeh13 Oct 28, 2020
cc27eb3
fixes deferring closing streams
yhassanzadeh13 Oct 28, 2020
0572633
Merge remote-tracking branch 'origin/yahya/5005-fix-flakey-libp2p-tes…
yhassanzadeh13 Oct 28, 2020
ae48d42
fixes TestConcurrentOnDemandPeerUpdate
yhassanzadeh13 Oct 28, 2020
4ba7992
fixes log issue
yhassanzadeh13 Oct 28, 2020
7f73b52
adds error assertion
yhassanzadeh13 Oct 28, 2020
2beb7bf
fixes failure issue
yhassanzadeh13 Oct 28, 2020
976fd27
fixes lint issue
yhassanzadeh13 Oct 28, 2020
50dc9ab
fixes unittest error
yhassanzadeh13 Oct 28, 2020
431ede2
Update network/gossip/libp2p/peerManager_test.go
yhassanzadeh13 Oct 29, 2020
93b0f1b
Update network/gossip/libp2p/test/testUtil.go
yhassanzadeh13 Oct 29, 2020
24a8840
encapsulates port allocator test helper
yhassanzadeh13 Oct 29, 2020
9e2fc91
Merge remote-tracking branch 'origin/yahya/5005-fix-flakey-libp2p-tes…
yhassanzadeh13 Oct 29, 2020
cc4fad9
fixes concurrent access on mock run method
yhassanzadeh13 Oct 29, 2020
0305d6a
Merge remote-tracking branch 'origin/master' into yahya/5005-fix-flak…
yhassanzadeh13 Oct 29, 2020
c294613
updates peer manager with ready done aware
yhassanzadeh13 Oct 30, 2020
0eac3ad
Merge remote-tracking branch 'origin/master' into yahya/5005-fix-flak…
yhassanzadeh13 Oct 30, 2020
e32feea
updates go mod
yhassanzadeh13 Oct 30, 2020
dc96082
fixes small commented code
yhassanzadeh13 Oct 30, 2020
2dea2f8
fixes comments
yhassanzadeh13 Oct 30, 2020
14e5a0b
adds require return before
yhassanzadeh13 Oct 30, 2020
43390c6
Merge remote-tracking branch 'origin/master' into yahya/5005-fix-flak…
yhassanzadeh13 Oct 30, 2020
270 changes: 133 additions & 137 deletions network/gossip/libp2p/libp2pNode_test.go

Large diffs are not rendered by default.

18 changes: 13 additions & 5 deletions network/gossip/libp2p/peerManager.go
@@ -3,6 +3,7 @@ package libp2p
import (
"context"
"fmt"
"sync"
"time"

"github.com/hashicorp/go-multierror"
@@ -53,14 +54,21 @@ func NewPeerManager(ctx context.Context, logger zerolog.Logger, idsProvider func
}

// Start kicks off the ambient periodic connection updates
// It blocks until both connection-update goroutines have started.
func (pm *PeerManager) Start() error {
go pm.updateLoop()
go pm.periodicUpdate()
wg := &sync.WaitGroup{}
Contributor: ❤️

wg.Add(2)

go pm.updateLoop(wg)
go pm.periodicUpdate(wg)

wg.Wait()
return nil
}
Contributor Author: A flakiness root cause: previously, Start was not a blocking call, but the tests running it assumed it was. Flakiness arose from assuming updateLoop and periodicUpdate were already running when Start returned, which was not always the case.

Contributor Author: Also, per conversation with @vishalchangrani, we plan to make PeerManager a ReadyDoneAware module: https://github.com/dapperlabs/flow-go/issues/5011

Member: I'm a bit uneasy with this as the resolution of the flakiness. Both these goroutines (with an exception in the next paragraph) block immediately after they begin running by entering a select statement. As far as I understand the runtime's scheduling behaviour, this goroutine state (blocked on a select statement after having been previously scheduled to run on a thread; call it waiting_after_initial_execution) is more or less equivalent to the state where the goroutine has been created but not yet scheduled to run on a thread (call it waiting_before_initial_execution). Using the waitgroup here essentially forces these two goroutines into waiting_after_initial_execution rather than waiting_before_initial_execution.

The exception I mentioned earlier is that by forcing these goroutines to be executed, for the periodicUpdate method in particular, we also force it to invoke pm.RequestPeerUpdate(). I suspect that this might be what is really addressing the flakiness. If this is true, I would suggest we simply invoke pm.RequestPeerUpdate() directly in the main Start method and remove the waitgroup altogether.

Contributor Author: Fixed in c294613.
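
For illustration, a minimal sketch of the reviewer's suggestion above: launch the goroutines and trigger the initial update directly in Start, with no WaitGroup. This is a sketch only, assuming just the PeerManager methods already shown in this diff; the merged fix in c294613 took the ReadyDoneAware route instead.

// Sketch only: the reviewer's suggested shape for Start, not the merged code.
func (pm *PeerManager) Start() error {
	go pm.updateLoop()     // serves on-demand peer update requests
	go pm.periodicUpdate() // triggers a peer update every PeerUpdateInterval

	// Trigger the first peer update directly, instead of relying on the
	// periodicUpdate goroutine having been scheduled before Start returns.
	pm.RequestPeerUpdate()
	return nil
}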


// updateLoop triggers an update peer request when it has been requested
func (pm *PeerManager) updateLoop() {
func (pm *PeerManager) updateLoop(wg *sync.WaitGroup) {
wg.Done()
for {
select {
case <-pm.ctx.Done():
@@ -72,8 +80,8 @@ func (pm *PeerManager) updateLoop() {
}

// periodicUpdate requests a periodic connection update
func (pm *PeerManager) periodicUpdate() {

func (pm *PeerManager) periodicUpdate(wg *sync.WaitGroup) {
wg.Done()
// request initial discovery
pm.RequestPeerUpdate()

163 changes: 92 additions & 71 deletions network/gossip/libp2p/peerManager_test.go
@@ -10,6 +10,7 @@ import (
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
testifymock "github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"

"github.com/onflow/flow-go/model/flow"
@@ -29,14 +30,14 @@ func TestPeerManagerTestSuite(t *testing.T) {
suite.Run(t, new(PeerManagerTestSuite))
}

func (ts *PeerManagerTestSuite) SetupTest() {
ts.log = ts.log.Output(zerolog.ConsoleWriter{Out: os.Stderr}).With().Caller().Logger()
ts.ctx = context.Background()
func (suite *PeerManagerTestSuite) SetupTest() {
suite.log = zerolog.New(os.Stderr).Level(zerolog.ErrorLevel)
suite.ctx = context.Background()
}

// TestUpdatePeers tests that updatePeers calls the connector with the expected list of ids to connect and disconnect
// from. The tests are cumulative and ordered.
func (ts *PeerManagerTestSuite) TestUpdatePeers() {
func (suite *PeerManagerTestSuite) TestUpdatePeers() {

// create some test ids
currentIDs := unittest.IdentityListFixture(10)
@@ -51,59 +52,59 @@ func (ts *PeerManagerTestSuite) TestUpdatePeers() {

// create the connector mock to check ids requested for connect and disconnect
connector := new(mock.Connector)
connector.On("ConnectPeers", ts.ctx, testifymock.AnythingOfType("flow.IdentityList")).
connector.On("ConnectPeers", suite.ctx, testifymock.AnythingOfType("flow.IdentityList")).
Run(func(args testifymock.Arguments) {
idArg := args[1].(flow.IdentityList)
assertListsEqual(ts.T(), currentIDs, idArg)
assertListsEqual(suite.T(), currentIDs, idArg)
}).
Return(nil)
connector.On("DisconnectPeers", ts.ctx, testifymock.AnythingOfType("flow.IdentityList")).
connector.On("DisconnectPeers", suite.ctx, testifymock.AnythingOfType("flow.IdentityList")).
Run(func(args testifymock.Arguments) {
idArg := args[1].(flow.IdentityList)
assertListsEqual(ts.T(), extraIDs, idArg)
assertListsEqual(suite.T(), extraIDs, idArg)
// assert that ids passed to disconnect have no id in common with those passed to connect
assertListsDisjoint(ts.T(), currentIDs, extraIDs)
assertListsDisjoint(suite.T(), currentIDs, extraIDs)
}).
Return(nil)

// create the peer manager (but don't start it)
pm := NewPeerManager(ts.ctx, ts.log, idProvider, connector)
pm := NewPeerManager(suite.ctx, suite.log, idProvider, connector)

// very first call to updatePeers
ts.Run("updatePeers only connects to all peers the first time", func() {
suite.Run("updatePeers only connects to all peers the first time", func() {

pm.updatePeers()

connector.AssertNumberOfCalls(ts.T(), "ConnectPeers", 1)
connector.AssertNotCalled(ts.T(), "DisconnectPeers")
connector.AssertNumberOfCalls(suite.T(), "ConnectPeers", 1)
connector.AssertNotCalled(suite.T(), "DisconnectPeers")
})

// a subsequent call to updatePeers should request a connect to existing ids and new ids
ts.Run("updatePeers connects to old and new peers", func() {
suite.Run("updatePeers connects to old and new peers", func() {
// create a new id
newIDs := unittest.IdentityListFixture(1)
currentIDs = append(currentIDs, newIDs...)

pm.updatePeers()

connector.AssertNumberOfCalls(ts.T(), "ConnectPeers", 2)
connector.AssertNotCalled(ts.T(), "DisconnectPeers")
connector.AssertNumberOfCalls(suite.T(), "ConnectPeers", 2)
connector.AssertNotCalled(suite.T(), "DisconnectPeers")
})

// when ids are excluded, they should be requested to be disconnected
ts.Run("updatePeers disconnects from extra peers", func() {
suite.Run("updatePeers disconnects from extra peers", func() {
// delete an id
extraIDs = currentIDs.Sample(1)
currentIDs = currentIDs.Filter(filter.Not(filter.In(extraIDs)))

pm.updatePeers()

connector.AssertNumberOfCalls(ts.T(), "ConnectPeers", 3)
connector.AssertNumberOfCalls(ts.T(), "DisconnectPeers", 1)
connector.AssertNumberOfCalls(suite.T(), "ConnectPeers", 3)
connector.AssertNumberOfCalls(suite.T(), "DisconnectPeers", 1)
})

// addition and deletion of ids should result in appropriate connect and disconnect calls
ts.Run("updatePeers connects to new peers and disconnects from extra peers", func() {
suite.Run("updatePeers connects to new peers and disconnects from extra peers", func() {
// remove a couple of ids
extraIDs = currentIDs.Sample(2)
currentIDs = currentIDs.Filter(filter.Not(filter.In(extraIDs)))
@@ -114,108 +115,128 @@ func (ts *PeerManagerTestSuite) TestUpdatePeers() {

pm.updatePeers()

connector.AssertNumberOfCalls(ts.T(), "ConnectPeers", 4)
connector.AssertNumberOfCalls(ts.T(), "DisconnectPeers", 2)
connector.AssertNumberOfCalls(suite.T(), "ConnectPeers", 4)
connector.AssertNumberOfCalls(suite.T(), "DisconnectPeers", 2)
})
}

// TestPeriodicPeerUpdate tests that the peermanager runs periodically
func (ts *PeerManagerTestSuite) TestPeriodicPeerUpdate() {
// TestPeriodicPeerUpdate tests that the peer manager runs periodically
Contributor: 👍

func (suite *PeerManagerTestSuite) TestPeriodicPeerUpdate() {
currentIDs := unittest.IdentityListFixture(10)
idProvider := func() (flow.IdentityList, error) {
return currentIDs, nil
}

connector := new(mock.Connector)
connector.On("ConnectPeers", ts.ctx, testifymock.Anything).Return(nil)
connector.On("DisconnectPeers", ts.ctx, testifymock.Anything).Return(nil)
pm := NewPeerManager(ts.ctx, ts.log, idProvider, connector)
wg := &sync.WaitGroup{} // keeps track of number of calls on `ConnectPeers`
count, times := 0, 2 // we expect it to be called twice at least
wg.Add(times)
connector.On("ConnectPeers", suite.ctx, testifymock.Anything).Run(func(args testifymock.Arguments) {
if count < times {
count++
wg.Done()
Contributor Author: To avoid a negative count on the WaitGroup.

Member: From your comment below ("Due to the asynchrony between assert.Eventually and the peer update goroutine..."): will there be multiple goroutines concurrently calling ConnectPeers? If so, I think we need to avoid concurrent writes to count in the Run function.

Contributor Author: Fixed in cc4fad9.

}
}).Return(nil)
connector.On("DisconnectPeers", suite.ctx, testifymock.Anything).Return(nil)
pm := NewPeerManager(suite.ctx, suite.log, idProvider, connector)

PeerUpdateInterval = 5 * time.Millisecond
err := pm.Start()
assert.NoError(ts.T(), err)
assert.Eventually(ts.T(), func() bool {
return connector.AssertNumberOfCalls(ts.T(), "ConnectPeers", 2)
}, 2*PeerUpdateInterval+4*time.Millisecond, 2*PeerUpdateInterval)
Contributor Author: A root cause of flakiness: in this test we want to check that we receive at least two calls to ConnectPeers, but the assertion checked for exactly two calls. Due to the asynchrony between assert.Eventually and the peer update goroutine, the method was sometimes called 3 times, which failed the test.

assert.NoError(suite.T(), err)

unittest.RequireReturnsBefore(suite.T(), wg.Wait, 2*PeerUpdateInterval,
"ConnectPeers is not running on UpdateIntervals")
}
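
The mock's Run callback above counts ConnectPeers invocations and releases the WaitGroup for at most the expected number of calls. Below is a hedged sketch of how that bookkeeping can be made safe against concurrent invocations, the concern raised in the thread above; the actual fix landed in cc4fad9 and may differ. safeCountingRun is a hypothetical helper name, and the sync and testify mock imports already used in this file are assumed.

// safeCountingRun returns a testify mock Run callback that counts invocations
// behind a mutex and releases the WaitGroup for at most `times` calls, so
// concurrent ConnectPeers invocations can neither race on the counter nor
// drive the WaitGroup negative. Hypothetical helper, for illustration only.
func safeCountingRun(wg *sync.WaitGroup, times int) func(testifymock.Arguments) {
	var mu sync.Mutex
	count := 0
	return func(_ testifymock.Arguments) {
		mu.Lock()
		defer mu.Unlock()
		if count < times {
			count++
			wg.Done()
		}
	}
}

// usage in the test above:
//   connector.On("ConnectPeers", suite.ctx, testifymock.Anything).
//       Run(safeCountingRun(wg, times)).
//       Return(nil)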

// TestOnDemandPeerUpdate tests that a peer update can be requested on demand and in between the periodic runs
func (ts *PeerManagerTestSuite) TestOnDemandPeerUpdate() {
func (suite *PeerManagerTestSuite) TestOnDemandPeerUpdate() {
currentIDs := unittest.IdentityListFixture(10)
idProvider := func() (flow.IdentityList, error) {
return currentIDs, nil
}

connector := new(mock.Connector)
connector.On("ConnectPeers", ts.ctx, testifymock.Anything).Return(nil)
connector.On("DisconnectPeers", ts.ctx, testifymock.Anything).Return(nil)
pm := NewPeerManager(ts.ctx, ts.log, idProvider, connector)

// chooses peer interval rate deliberately long to capture on demand peer update
PeerUpdateInterval = time.Hour
err := pm.Start()
assert.NoError(ts.T(), err)

// wait for the first periodic update initiated after start to complete
assert.Eventually(ts.T(), func() bool {
return connector.AssertNumberOfCalls(ts.T(), "ConnectPeers", 1)
}, 10*time.Millisecond, 1*time.Millisecond)
Contributor Author: Not exactly the same cause, but similar to the above.


// make a request for peer update
// creates mock connector
wg := &sync.WaitGroup{} // keeps track of number of calls on `ConnectPeers`
times, count := 2, 0 // we expect it to be called twice overall
wg.Add(1) // this accounts for one invocation, the other invocation is subsequent
connector := new(mock.Connector)
// captures the first periodic update initiated after start to complete
connector.On("ConnectPeers", suite.ctx, testifymock.Anything).Run(func(args testifymock.Arguments) {
if count < times {
count++
wg.Done()
}
}).Return(nil)
connector.On("DisconnectPeers", suite.ctx, testifymock.Anything).Return(nil)
pm := NewPeerManager(suite.ctx, suite.log, idProvider, connector)

// starts peer manager and waits for first periodic update
require.NoError(suite.T(), pm.Start())
unittest.RequireReturnsBefore(suite.T(), wg.Wait, 10*time.Second,
"ConnectPeers is not running on startup")

// makes a request for peer update
wg.Add(1) // expects a call to `ConnectPeers` by requesting peer update
pm.RequestPeerUpdate()

// assert that a call to connect to peers is made
assert.Eventually(ts.T(), func() bool {
return connector.AssertNumberOfCalls(ts.T(), "ConnectPeers", 2)
}, 10*time.Millisecond, 1*time.Millisecond)
unittest.RequireReturnsBefore(suite.T(), wg.Wait, 1*time.Millisecond,
"ConnectPeers is not running on request")
}
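
These tests rely on unittest.RequireReturnsBefore instead of assert.Eventually with an exact call count. As a reference for readers, a plausible shape for such a helper is sketched below; the real implementation lives in the project's unittest package and may differ in detail.

// RequireReturnsBefore-style helper (illustrative sketch, not necessarily the
// project's implementation): runs f in a goroutine and fails the test if f
// does not return within the given duration.
func RequireReturnsBefore(t *testing.T, f func(), duration time.Duration, message string) {
	done := make(chan struct{})
	go func() {
		f()
		close(done)
	}()
	select {
	case <-done:
	case <-time.After(duration):
		require.Fail(t, "function did not return in time", message)
	}
}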

// TestConcurrentOnDemandPeerUpdate tests that concurrent on-demand peer update requests never block
func (ts *PeerManagerTestSuite) TestConcurrentOnDemandPeerUpdate() {
func (suite *PeerManagerTestSuite) TestConcurrentOnDemandPeerUpdate() {
currentIDs := unittest.IdentityListFixture(10)
idProvider := func() (flow.IdentityList, error) {
return currentIDs, nil
}

ctx, cancel := context.WithCancel(ts.ctx)
ctx, cancel := context.WithCancel(suite.ctx)
defer cancel()

connector := new(mock.Connector)
// connectPeerGate channel gates the return of the connector
connectPeerGate := make(chan time.Time)
defer close(connectPeerGate)
connector.On("ConnectPeers", ctx, testifymock.Anything).Return(nil).WaitUntil(connectPeerGate)
connector.On("DisconnectPeers", ctx, testifymock.Anything).Return(nil)

pm := NewPeerManager(ctx, ts.log, idProvider, connector)
// creates mock connector
wg := &sync.WaitGroup{} // keeps track of number of calls on `ConnectPeers`
// we expect it to be called twice, i.e.,
// one for periodic update and one for the on-demand request
count, times := 0, 2
Contributor: I think in this test we should not follow this pattern, since here we want to make sure that there were never more than 2 calls made. This test is meant to verify that even if multiple concurrent RequestPeerUpdate calls are made, only one is executed and the others are no-ops. If the count goes beyond 2, it means RequestPeerUpdate does not meet that expectation. Here we can be assured that, regardless of how the CI runs these routines, the blocked ConnectPeers return will make sure that only one call returns.

Contributor Author: Thanks, fixed in af9e5d6.

wg.Add(times)
connector.On("ConnectPeers", ctx, testifymock.Anything).Run(func(args testifymock.Arguments) {
if count < times {
count++
wg.Done()
}
}).Return(nil).
WaitUntil(connectPeerGate) // blocks call for connectPeerGate channel
connector.On("DisconnectPeers", ctx, testifymock.Anything).Return(nil)
pm := NewPeerManager(ctx, suite.log, idProvider, connector)

// set the periodic interval to a high value so that periodic runs don't interfere with this test
PeerUpdateInterval = time.Hour

// start the peer manager
// this should trigger the first update, which will block until ConnectPeers returns
err := pm.Start()
assert.NoError(ts.T(), err)

// make 10 concurrent request for peer update
wg := sync.WaitGroup{}
for i := 0; i < 10; i++ {
wg.Add(1)
go func() {
pm.RequestPeerUpdate()
wg.Done()
}()
}
assert.NoError(suite.T(), err)

// assert that none of the request is blocked even if update is blocked
unittest.AssertReturnsBefore(ts.T(), wg.Wait, time.Second)
// makes 10 concurrent requests for peer update
unittest.RequireConcurrentCallsReturnBefore(suite.T(), pm.RequestPeerUpdate, 10, time.Second,
"concurrent peer update requests could not return on time")

// allow the first update to finish
// allow the first and second updates to finish
connectPeerGate <- time.Now()
Contributor: An extra return could also mean that we have either "periodic run, first on-demand run" or "periodic run, first on-demand run, second on-demand run"; the latter should never happen. Hence, I would just let everyone block, then open the gate once and count the calls.

Contributor Author: Right, fixed in ae48d42.

connectPeerGate <- time.Now()

// assert that only two calls to ConnectPeers were made (one for periodic update and one for the on-demand request)
assert.Eventually(ts.T(), func() bool {
return connector.AssertNumberOfCalls(ts.T(), "ConnectPeers", 2)
}, 10*time.Millisecond, 1*time.Millisecond)
// requires that two calls to ConnectPeers were made
unittest.RequireReturnsBefore(suite.T(), wg.Wait, 10*time.Second,
"required invocations on ConnectedPeers did not happen")
}
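
The hand-rolled loop of goroutines calling RequestPeerUpdate was replaced by unittest.RequireConcurrentCallsReturnBefore (added in commit d1f92e9). A plausible sketch of such a helper, building on the RequireReturnsBefore sketch above, is shown below; again, the project's actual implementation may differ.

// RequireConcurrentCallsReturnBefore-style helper (illustrative sketch):
// invokes f concurrently `count` times and requires that all invocations
// return within the given duration.
func RequireConcurrentCallsReturnBefore(t *testing.T, f func(), count int, duration time.Duration, message string) {
	wg := &sync.WaitGroup{}
	wg.Add(count)
	for i := 0; i < count; i++ {
		go func() {
			defer wg.Done()
			f()
		}()
	}
	RequireReturnsBefore(t, wg.Wait, duration, message)
}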

// assertListsEqual asserts that two identity lists are equal, ignoring order
6 changes: 2 additions & 4 deletions network/gossip/libp2p/test/echoengine_test.go
@@ -8,9 +8,7 @@ import (
"testing"
"time"

golog "github.com/ipfs/go-log"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
@@ -42,8 +40,8 @@ func TestStubEngineTestSuite(t *testing.T) {

func (s *EchoEngineTestSuite) SetupTest() {
const count = 2
golog.SetAllLoggers(golog.LevelError)
logger := log.Output(zerolog.ConsoleWriter{Out: os.Stderr}).With().Caller().Logger()

logger := zerolog.New(os.Stderr).Level(zerolog.ErrorLevel)
s.ids, s.mws, s.nets = generateIDsMiddlewaresNetworks(s.T(), count, logger, 100, nil, false)
}

7 changes: 1 addition & 6 deletions network/gossip/libp2p/test/epochtransition_test.go
@@ -8,9 +8,7 @@ import (
"testing"
"time"

golog "github.com/ipfs/go-log"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/stretchr/testify/assert"
testifymock "github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
@@ -48,8 +46,7 @@ func TestEpochTransitionTestSuite(t *testing.T) {
func (ts *EpochTransitionTestSuite) SetupTest() {
rand.Seed(time.Now().UnixNano())
nodeCount := 10
golog.SetAllLoggers(golog.LevelError)
ts.logger = log.Output(zerolog.ConsoleWriter{Out: os.Stderr}).With().Caller().Logger()
ts.logger = zerolog.New(os.Stderr).Level(zerolog.ErrorLevel)

// create ids
ids, mws := generateIDsAndMiddlewares(ts.T(), nodeCount, ts.logger)
@@ -106,7 +103,6 @@ func (ts *EpochTransitionTestSuite) TearDownTest() {
// TestNewNodeAdded tests that an additional node in the next epoch gets connected to other nodes and can exchange messages
// in the current epoch
func (ts *EpochTransitionTestSuite) TestNewNodeAdded() {

// create the id, middleware and network for a new node
ids, mws, nets := generateIDsMiddlewaresNetworks(ts.T(), 1, ts.logger, 100, nil, false)
newMiddleware := mws[0]
@@ -143,7 +139,6 @@ func (ts *EpochTransitionTestSuite) TestNewNodeAdded() {

// TestNodeRemoved tests that a node that is removed in the next epoch remains connected for the current epoch
func (ts *EpochTransitionTestSuite) TestNodeRemoved() {

// choose a random node to remove
removeIndex := rand.Intn(len(ts.ids))
removedID := ts.ids[removeIndex]