Switch to go-kit log #3061

Closed
wants to merge 3 commits into from
90 changes: 52 additions & 38 deletions cmd/prometheus/main.go
@@ -26,17 +26,21 @@ import (
"syscall"
"time"

k8s_runtime "k8s.io/apimachinery/pkg/util/runtime"

"github.com/asaskevich/govalidator"
"github.com/go-kit/kit/log"
"github.com/go-kit/kit/log/level"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"github.com/prometheus/common/model"
"github.com/prometheus/common/version"
"golang.org/x/net/context"
"gopkg.in/alecthomas/kingpin.v2"

"github.com/prometheus/prometheus/config"
"github.com/prometheus/prometheus/notifier"
"github.com/prometheus/prometheus/pkg/promlog"
"github.com/prometheus/prometheus/promql"
"github.com/prometheus/prometheus/retrieval"
"github.com/prometheus/prometheus/rules"
@@ -80,8 +84,7 @@ func main() {

prometheusURL string

logFormat string
logLevel string
logLevel promlog.AllowedLevel
}{
notifier: notifier.Options{
Registerer: prometheus.DefaultRegisterer,
@@ -94,13 +97,8 @@ func main() {

a.HelpFlag.Short('h')

a.Flag("log.level",
"Only log messages with the given severity or above. One of: [debug, info, warn, error, fatal]").
Default("info").StringVar(&cfg.logLevel)

a.Flag("log.format",
`Set the log target and format. Example: "logger:syslog?appname=bob&local=7" or "logger:stdout?json=true"`).
Default("logger:stderr").StringVar(&cfg.logFormat)
a.Flag(promlog.LevelFlagName, promlog.LevelFlagHelp).
Contributor

This line of code doesn't belong in every single exporter; it should be provided by a function in a common library, just as we're currently doing with the current log library and kingpin.

The only change an exporter using this should need in order to pick up new flags (such as if we add JSON back) is to update its vendoring.

Contributor Author

I disagree here. This would require our logging package to depend on our chosen flag package, which is very unnecessary.
Plugging basic things together in main is completely fine.

Those unnecessary cross-dependencies tend to propagate and you end up with all those packages importing each other.

Example: we switched to a different flag package in prometheus/prometheus. All other applications should too, but it will propagate gradually.

Contributor Author (@fabxc, Aug 12, 2017)

In a more general sense, as it keeps coming back as a point we disagree on: not every character of code duplication is bad. Abstracting every trivial fragment into some package makes our code bases rigid and changes significantly more painful. At the same time it does not actually do a very good job of making things more unified.

Consider it Go (or programming in general) best practice.

A little copying is better than a little dependency

https://go-proverbs.github.io/

Contributor

This would require our logging package to depend on our chosen flag package, which is very unnecessary.

That would be the simplest and most obvious way to do it, but it's not the only way.

In a more general sense, as it keeps coming back as a point we disagree on: not every character of code duplication is bad

I agree in general. There's a little one-off copying, and then there's a little copying that would regularly need to be recopied into tens if not hundreds of exporters. This is the latter class.

This fragment may be trivial now, but what happens as more flags are added?

Example: we switched to a different flag package in prometheus/prometheus. All other applications should too, but it will propagate gradually.

That's a different class of change: an intrusive, labour-intensive change that's unavoidable if you want to switch to a different library.

My problem with your proposal is that for every new feature added to the log library, it takes what could be a trivial change for library users and changes it instead into a labour-intensive change.

Abstracting every trivial fragment into some package makes our code bases rigid and changes significantly more painful. At the same time it does not actually do a very good job of making things more unified.

I disagree; having the names and meanings of our flags in a package that exporters can import and call ~1 function from makes changes far easier, and ensures that all exporters can easily keep up to date and in sync.

We already have a standard low-effort way of managing that, and that's vendoring. If adding flags or new output formats to the logging library requires anything beyond updating vendoring, then that's the wrong design, as it creates unnecessary effort for tens to hundreds of developers.

Contributor Author (@fabxc, Aug 12, 2017)

That would be the simplest and most obvious way to do it, but it's not the only way.

Yes, the alternative is to define some RegisterFlags interface that we can pass in. But from your general position I take it that you will also object to every component defining a helper type to implement that interface around its flag library.
Hence, the only solution is that we also wrap our canonical flag package, giving us yet more maintenance burden.
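
For illustration, a minimal sketch of what such a RegisterFlags-style indirection could look like; none of these names exist in the PR, and the adapter each application would need is exactly the helper type mentioned above:

// Hypothetical sketch only; not part of this PR.
package promlog

// Value mirrors the setter interface that flag libraries such as kingpin
// already accept (Set plus String).
type Value interface {
	Set(string) error
	String() string
}

// FlagSet is the minimal surface promlog would need from any flag library.
type FlagSet interface {
	Var(name, help string, v Value)
}

// RegisterFlags registers the log-level flag without promlog importing
// any particular flag package.
func RegisterFlags(fs FlagSet, lvl *AllowedLevel) {
	fs.Var(LevelFlagName, LevelFlagHelp, lvl)
}

Each application would then wrap kingpin (or whatever flag library it uses) in a small type implementing FlagSet.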

One crucial thing that is not being considered (and which could also bite us one day with extracting a bunch of small config types) is that we are offloading things that are part of various components' stability guarantees into a package that is imported by many.
This means that components that cannot break anything right now are stuck in a limbo where they cannot update such a package, even with crucial bug/security fixes, because this update may also pull in a breakage.

The amount of manual(!) effort required to avoid doing this accidentally, or to temporarily fork/port important partial fixes (=~ up to years), is significantly higher than acknowledging that there's a new logging option and adding a flag for it.
The constants we have ensure the important parts of uniformity.

instead into a labour-intensive change.

It's hardly any more effort than updating the vendoring, and per my above reasoning, this should even be a manual and conscious step and not a side effect of vendoring.
Blindly trusting those constants for our stable flags is the maximum I'm willing to do.

I have heard all your arguments, but I will strongly object to what's proposed here overall.
I'm saying that because I initiated everything that is the common repository today, and in the long run it has caused nothing but trouble. It was one of my major mistakes with regard to our code base.
It's a very expensive solution that only pays off at a development throughput which we do not remotely have today and will in all likelihood never have.

Contributor Author

I've been thinking a bit about a middle ground.
The issue of easily introducing breaking changes and having to introduce weird forks is fixed. For many cases there is simply no significantly better way than just duplicating some code, and that's fine IMO.

For the problem of plugging together the core plumbing here across components: I think the right solution is to write a skeleton generator that creates the basic Go files. That would be generally quite useful for people to quickly write their own exporters.
We can then add new flags etc. to the generated files. Then the generator can just be re-run against a repo, and one can easily verify potential breaking changes by running git diff and remove them if necessary.
There should generally be no need to modify the generated structures significantly, so no hard-to-resolve diffs should occur.

That seems like the right tooling to solve the problem here, and it generally allows us to easily apply such changes to a wide range of repos.

Contributor

But from your general position I take it that you will also object to every component defining a helper type to implement that interface around its flag library.

Actually I'd be okay with that. The goal is that you only have to put in the code to support "Prometheus logging" once, and everything after that is just updating vendoring. Whether that boilerplate code is 1 line or 10 isn't that important, though shorter is better.

What I'm not okay with is the boilerplate changing.

This means that components that cannot break anything right now are stuck in a limbo where they cannot update such a package, even with crucial bug/security fixes, because this update may also pull in a breakage.

This doesn't change with your proposal. It just moves that pain around.

It's hardly any more effort than updating the vendoring, and per my above reasoning, this should even be a manual and conscious step and not a side effect of vendoring.

It is notable effort, as it requires intelligence to deal with. I'm against this requiring any conscious thought in the general case, as that means in practice that no one would ever update. Or maybe they'd choose another library that doesn't impose a maintenance burden every time it gets a new feature, and we'd end up with inconsistency in the ecosystem.

The amount of manual(!) effort required to avoid doing this accidentally, or to temporarily fork/port important partial fixes (=~ up to years), is significantly higher than acknowledging that there's a new logging option and adding a flag for it.

Significantly higher effort for whom? We should be considering not just our effort (most of which we're going to have to do anyway), but the knock-on effects for developers in the ecosystem using this library and for end users. I don't accept the idea that hundreds of developers having to manually review every change to a dependency and update their code for it is less effort than us making a library that doesn't require that review and update.

For me this is just the standard cost of common code, which we will be taking on no matter what we decide. I believe the issues you're concerned about are irreducible complexity, so they'll just show up to bite us and the ecosystem in another form. What I propose makes this cost more manageable.

The constants we have ensure the important parts of uniformity.

I personally don't like these constants at all, as they add indirection that makes it harder to figure out where and how flags are actually defined.

I'm saying that because I initiated everything that is the common repository today, and in the long run it has caused nothing but trouble. It was one of my major mistakes with regard to our code base.

I think common has generally worked fine, and is widely used. Some of the code in some of the packages may not be the best, but as common libraries go it meets my expectations.

It's a very expensive solution that only pays off at a development throughput which we do not remotely have today and will in all likelihood never have.

I'm not seeing this high expense, or why development throughput is related.

From my standpoint, one application depending on the internals of another is verboten. The instant that is about to happen, that code should instead be moved out to a common library (which for us is usually somewhere in prometheus/common). Keeping non-trivial (and in many cases trivial) copied-and-pasted code in sync across two applications is not really practical, let alone across tens to hundreds.

Putting common code in the Prometheus repo doesn't stop it from being a common library or change any of the maintenance burdens that having common libraries implies. It just muddles the boundaries between what is considered internal and what is considered external to Prometheus.

Contributor

I think the right solution is to write a skeleton generator that creates the basic Go files. That would be generally quite useful for people to quickly write their own exporters.

I think that'd be useful in general, and it's already somewhere in my backlog of ideas.

We can then add new flags etc. to the generated files. Then the generator can just be re-run against a repo, and one can easily verify potential breaking changes by running git diff and remove them if necessary.
There should generally be no need to modify the generated structures significantly, so no hard-to-resolve diffs should occur.

This has been done countless times before. From everything I've seen and read, the second you modify the generated files this breaks down. For example, it works for proto and lex files, but not for Ruby on Rails or Maven projects.

I think a library function is the way to go here. We can recommend/generate boilerplate that does the plumbing with that function, but the boilerplate itself should very rarely change.
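
For concreteness, a rough sketch of what that single library function could look like, assuming promlog is allowed to import kingpin; AddFlags is a hypothetical name, and the PR itself only exposes the LevelFlagName/LevelFlagHelp constants and leaves the wiring to each main:

// Hypothetical helper; not part of this PR.
package promlog

import "gopkg.in/alecthomas/kingpin.v2"

// AddFlags registers the promlog flags on a kingpin application so that an
// exporter picks up newly added flags by doing nothing more than updating
// its vendored copy of this package.
func AddFlags(a *kingpin.Application, lvl *AllowedLevel) {
	a.Flag(LevelFlagName, LevelFlagHelp).
		Default("info").
		SetValue(lvl)
}

main would then shrink to promlog.AddFlags(a, &cfg.logLevel), at the cost of the flag-package dependency objected to above.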

Default("info").SetValue(&cfg.logLevel)

a.Flag("config.file", "Prometheus configuration file path.").
Default("prometheus.yml").StringVar(&cfg.configFile)
@@ -197,13 +195,20 @@ func main() {

cfg.queryEngine.Timeout = time.Duration(cfg.queryTimeout)

logger := log.NewLogger(os.Stdout)
logger.SetLevel(cfg.logLevel)
logger.SetFormat(cfg.logFormat)
logger := promlog.New(cfg.logLevel)

// XXX(fabxc): The Kubernetes does background logging which we can only customize by modifying
Member

"The Kubernetes" -> "The Kubernetes client library"?

// a global variable.
// Ultimately, here is the best place to set it.
k8s_runtime.ErrorHandlers = []func(error){
func(err error) {
level.Error(log.With(logger, "component", "k8s_client_runtime")).Log("err", err)
},
}

logger.Infoln("Starting prometheus", version.Info())
logger.Infoln("Build context", version.BuildContext())
logger.Infoln("Host details", Uname())
level.Info(logger).Log("msg", "Starting prometheus", "version", version.Info())
level.Info(logger).Log("build_context", version.BuildContext())
level.Info(logger).Log("host_details", Uname())

var (
// sampleAppender = storage.Fanout{}
@@ -215,22 +220,30 @@ func main() {
hup := make(chan os.Signal)
hupReady := make(chan bool)
signal.Notify(hup, syscall.SIGHUP)
logger.Infoln("Starting tsdb")
localStorage, err := tsdb.Open(cfg.localStoragePath, prometheus.DefaultRegisterer, &cfg.tsdb)

level.Info(logger).Log("msg", "Starting TSDB")

localStorage, err := tsdb.Open(
cfg.localStoragePath,
log.With(logger, "component", "tsdb"),
prometheus.DefaultRegisterer,
&cfg.tsdb,
)
if err != nil {
log.Errorf("Opening storage failed: %s", err)
level.Error(logger).Log("msg", "Opening TSDB failed", "err", err)
os.Exit(1)
}
logger.Infoln("tsdb started")

remoteStorage := &remote.Storage{}
level.Info(logger).Log("msg", "TSDB succesfully started")
Member

succesfully -> successfully


remoteStorage := remote.NewStorage(log.With(logger, "component", "remote"))
reloadables = append(reloadables, remoteStorage)
fanoutStorage := storage.NewFanout(tsdb.Adapter(localStorage), remoteStorage)
fanoutStorage := storage.NewFanout(logger, tsdb.Adapter(localStorage), remoteStorage)

cfg.queryEngine.Logger = logger
cfg.queryEngine.Logger = log.With(logger, "component", "query engine")
var (
notifier = notifier.New(&cfg.notifier, logger)
targetManager = retrieval.NewTargetManager(fanoutStorage, logger)
notifier = notifier.New(&cfg.notifier, log.With(logger, "component", "notifier"))
targetManager = retrieval.NewTargetManager(fanoutStorage, log.With(logger, "component", "target manager"))
queryEngine = promql.NewEngine(fanoutStorage, &cfg.queryEngine)
ctx, cancelCtx = context.WithCancel(context.Background())
)
@@ -241,7 +254,7 @@ func main() {
QueryEngine: queryEngine,
Context: ctx,
ExternalURL: cfg.web.ExternalURL,
Logger: logger,
Logger: log.With(logger, "component", "rule manager"),
})

cfg.web.Context = ctx
@@ -265,12 +278,12 @@ func main() {
cfg.web.Flags[f.Name] = f.Value.String()
}

webHandler := web.New(&cfg.web)
webHandler := web.New(log.With(logger, "componennt", "web"), &cfg.web)
Member

componennt -> component


reloadables = append(reloadables, targetManager, ruleManager, webHandler, notifier)

if err := reloadConfig(cfg.configFile, logger, reloadables...); err != nil {
logger.Errorf("Error loading config: %s", err)
level.Error(logger).Log("msg", "Error loading config", "err", err)
os.Exit(1)
}

@@ -283,11 +296,11 @@ func main() {
select {
case <-hup:
if err := reloadConfig(cfg.configFile, logger, reloadables...); err != nil {
logger.Errorf("Error reloading config: %s", err)
level.Error(logger).Log("msg", "Error reloading config", "err", err)
}
case rc := <-webHandler.Reload():
if err := reloadConfig(cfg.configFile, logger, reloadables...); err != nil {
logger.Errorf("Error reloading config: %s", err)
level.Error(logger).Log("msg", "Error reloading config", "err", err)
rc <- err
} else {
rc <- nil
@@ -299,7 +312,7 @@ func main() {
// Start all components. The order is NOT arbitrary.
defer func() {
if err := fanoutStorage.Close(); err != nil {
log.Errorln("Error stopping storage:", err)
level.Error(logger).Log("msg", "Closing storage(s) failed", "err", err)
}
}()

@@ -331,20 +344,20 @@ func main() {

// Set web server to ready.
webHandler.Ready()
log.Info("Server is Ready to receive requests.")
level.Info(logger).Log("msg", "Server is Ready to receive requests.")

term := make(chan os.Signal)
signal.Notify(term, os.Interrupt, syscall.SIGTERM)
select {
case <-term:
logger.Warn("Received SIGTERM, exiting gracefully...")
level.Warn(logger).Log("msg", "Received SIGTERM, exiting gracefully...")
case <-webHandler.Quit():
logger.Warn("Received termination request via web service, exiting gracefully...")
level.Warn(logger).Log("msg", "Received termination request via web service, exiting gracefully...")
case err := <-errc:
logger.Errorln("Error starting web server, exiting gracefully:", err)
level.Error(logger).Log("msg", "Error starting web server, exiting gracefully", "err", err)
}

logger.Info("See you next time!")
level.Info(logger).Log("msg", "See you next time!")
}

// Reloadable things can change their internal state to match a new config
@@ -354,7 +367,8 @@ type Reloadable interface {
}

func reloadConfig(filename string, logger log.Logger, rls ...Reloadable) (err error) {
logger.Infof("Loading configuration file %s", filename)
level.Info(logger).Log("msg", "Loading configuration file", "filename", filename)

defer func() {
if err == nil {
configSuccess.Set(1)
@@ -372,7 +386,7 @@ func reloadConfig(filename string, logger log.Logger, rls ...Reloadable) (err er
failed := false
for _, rl := range rls {
if err := rl.ApplyConfig(conf); err != nil {
logger.Error("Failed to apply configuration: ", err)
level.Error(logger).Log("msg", "Failed to apply configuration", "err", err)
failed = true
}
}
19 changes: 12 additions & 7 deletions discovery/azure/azure.go
@@ -23,8 +23,9 @@ import (
"github.com/Azure/azure-sdk-for-go/arm/network"
"github.com/Azure/go-autorest/autorest/azure"

"github.com/go-kit/kit/log"
"github.com/go-kit/kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"github.com/prometheus/common/model"
"golang.org/x/net/context"

@@ -71,6 +72,9 @@ type Discovery struct {

// NewDiscovery returns a new AzureDiscovery which periodically refreshes its targets.
func NewDiscovery(cfg *config.AzureSDConfig, logger log.Logger) *Discovery {
if logger == nil {
logger = log.NewNopLogger()
Contributor

I'm not sure that logging nowhere when a developer forgets to pass in a logger is the most robust choice. I'd argue it's better to send logs somewhere rather than nowhere when there's a production problem being debugged.

Contributor Author (@fabxc, Aug 12, 2017)

This is Go best practice. Passing a nil logger is an explicit signal that we do not want anything to be logged. The NopLogger default is used so that we don't have to guard every call to the logger with an if.
If people want logging, they should pass in a logger.

}
return &Discovery{
cfg: cfg,
interval: time.Duration(cfg.RefreshInterval),
@@ -93,7 +97,7 @@ func (d *Discovery) Run(ctx context.Context, ch chan<- []*config.TargetGroup) {

tg, err := d.refresh()
if err != nil {
d.logger.Errorf("unable to refresh during Azure discovery: %s", err)
level.Error(d.logger).Log("msg", "Unable to refresh during Azure discovery", "err", err)
} else {
select {
case <-ctx.Done():
@@ -149,7 +153,7 @@ func newAzureResourceFromID(id string, logger log.Logger) (azureResource, error)
s := strings.Split(id, "/")
if len(s) != 9 {
err := fmt.Errorf("invalid ID '%s'. Refusing to create azureResource", id)
logger.Error(err)
level.Error(logger).Log("err", err)
return azureResource{}, err
}
return azureResource{
@@ -159,6 +163,8 @@ func newAzureResourceFromID(id string, logger log.Logger) (azureResource, error)
}

func (d *Discovery) refresh() (tg *config.TargetGroup, err error) {
defer level.Debug(d.logger).Log("msg", "Azure discovery completed")

t0 := time.Now()
defer func() {
azureSDRefreshDuration.Observe(time.Since(t0).Seconds())
@@ -187,7 +193,7 @@ func (d *Discovery) refresh() (tg *config.TargetGroup, err error) {
}
machines = append(machines, *result.Value...)
}
d.logger.Debugf("Found %d virtual machines during Azure discovery.", len(machines))
level.Debug(d.logger).Log("msg", "Found virtual machines during Azure discovery.", "count", len(machines))

// We have the slice of machines. Now turn them into targets.
// Doing them in go routines because the network interface calls are slow.
@@ -228,7 +234,7 @@ func (d *Discovery) refresh() (tg *config.TargetGroup, err error) {
}
networkInterface, err := client.nic.Get(r.ResourceGroup, r.Name, "")
if err != nil {
d.logger.Errorf("Unable to get network interface %s: %s", r.Name, err)
level.Error(d.logger).Log("msg", "Unable to get network interface", "name", r.Name, "err", err)
ch <- target{labelSet: nil, err: err}
// Get out of this routine because we cannot continue without a network interface.
return
@@ -239,7 +245,7 @@ func (d *Discovery) refresh() (tg *config.TargetGroup, err error) {
// yet support this. On deallocated machines, this value happens to be nil so it
// is a cheap and easy way to determine if a machine is allocated or not.
if networkInterface.Properties.Primary == nil {
d.logger.Debugf("Virtual machine %s is deallocated. Skipping during Azure SD.", *vm.Name)
level.Debug(d.logger).Log("msg", "Skipping deallocated virtual machine", "machine", *vm.Name)
ch <- target{}
return
}
@@ -274,6 +280,5 @@ func (d *Discovery) refresh() (tg *config.TargetGroup, err error) {
}
}

d.logger.Debugf("Azure discovery completed.")
return tg, nil
}
13 changes: 9 additions & 4 deletions discovery/consul/consul.go
@@ -21,9 +21,10 @@ import (
"strings"
"time"

"github.com/go-kit/kit/log"
"github.com/go-kit/kit/log/level"
consul "github.com/hashicorp/consul/api"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/config"
"github.com/prometheus/prometheus/util/httputil"
@@ -94,6 +95,10 @@ type Discovery struct {

// NewDiscovery returns a new Discovery for the given config.
func NewDiscovery(conf *config.ConsulSDConfig, logger log.Logger) (*Discovery, error) {
if logger == nil {
logger = log.NewNopLogger()
}

tls, err := httputil.NewTLSConfig(conf.TLSConfig)
if err != nil {
return nil, err
@@ -165,7 +170,7 @@ func (d *Discovery) Run(ctx context.Context, ch chan<- []*config.TargetGroup) {
}

if err != nil {
d.logger.Errorf("Error refreshing service list: %s", err)
level.Error(d.logger).Log("msg", "Error refreshing service list", "err", err)
rpcFailuresCount.Inc()
time.Sleep(retryInterval)
continue
@@ -181,7 +186,7 @@ func (d *Discovery) Run(ctx context.Context, ch chan<- []*config.TargetGroup) {
if d.clientDatacenter == "" {
info, err := d.client.Agent().Self()
if err != nil {
d.logger.Errorf("Error retrieving datacenter name: %s", err)
level.Error(d.logger).Log("msg", "Error retrieving datacenter name", "err", err)
time.Sleep(retryInterval)
continue
}
@@ -262,7 +267,7 @@ func (srv *consulService) watch(ctx context.Context, ch chan<- []*config.TargetG
}

if err != nil {
srv.logger.Errorf("Error refreshing service %s: %s", srv.name, err)
level.Error(srv.logger).Log("msg", "Error refreshing service", "service", srv.name, "err", err)
rpcFailuresCount.Inc()
time.Sleep(retryInterval)
continue
5 changes: 2 additions & 3 deletions discovery/consul/consul_test.go
@@ -16,14 +16,13 @@ package consul
import (
"testing"

"github.com/prometheus/common/log"
"github.com/prometheus/prometheus/config"
)

func TestConfiguredService(t *testing.T) {
conf := &config.ConsulSDConfig{
Services: []string{"configuredServiceName"}}
consulDiscovery, err := NewDiscovery(conf, log.Base())
consulDiscovery, err := NewDiscovery(conf, nil)

if err != nil {
t.Errorf("Unexpected error when initialising discovery %v", err)
@@ -38,7 +37,7 @@ func TestConfiguredService(t *testing.T) {

func TestNonConfiguredService(t *testing.T) {
conf := &config.ConsulSDConfig{}
consulDiscovery, err := NewDiscovery(conf, log.Base())
consulDiscovery, err := NewDiscovery(conf, nil)

if err != nil {
t.Errorf("Unexpected error when initialising discovery %v", err)