Implement a new analysis runner and improve U1000
This commit completely replaces the analysis runner of Staticcheck. It
fixes several performance shortcomings, as well as subtle bugs in U1000.

To explain the behaviors of the old and new runners, assume that we're
processing a package graph that looks like this:

	  A
	 ↙ ↘
	B   C
	↓
	⋮
	↓
	X

Package A is the package we wish to check. Packages B and C are direct
dependencies of A, and X is an indirect dependency of B, with
potentially many packages between B and X.

In the old runner, we would process the graph in a single DFS pass. We
would start processing A, see that it needed B and C, start loading B
and C, and so forth. This approach unnecessarily increased memory
usage: package C would be held in memory, ready to be used by A, while
the long chain from X to B was still being processed. Furthermore, A
might not need most of C's data in the first place, if A was already
fully cached. Finally, processing the graph top to bottom is harder to
parallelize efficiently.

The new runner, in contrast, first materializes the graph (the
planning phase) and then executes it from the bottom up (the execution
phase). Whenever a leaf node finishes execution, its data is cached on
disk and then unloaded from memory. The only data kept in memory is
the package's hash, so that its dependents can compute their own
hashes.
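As a sketch of how such hash chaining works (the `packageHash` helper
and its inputs are hypothetical and simplified, not Staticcheck's
actual cache-key scheme), a package's hash can be derived from its own
inputs plus the hashes of its dependencies, so that a change anywhere
below a package invalidates its cached results:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// packageHash derives a cache key from a package's own input and the
// hashes of its direct dependencies. Purely illustrative; a real
// runner hashes many more inputs (analyzers, flags, file contents).
func packageHash(input string, depHashes ...[32]byte) [32]byte {
	h := sha256.New()
	h.Write([]byte(input))
	for _, d := range depHashes {
		h.Write(d[:]) // chaining: dependency changes propagate upward
	}
	var sum [32]byte
	copy(sum[:], h.Sum(nil))
	return sum
}

func main() {
	x := packageHash("package x")
	xChanged := packageHash("package x // edited")
	// B's hash differs once any (transitive) dependency changes.
	fmt.Println(packageHash("package b", x) == packageHash("package b", xChanged)) // false
}
```

Because only these fixed-size hashes stay in memory, a dependent can
decide whether its own cached results are still valid without keeping
any dependency data loaded.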

Next, all dependents that are ready to run (i.e. that have no
unprocessed dependencies left) are executed. If a dependent needs
information from its dependencies, it loads it from disk again. This
approach drastically reduces peak memory usage, at a
slight increase in CPU usage caused by the repeated loading of data.
However, knowing the full graph allows for more efficient
parallelization, offsetting the increased CPU cost. It also favours
the common case, where most packages will have up to date cached data.

Changes to unused

The 'unused' check (U1000 and U1001) has always been the odd one out.
It is the only check that propagates information backwards in the
import graph – that is, the sum of importees determines which objects
in a package are considered used. Due to tests and test variants, this
applies even when not operating in whole-program mode.

The way we implemented this was not only expensive – whole-program
mode in particular needed to retain type information for all packages
– it was also subtly wrong. Because we cached all diagnostics of a
package, we cached stale 'unused' diagnostics when an importee
changed.

As part of writing the new analysis runner, we make several changes to
'unused' that make sure it behaves well and doesn't negate the
performance improvements of the new runner.

The most obvious change is the removal of whole-program mode. The
combination of correct caching and efficient cache usage means that we
no longer have access to the information required to compute a
whole-program solution. It never worked quite right, anyway, being
unaware of reflection, and having to grossly over-estimate the set of
used methods due to interfaces.

The normal mode of 'unused' now considers all exported package-level
identifiers as used, even if they are declared within tests or package
main. Treating exported functions in package main as unused has been
wrong ever since the addition of the 'plugin' build mode. Doing so in
tests may have been mostly correct (ignoring reflection), but
continuing to do so would complicate the implementation for little
gain.

In the new implementation, the per-package information that is cached
for U1000 consists of two lists: the list of used objects and the list
of unused objects. At the end of analysis, the lists of all packages
get merged: if any package uses an object, it is considered used.
Otherwise, if any package didn't use an object, it is considered
unused.
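A minimal sketch of that merge rule (the `usage` type and `merge`
function are invented for illustration, not the actual cached
representation):

```go
package main

import (
	"fmt"
	"sort"
)

// usage is the per-package information cached for U1000: the objects
// a package used and the ones it didn't. Types are illustrative only.
type usage struct {
	used   []string
	unused []string
}

// merge applies the rule described above: an object used by any
// package is used; otherwise, an object reported unused by any
// package is unused.
func merge(all []usage) []string {
	used := map[string]bool{}
	reported := map[string]bool{}
	for _, u := range all {
		for _, o := range u.used {
			used[o] = true
		}
		for _, o := range u.unused {
			reported[o] = true
		}
	}
	var unused []string
	for o := range reported {
		if !used[o] {
			unused = append(unused, o)
		}
	}
	sort.Strings(unused)
	return unused
}

func main() {
	fmt.Println(merge([]usage{
		{unused: []string{"a", "b"}},
		{used: []string{"b"}},
	})) // [a]
}
```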

This list-based approach is only correct if the usedness of an
exported object in one package doesn't depend on another package.

Consider the following package layout:

	foo.go:
	package pkg

	func unexported() {}

	export_test.go:
	package pkg

	func Exported() { unexported() }

	external_test.go:
	package pkg_test

	import "pkg"

	var _ = pkg.Exported

This layout has three packages: pkg, pkg [test] and pkg_test. Under
unused's old logic, pkg_test would be responsible for marking pkg
[test]'s Exported as used. This would transitively mark 'unexported'
as used, too. However, with our list-based approach, we would get the
following lists:

pkg:
  used:
  unused: unexported

pkg [test]:
  used:
  unused: unexported, Exported

pkg_test:
  used: Exported
  unused:

Merging these lists, we would never learn that 'unexported' was
actually used. To handle exported objects correctly, we would need to
cache and resolve full object graphs instead of flat lists.

This problem does not exist for unexported objects. If a package is
able to use an unexported object, the object must be declared in the
same package, which means the package can internally resolve its
object graph before generating the lists.

For completeness, these are the correct lists:

pkg:
  used:
  unused: unexported

pkg [test]:
  used: Exported, unexported
  unused:

pkg_test:
  used: Exported
  unused:

(The inclusion of Exported in pkg_test is superfluous and may be
optimized away at some point.)

As part of porting unused's tests, we discovered a flaky false
negative, caused by an incorrect implementation of our own version of
types.Identical: it still delegated to types.Identical under the
hood, which doesn't correctly account for nested types. This has been
fixed.

More changes to unused

Several planned improvements to 'unused' also made it easier to
integrate with the new runner, which is why these changes are part of
this commit.

TODO

Closes gh-233
Closes gh-284
Closes gh-476
Closes gh-538
Closes gh-576
Closes gh-671
Closes gh-675
Closes gh-690
Closes gh-691
dominikh committed May 8, 2020
1 parent 009a146 commit 6206f46
Showing 72 changed files with 3,724 additions and 2,958 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml

@@ -7,7 +7,7 @@ jobs:
     strategy:
       matrix:
         os: ["windows-latest", "ubuntu-latest", "macOS-latest"]
-        go: ["1.12.x", "1.13.x"]
+        go: ["1.13.x", "1.14.x"]
     runs-on: ${{ matrix.os }}
     steps:
     - uses: actions/checkout@v1
10 changes: 4 additions & 6 deletions cmd/staticcheck/staticcheck.go

@@ -6,7 +6,6 @@ import (
 	"os"
 
 	"golang.org/x/tools/go/analysis"
-	"honnef.co/go/tools/lint"
 	"honnef.co/go/tools/lint/lintutil"
 	"honnef.co/go/tools/simple"
 	"honnef.co/go/tools/staticcheck"
@@ -16,7 +15,6 @@ import (
 
 func main() {
 	fs := lintutil.FlagSet("staticcheck")
-	wholeProgram := fs.Bool("unused.whole-program", false, "Run unused in whole program mode")
 	debug := fs.String("debug.unused-graph", "", "Write unused's object graph to `file`")
 	fs.Parse(os.Args[1:])
 
@@ -31,14 +29,14 @@ func main() {
 		cs = append(cs, v)
 	}
 
-	u := unused.NewChecker(*wholeProgram)
+	cs = append(cs, unused.Analyzer)
 	if *debug != "" {
 		f, err := os.OpenFile(*debug, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
 		if err != nil {
 			log.Fatal(err)
 		}
-		u.Debug = f
+		unused.Debug = f
 	}
-	cums := []lint.CumulativeChecker{u}
-	lintutil.ProcessFlagSet(cs, cums, fs)
+	lintutil.ProcessFlagSet(cs, fs)
 }
62 changes: 57 additions & 5 deletions code/code.go

@@ -2,13 +2,15 @@
 package code
 
 import (
+	"bytes"
 	"flag"
 	"fmt"
 	"go/ast"
 	"go/constant"
 	"go/token"
 	"go/types"
 	"strings"
+	"sync"
 
 	"golang.org/x/tools/go/analysis"
 	"golang.org/x/tools/go/analysis/passes/inspect"
@@ -17,9 +19,55 @@ import (
 	"honnef.co/go/tools/facts"
 	"honnef.co/go/tools/go/types/typeutil"
 	"honnef.co/go/tools/ir"
-	"honnef.co/go/tools/lint"
 )
 
+var bufferPool = &sync.Pool{
+	New: func() interface{} {
+		buf := bytes.NewBuffer(nil)
+		buf.Grow(64)
+		return buf
+	},
+}
+
+func FuncName(f *types.Func) string {
+	buf := bufferPool.Get().(*bytes.Buffer)
+	buf.Reset()
+	if f.Type() != nil {
+		sig := f.Type().(*types.Signature)
+		if recv := sig.Recv(); recv != nil {
+			buf.WriteByte('(')
+			if _, ok := recv.Type().(*types.Interface); ok {
+				// gcimporter creates abstract methods of
+				// named interfaces using the interface type
+				// (not the named type) as the receiver.
+				// Don't print it in full.
+				buf.WriteString("interface")
+			} else {
+				types.WriteType(buf, recv.Type(), nil)
+			}
+			buf.WriteByte(')')
+			buf.WriteByte('.')
+		} else if f.Pkg() != nil {
+			writePackage(buf, f.Pkg())
+		}
+	}
+	buf.WriteString(f.Name())
+	s := buf.String()
+	bufferPool.Put(buf)
+	return s
+}
+
+func writePackage(buf *bytes.Buffer, pkg *types.Package) {
+	if pkg == nil {
+		return
+	}
+	s := pkg.Path()
+	if s != "" {
+		buf.WriteString(s)
+		buf.WriteByte('.')
+	}
+}
+
 type Positioner interface {
 	Pos() token.Pos
 }
@@ -34,7 +82,7 @@ func CallName(call *ir.CallCommon) string {
 		if !ok {
 			return ""
 		}
-		return lint.FuncName(fn)
+		return FuncName(fn)
 	case *ir.Builtin:
 		return v.Name()
 	}
@@ -244,12 +292,12 @@ func CallNameAST(pass *analysis.Pass, call *ast.CallExpr) string {
 		if !ok {
 			return ""
 		}
-		return lint.FuncName(fn)
+		return FuncName(fn)
 	case *ast.Ident:
 		obj := pass.TypesInfo.ObjectOf(fun)
 		switch obj := obj.(type) {
 		case *types.Func:
-			return lint.FuncName(obj)
+			return FuncName(obj)
 		case *types.Builtin:
 			return obj.Name()
 		default:
@@ -472,7 +520,11 @@ func MayHaveSideEffects(pass *analysis.Pass, expr ast.Expr, purity facts.PurityR
 }
 
 func IsGoVersion(pass *analysis.Pass, minor int) bool {
-	version := pass.Analyzer.Flags.Lookup("go").Value.(flag.Getter).Get().(int)
+	f, ok := pass.Analyzer.Flags.Lookup("go").Value.(flag.Getter)
+	if !ok {
+		panic("requested Go version, but analyzer has no version flag")
+	}
+	version := f.Get().(int)
 	return version >= minor
 }
 
107 changes: 107 additions & 0 deletions facts/directives.go
@@ -0,0 +1,107 @@
package facts

import (
	"go/ast"
	"go/token"
	"path/filepath"
	"reflect"
	"strings"

	"golang.org/x/tools/go/analysis"
)

// A directive is a comment of the form '//lint:<command>
// [arguments...]'. It represents instructions to the static analysis
// tool.
type Directive struct {
	Command   string
	Arguments []string
	Directive *ast.Comment
	Node      ast.Node
}

type SerializedDirective struct {
	Command   string
	Arguments []string
	// The position of the comment
	DirectivePosition token.Position
	// The position of the node that the comment is attached to
	NodePosition token.Position
}

func parseDirective(s string) (cmd string, args []string) {
	if !strings.HasPrefix(s, "//lint:") {
		return "", nil
	}
	s = strings.TrimPrefix(s, "//lint:")
	fields := strings.Split(s, " ")
	return fields[0], fields[1:]
}

func directives(pass *analysis.Pass) (interface{}, error) {
	return ParseDirectives(pass.Files, pass.Fset), nil
}

func ParseDirectives(files []*ast.File, fset *token.FileSet) []Directive {
	var dirs []Directive
	for _, f := range files {
		// OPT(dh): in our old code, we skip all the commentmap work if we
		// couldn't find any directives, benchmark if that's actually
		// worth doing
		cm := ast.NewCommentMap(fset, f, f.Comments)
		for node, cgs := range cm {
			for _, cg := range cgs {
				for _, c := range cg.List {
					if !strings.HasPrefix(c.Text, "//lint:") {
						continue
					}
					cmd, args := parseDirective(c.Text)
					d := Directive{
						Command:   cmd,
						Arguments: args,
						Directive: c,
						Node:      node,
					}
					dirs = append(dirs, d)
				}
			}
		}
	}
	return dirs
}

// duplicated from report.DisplayPosition to break import cycle
func displayPosition(fset *token.FileSet, p token.Pos) token.Position {
	if p == token.NoPos {
		return token.Position{}
	}

	// Only use the adjusted position if it points to another Go file.
	// This means we'll point to the original file for cgo files, but
	// we won't point to a YACC grammar file.
	pos := fset.PositionFor(p, false)
	adjPos := fset.PositionFor(p, true)

	if filepath.Ext(adjPos.Filename) == ".go" {
		return adjPos
	}

	return pos
}

var Directives = &analysis.Analyzer{
	Name:             "directives",
	Doc:              "extracts linter directives",
	Run:              directives,
	RunDespiteErrors: true,
	ResultType:       reflect.TypeOf([]Directive{}),
}

func SerializeDirective(dir Directive, fset *token.FileSet) SerializedDirective {
	return SerializedDirective{
		Command:           dir.Command,
		Arguments:         dir.Arguments,
		DirectivePosition: displayPosition(fset, dir.Directive.Pos()),
		NodePosition:      displayPosition(fset, dir.Node.Pos()),
	}
}
