[Fix rubocop#7] Rename NodePattern, ProcessedSource & Token => AST::*
marcandre committed May 15, 2020
1 parent 5286705 commit 86f0b17
Showing 11 changed files with 1,028 additions and 1,019 deletions.
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,9 @@

### Changes

* Classes `NodePattern`, `ProcessedSource` and `Token` moved to `AST::NodePattern`, etc.
The `rubocop` gem has aliases to ensure compatibility. [#7]

* `AST::ProcessedSource.from_file` now raises an `Errno::ENOENT` instead of a `RuboCop::Error` [#7]
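
A rough sketch of what these two entries mean for calling code (the `2.7` parser version and the file name are just example values):

```ruby
require 'rubocop-ast'

# The class now lives under the AST namespace:
src = RuboCop::AST::ProcessedSource.new('x = 1', 2.7)
src.valid_syntax? # => true

# A missing file now surfaces the standard Errno::ENOENT:
begin
  RuboCop::AST::ProcessedSource.from_file('no_such_file.rb', 2.7)
rescue Errno::ENOENT => e
  warn "missing source file: #{e.message}"
end
```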

## 0.0.2 (2020-05-12)
4 changes: 2 additions & 2 deletions README.md
@@ -5,7 +5,7 @@

Contains the classes needed by [RuboCop](https://github.com/rubocop-hq/rubocop) to deal with Ruby's AST, in particular:
* `RuboCop::AST::Node`
- * `RuboCop::NodePattern` ([doc](manual/node_pattern.md))
+ * `RuboCop::AST::NodePattern` ([doc](manual/node_pattern.md))

This gem may be used independently from the main RuboCop gem.

@@ -25,7 +25,7 @@ gem 'rubocop-ast'

## Usage

- Refer to the documentation of `RuboCop::AST::Node` and [`RuboCop::NodePattern`](manual/node_pattern.md)
+ Refer to the documentation of `RuboCop::AST::Node` and [`RuboCop::AST::NodePattern`](manual/node_pattern.md)
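
For example, the gem can be used on its own along these lines (a minimal sketch; `2.7` stands in for whichever supported parser version you target):

```ruby
require 'rubocop-ast'

source = RuboCop::AST::ProcessedSource.new('puts "hello"', 2.7)
node   = source.ast  # a RuboCop::AST::Node representing the `puts` call

# Match the node against a pattern; returns a truthy value on success.
pattern = RuboCop::AST::NodePattern.new('(send nil? :puts ...)')
pattern.match(node)  # => true
```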

## Contributing

1,492 changes: 747 additions & 745 deletions lib/rubocop/ast/node_pattern.rb

Large diffs are not rendered by default.

310 changes: 156 additions & 154 deletions lib/rubocop/ast/processed_source.rb
@@ -3,197 +3,199 @@
require 'digest/sha1'

(The whole class body moves from `RuboCop::ProcessedSource` into `RuboCop::AST::ProcessedSource`; aside from the extra nesting and re-indentation, the code is unchanged. The resulting file reads:)

module RuboCop
  module AST
    # ProcessedSource contains objects which are generated by Parser
    # and other information such as disabled lines for cops.
    # It also provides a convenient way to access source lines.
    class ProcessedSource
      STRING_SOURCE_NAME = '(string)'

      attr_reader :path, :buffer, :ast, :comments, :tokens, :diagnostics,
                  :parser_error, :raw_source, :ruby_version

      def self.from_file(path, ruby_version)
        file = File.read(path, mode: 'rb')
        new(file, ruby_version, path)
      end

      def initialize(source, ruby_version, path = nil)
        # Defaults source encoding to UTF-8, regardless of the encoding it has
        # been read with, which could be non-utf8 depending on the default
        # external encoding.
        source.force_encoding(Encoding::UTF_8) unless source.encoding == Encoding::UTF_8

        @raw_source = source
        @path = path
        @diagnostics = []
        @ruby_version = ruby_version
        @parser_error = nil

        parse(source, ruby_version)
      end

      def ast_with_comments
        return if !ast || !comments

        @ast_with_comments ||= Parser::Source::Comment.associate(ast, comments)
      end

      # Returns the source lines, line break characters removed, excluding a
      # possible __END__ and everything that comes after.
      def lines
        @lines ||= begin
          all_lines = @buffer.source_lines
          last_token_line = tokens.any? ? tokens.last.line : all_lines.size
          result = []
          all_lines.each_with_index do |line, ix|
            break if ix >= last_token_line && line == '__END__'

            result << line
          end
          result
        end
      end

      def [](*args)
        lines[*args]
      end

      def valid_syntax?
        return false if @parser_error

        @diagnostics.none? { |d| %i[error fatal].include?(d.level) }
      end

      # Raw source checksum for tracking infinite loops.
      def checksum
        Digest::SHA1.hexdigest(@raw_source)
      end

      def each_comment
        comments.each { |comment| yield comment }
      end

      def find_comment
        comments.find { |comment| yield comment }
      end

      def each_token
        tokens.each { |token| yield token }
      end

      def find_token
        tokens.find { |token| yield token }
      end

      def file_path
        buffer.name
      end

      def blank?
        ast.nil?
      end

      def commented?(source_range)
        comment_lines.include?(source_range.line)
      end

      def comments_before_line(line)
        comments.select { |c| c.location.line <= line }
      end

      def start_with?(string)
        return false if self[0].nil?

        self[0].start_with?(string)
      end

      def preceding_line(token)
        lines[token.line - 2]
      end

      def current_line(token)
        lines[token.line - 1]
      end

      def following_line(token)
        lines[token.line]
      end

      def line_indentation(line_number)
        lines[line_number - 1]
          .match(/^(\s*)/)[1]
          .to_s
          .length
      end

      private

      def comment_lines
        @comment_lines ||= comments.map { |c| c.location.line }
      end

      def parse(source, ruby_version)
        buffer_name = @path || STRING_SOURCE_NAME
        @buffer = Parser::Source::Buffer.new(buffer_name, 1)

        begin
          @buffer.source = source
        rescue EncodingError => e
          @parser_error = e
          return
        end

        @ast, @comments, @tokens = tokenize(create_parser(ruby_version))
      end

      def tokenize(parser)
        begin
          ast, comments, tokens = parser.tokenize(@buffer)

          ast.respond_to?(:complete!) && ast.complete!
        rescue Parser::SyntaxError
          # All errors are in diagnostics. No need to handle exception.
        end

        tokens = tokens.map { |t| Token.from_parser_token(t) } if tokens

        [ast, comments, tokens]
      end

      # rubocop:disable Metrics/MethodLength
      def parser_class(ruby_version)
        case ruby_version
        when 2.4
          require 'parser/ruby24'
          Parser::Ruby24
        when 2.5
          require 'parser/ruby25'
          Parser::Ruby25
        when 2.6
          require 'parser/ruby26'
          Parser::Ruby26
        when 2.7
          require 'parser/ruby27'
          Parser::Ruby27
        else
          raise ArgumentError,
                "RuboCop found unknown Ruby version: #{ruby_version.inspect}"
        end
      end
      # rubocop:enable Metrics/MethodLength

      def create_parser(ruby_version)
        builder = RuboCop::AST::Builder.new

        parser_class(ruby_version).new(builder).tap do |parser|
          # On JRuby there's a risk that we hang in tokenize() if we
          # don't set the all errors as fatal flag. The problem is caused by a bug
          # in Racc that is discussed in issue #93 of the whitequark/parser
          # project on GitHub.
          parser.diagnostics.all_errors_are_fatal = (RUBY_ENGINE != 'ruby')
          parser.diagnostics.ignore_warnings = false
          parser.diagnostics.consumer = lambda do |diagnostic|
            @diagnostics << diagnostic
          end
        end
      end
    end
  end
end
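
As a rough usage sketch of the class above (expected results are indicated in comments; `2.7` is only an example parser version):

```ruby
require 'rubocop-ast'

src = RuboCop::AST::ProcessedSource.new(<<~RUBY, 2.7)
  # greeting
  puts 'hi'
RUBY

src.blank?                                        # => false, the source parses to an AST
src.lines                                         # => ["# greeting", "puts 'hi'"]
src.start_with?('#')                              # => true, the first line is a comment
src.each_comment { |comment| puts comment.text }  # prints "# greeting"
src.line_indentation(2)                           # => 0
```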
