
Various RDoc documentation improvements

1 parent 4c9fe06 commit be09262744bc5d51dff8c07ff29b4b4210f47019 @jeremyevans committed Jul 18, 2011
@@ -573,7 +573,7 @@ You can execute custom code when creating, updating, or deleting records by defi
Note the use of +super+ if you define your own hook methods. Almost all <tt>Sequel::Model</tt> class and instance methods (not just hook methods) can be overridden safely, but you have to make sure to call +super+ when doing so, otherwise you risk breaking things.
-For the example above, you should probably use a database trigger if you can. Hooks can be used for data integrity, but they will only enforce that integrity when you are modifying the database through model instances. If you plan on allowing any other access to the database, it's best to use database triggers for data integrity.
+For the example above, you should probably use a database trigger if you can. Hooks can be used for data integrity, but they will only enforce that integrity when you are modifying the database through model instances. If you plan on allowing any other access to the database, it's best to use database triggers and constraints for data integrity.
=== Deleting records
@@ -681,9 +681,9 @@ Associations can be eagerly loaded via +eager+ and the <tt>:eager</tt> associati
# Loads all people, their posts, their posts' tags, replies to those posts,
# the person for each reply, the tag for each reply, and all posts and
# replies that have that tag. Uses a total of 8 queries.
- Person.eager(:posts=>{:replies=>[:person, {:tags=>{:posts, :replies}}]}).all
+ Person.eager(:posts=>{:replies=>[:person, {:tags=>[:posts, :replies]}]}).all
-In addition to using +eager+, you can also use +eager_graph+, which will use a single query to get the object and all associated objects. This may be necessary if you want to filter or order the result set based on columns in associated tables. It works with cascading as well, the syntax is exactly the same. Note that using eager_graph to eagerly load multiple *_to_many associations will cause the result set to be a cartesian product, so you should be very careful with your filters when using it in that case.
+In addition to using +eager+, you can also use +eager_graph+, which will use a single query to get the object and all associated objects. This may be necessary if you want to filter or order the result set based on columns in associated tables. It works with cascading as well; the API is very similar. Note that using +eager_graph+ to eagerly load multiple <tt>*_to_many</tt> associations will cause the result set to be a cartesian product, so you should be very careful with your filters when using it in that case.
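The cartesian-product caveat above can be sketched in plain Ruby (illustrative only, not Sequel code): joining one object against two <tt>*_to_many</tt> associations in a single query yields one row per combination, not one row per associated object.

```ruby
# One post with 2 replies and 3 tags: a single JOINed query returns
# one row per reply/tag combination (2 * 3), not 2 + 3 rows.
replies = [:reply1, :reply2]
tags    = [:tag1, :tag2, :tag3]

rows = replies.product(tags)
rows.length # => 6
```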
You can dynamically customize the eagerly loaded dataset by using a proc. This proc is passed the dataset used for eager loading, and should return a modified copy of that dataset:
@@ -230,7 +230,7 @@ def self.string_to_datetime(string)
# Converts the given +string+ into a +Time+ object.
#
- # Sequel.string_to_datetime('10:20:30') # Time.parse('10:20:30')
+ # Sequel.string_to_time('10:20:30') # Time.parse('10:20:30')
def self.string_to_time(string)
begin
Time.parse(string)
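The corrected comment above points at <tt>Time.parse</tt>; a minimal stand-alone illustration of its behavior (pure Ruby — for a time-only string, the date part defaults to today):

```ruby
require 'time' # Time.parse is provided by the stdlib 'time' library

t = Time.parse('10:20:30') # time-only string: date defaults to today
[t.hour, t.min, t.sec]     # => [10, 20, 30]
```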
@@ -155,8 +155,8 @@ def database_type
# Disconnects all available connections from the connection pool. Any
# connections currently in use will not be disconnected. Options:
- # * :servers - Should be a symbol specifing the server to disconnect from,
- # or an array of symbols to specify multiple servers.
+ # :servers :: Should be a symbol specifying the server to disconnect from,
+ # or an array of symbols to specify multiple servers.
#
# Example:
#
@@ -7,7 +7,7 @@ class Database
# Converts a uri to an options hash. These options are then passed
# to a newly created database object.
- def self.uri_to_options(uri) # :nodoc:
+ def self.uri_to_options(uri)
{ :user => uri.user,
:password => uri.password,
:host => uri.host,
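The conversion this method performs can be sketched with Ruby's stdlib <tt>URI</tt> class (the connection URL and the exact option set here are illustrative; the real method handles additional options):

```ruby
require 'uri'

uri = URI.parse('postgres://alice:secret@dbhost:5432/mydb')

# Roughly the kind of hash uri_to_options builds from the parsed URI:
opts = {
  :user     => uri.user,
  :password => uri.password,
  :host     => uri.host,
  :port     => uri.port,
  :database => uri.path[1..-1] # strip the leading slash
}
opts[:database] # => "mydb"
```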
@@ -255,14 +255,17 @@ def typecast_value_float(value)
Float(value)
end
- LEADING_ZERO_RE = /\A0+(\d)/.freeze # :nodoc:
+ # Used for checking/removing leading zeroes from strings so they don't get
+ # interpreted as octal.
+ LEADING_ZERO_RE = /\A0+(\d)/.freeze
if RUBY_VERSION >= '1.9'
# Typecast the value to an Integer
def typecast_value_integer(value)
(value.is_a?(String) && value =~ LEADING_ZERO_RE) ? Integer(value, 10) : Integer(value)
end
else
- LEADING_ZERO_REP = "\\1".freeze # :nodoc:
+ # Replacement string when replacing leading zeroes.
+ LEADING_ZERO_REP = "\\1".freeze
# Typecast the value to an Integer
def typecast_value_integer(value)
Integer(value.is_a?(String) ? value.sub(LEADING_ZERO_RE, LEADING_ZERO_REP) : value)
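The leading-zero handling above exists because <tt>Kernel#Integer</tt> treats a bare leading zero as an octal prefix; a quick pure-Ruby illustration of both branches' strategies:

```ruby
# Without an explicit base, Integer() parses leading-zero strings as octal:
Integer('010')     # => 8

# Forcing base 10 (as the 1.9 branch does) avoids that:
Integer('010', 10) # => 10

# The pre-1.9 branch instead strips the leading zeroes with the regexp:
'00012'.sub(/\A0+(\d)/, '\1') # => "12"
```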
@@ -82,12 +82,12 @@ def execute_insert(sql, opts={}, &block)
#
# DB.get(1) # SELECT 1
# # => 1
- # DB.get{version{}} # SELECT server_version()
+ # DB.get{server_version{}} # SELECT server_version()
def get(*args, &block)
dataset.get(*args, &block)
end
- # Return a hash containing index information. Hash keys are index name symbols.
+ # Return a hash containing index information for the table. Hash keys are index name symbols.
# Values are subhashes with two keys, :columns and :unique. The value of :columns
# is an array of symbols of column names. The value of :unique is true or false
# depending on if the index is unique.
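For concreteness, the hash described above has this shape (the index and column names here are hypothetical, not from the Sequel docs):

```ruby
# Hypothetical return value of DB.indexes(:items):
indexes = {
  :items_name_index     => {:columns => [:name], :unique => true},
  :items_category_index => {:columns => [:category, :name], :unique => false}
}

indexes[:items_name_index][:unique]      # => true
indexes[:items_category_index][:columns] # => [:category, :name]
```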
@@ -110,7 +110,6 @@ def run(sql, opts={})
nil
end
- # Parse the schema from the database.
# Returns the schema for the given table as an array with all members being arrays of length 2,
# the first member being the column name, and the second member being a hash of column information.
# Available options are:
@@ -129,7 +128,8 @@ def run(sql, opts={})
# it means that primary key information is unavailable, not that the column
# is not a primary key.
# :ruby_default :: The database default for the column, as a ruby object. In many cases, complex
- # database defaults cannot be parsed into ruby objects.
+ # database defaults cannot be parsed into ruby objects, in which case nil will be
+ # used as the value.
# :type :: A symbol specifying the type, such as :integer or :string.
#
# Example:
@@ -169,6 +169,7 @@ def schema(table, opts={})
# to the database.
#
# DB.table_exists?(:foo) # => false
+ # # SELECT * FROM foo LIMIT 1
def table_exists?(name)
sch, table_name = schema_and_table(name)
name = SQL::QualifiedIdentifier.new(sch, table_name) if sch
@@ -29,7 +29,7 @@ class Database
# DB.add_column :items, :name, :text, :unique => true, :null => false
# DB.add_column :items, :category, :text, :default => 'ruby'
#
- # See alter_table.
+ # See <tt>alter_table</tt>.
def add_column(table, *args)
alter_table(table) {add_column(*args)}
end
@@ -40,9 +40,9 @@ def add_column(table, *args)
# DB.add_index :posts, [:author, :title], :unique => true
#
# Options:
- # * :ignore_errors - Ignore any DatabaseErrors that are raised
+ # :ignore_errors :: Ignore any DatabaseErrors that are raised
#
- # See alter_table.
+ # See <tt>alter_table</tt>.
def add_index(table, columns, options={})
e = options[:ignore_errors]
begin
@@ -65,10 +65,10 @@ def add_index(table, columns, options={})
# end
#
# Note that +add_column+ accepts all the options available for column
- # definitions using create_table, and +add_index+ accepts all the options
+ # definitions using <tt>create_table</tt>, and +add_index+ accepts all the options
# available for index definition.
#
- # See Schema::AlterTableGenerator and the {"Migrations and Schema Modification" guide}[link:files/doc/migration_rdoc.html].
+ # See <tt>Schema::AlterTableGenerator</tt> and the {"Migrations and Schema Modification" guide}[link:files/doc/migration_rdoc.html].
def alter_table(name, generator=nil, &block)
generator ||= Schema::AlterTableGenerator.new(self, &block)
alter_table_sql_list(name, generator.operations).flatten.each {|sql| execute_ddl(sql)}
@@ -89,7 +89,7 @@ def alter_table(name, generator=nil, &block)
# :temp :: Create the table as a temporary table.
# :ignore_index_errors :: Ignore any errors when creating indexes.
#
- # See Schema::Generator and the {"Migrations and Schema Modification" guide}[link:files/doc/migration_rdoc.html].
+ # See <tt>Schema::Generator</tt> and the {"Migrations and Schema Modification" guide}[link:files/doc/migration_rdoc.html].
def create_table(name, options={}, &block)
remove_cached_schema(name)
options = {:generator=>options} if options.is_a?(Schema::Generator)
@@ -147,7 +147,7 @@ def create_view(name, source)
#
# DB.drop_column :items, :category
#
- # See alter_table.
+ # See <tt>alter_table</tt>.
def drop_column(table, *args)
alter_table(table) {drop_column(*args)}
end
@@ -157,7 +157,7 @@ def drop_column(table, *args)
# DB.drop_index :posts, :title
# DB.drop_index :posts, [:author, :title]
#
- # See alter_table.
+ # See <tt>alter_table</tt>.
def drop_index(table, columns, options={})
alter_table(table){drop_index(columns, options)}
end
@@ -206,7 +206,7 @@ def rename_table(name, new_name)
#
# DB.rename_column :items, :cntr, :counter
#
- # See alter_table.
+ # See <tt>alter_table</tt>.
def rename_column(table, *args)
alter_table(table) {rename_column(*args)}
end
@@ -215,7 +215,7 @@ def rename_column(table, *args)
#
# DB.set_column_default :items, :category, 'perl!'
#
- # See alter_table.
+ # See <tt>alter_table</tt>.
def set_column_default(table, *args)
alter_table(table) {set_column_default(*args)}
end
@@ -224,7 +224,7 @@ def set_column_default(table, *args)
#
# DB.set_column_type :items, :price, :float
#
- # See alter_table.
+ # See <tt>alter_table</tt>.
def set_column_type(table, *args)
alter_table(table) {set_column_type(*args)}
end
@@ -248,14 +248,16 @@ def import(columns, values, opts={})
# the value of the primary key for the inserted row, but that is adapter dependent.
#
# +insert+ handles a number of different argument formats:
- # * No arguments, single empty hash - Uses DEFAULT VALUES
- # * Single hash - Most common format, treats keys as columns an values as values
- # * Single array - Treats entries as values, with no columns
- # * Two arrays - Treats first array as columns, second array as values
- # * Single Dataset - Treats as an insert based on a selection from the dataset given,
- # with no columns
- # * Array and dataset - Treats as an insert based on a selection from the dataset
- # given, with the columns given by the array.
+ # no arguments or single empty hash :: Uses DEFAULT VALUES
+ # single hash :: Most common format, treats keys as columns and values as values
+ # single array :: Treats entries as values, with no columns
+ # two arrays :: Treats first array as columns, second array as values
+ # single Dataset :: Treats as an insert based on a selection from the dataset given,
+ # with no columns
+ # array and dataset :: Treats as an insert based on a selection from the dataset
+ # given, with the columns given by the array.
+ #
+ # Examples:
#
# DB[:items].insert
# # INSERT INTO items DEFAULT VALUES
@@ -310,7 +312,7 @@ def interval(column)
aggregate_dataset.get{max(column) - min(column)}
end
- # Reverses the order and then runs first. Note that this
+ # Reverses the order and then runs #first with the given arguments and block. Note that this
# will not necessarily give you the last record in the dataset,
# unless you have an unambiguous order. If there is not
# currently an order for this dataset, raises an +Error+.
@@ -400,13 +402,14 @@ def select_hash(key_column, value_column)
# Selects the column given (either as an argument or as a block), and
# returns an array of all values of that column in the dataset. If you
# give a block argument that returns an array with multiple entries,
- # the contents of the resulting array are undefined.
+ # the contents of the resulting array are undefined. Raises an Error
+ # if called with both an argument and a block.
#
# DB[:table].select_map(:id) # SELECT id FROM table
# # => [3, 5, 8, 1, ...]
#
- # DB[:table].select_map{abs(id)} # SELECT abs(id) FROM table
- # # => [3, 5, 8, 1, ...]
+ # DB[:table].select_map{id * 2} # SELECT (id * 2) FROM table
+ # # => [6, 10, 16, 2, ...]
def select_map(column=nil, &block)
ds = naked.ungraphed
ds = if column
@@ -423,8 +426,8 @@ def select_map(column=nil, &block)
# DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id
# # => [1, 2, 3, 4, ...]
#
- # DB[:table].select_order_map{abs(id)} # SELECT abs(id) FROM table ORDER BY abs(id)
- # # => [1, 2, 3, 4, ...]
+ # DB[:table].select_order_map{id * 2} # SELECT (id * 2) FROM table ORDER BY (id * 2)
+ # # => [2, 4, 6, 8, ...]
def select_order_map(column=nil, &block)
ds = naked.ungraphed
ds = if column
@@ -522,7 +525,7 @@ def truncate
# DB[:table].update(:x=>nil) # UPDATE table SET x = NULL
# # => 10
#
- # DB[:table].update(:x=>:x+1, :y=>0) # UPDATE table SET x = (x + 1), :y = 0
+ # DB[:table].update(:x=>:x+1, :y=>0) # UPDATE table SET x = (x + 1), y = 0
# # => 10
def update(values={})
execute_dui(update_sql(values))
@@ -23,7 +23,7 @@ class Dataset
# DB[:posts]
#
# Sequel::Dataset is an abstract class that is not useful by itself. Each
- # database adaptor provides a subclass of Sequel::Dataset, and has
+ # database adapter provides a subclass of Sequel::Dataset, and has
# the Database#dataset method return an instance of that subclass.
def initialize(db, opts = nil)
@db = db
@@ -83,10 +83,10 @@ def first_source_alias
# have a table, raises an error. If the table is aliased, returns the original
# table, not the alias
#
- # DB[:table].first_source_alias
+ # DB[:table].first_source_table
# # => :table
#
- # DB[:table___t].first_source_alias
+ # DB[:table___t].first_source_table
# # => :table
def first_source_table
source = @opts[:from]
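The <tt>:table___t</tt> form in the examples above is Sequel's implicit-alias convention (a triple underscore separates table name from alias). The parse can be sketched in plain Ruby (illustrative only, not Sequel's actual implementation):

```ruby
# :table___t means "table AS t"; split on the triple underscore.
# ("aliaz" is used because "alias" is a Ruby keyword.)
sym = :table___t
table, aliaz = sym.to_s.split('___', 2).map {|s| s.to_sym }

table # => :table  (the original table)
aliaz # => :t      (the alias)
```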
@@ -199,11 +199,9 @@ def bind(bind_vars={})
clone(:bind_vars=>@opts[:bind_vars] ? @opts[:bind_vars].merge(bind_vars) : bind_vars)
end
- # For the given type (:select, :insert, :update, or :delete),
- # run the sql with the bind variables
- # specified in the hash. values is a hash of passed to
- # insert or update (if one of those types is used),
- # which may contain placeholders.
+ # For the given type (:select, :first, :insert, :insert_select, :update, or :delete),
+ # run the sql with the bind variables specified in the hash. +values+ is a hash passed to
+ # insert or update (if one of those types is used), which may contain placeholders.
#
# DB[:table].filter(:id=>:$id).call(:first, :id=>1)
# # SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
@@ -212,11 +210,13 @@ def call(type, bind_variables={}, *values, &block)
prepare(type, nil, *values).call(bind_variables, &block)
end
- # Prepare an SQL statement for later execution. This returns
- # a clone of the dataset extended with PreparedStatementMethods,
- # on which you can call call with the hash of bind variables to
- # do substitution. The prepared statement is also stored in
- # the associated database. The following usage is identical:
+ # Prepare an SQL statement for later execution. Takes a type similar to #call,
+ # and the name symbol of the prepared statement.
+ # This returns a clone of the dataset extended with PreparedStatementMethods,
+ # which you can +call+ with the hash of bind variables to use.
+ # The prepared statement is also stored in
+ # the associated database, where it can be called by name.
+ # The following usage is identical:
#
# ps = DB[:table].filter(:name=>:$name).prepare(:first, :select_by_name)
#