Fix typos in comments
zachbateman authored and pacman82 committed Sep 1, 2023
1 parent 12cf2b1 commit 6bb0c98
Showing 24 changed files with 71 additions and 71 deletions.
4 changes: 2 additions & 2 deletions odbc-api/src/buffers/bin_column.rs
@@ -140,7 +140,7 @@ impl BinColumn {

/// View of the first `num_rows` values of a binary column.
///
-/// Num rows may not exceed the actually amount of valid num_rows filled be the ODBC API. The
+/// Num rows may not exceed the actual amount of valid num_rows filled by the ODBC API. The
/// column buffer does not know how many elements were in the last row group, and therefore can
/// not guarantee the accessed element to be valid and in a defined state. It also can not panic
/// on accessing an undefined element. It will panic however if `row_index` is larger or equal
@@ -234,7 +234,7 @@ impl BinColumn {
}

/// Appends a new element to the column buffer. Rebinds the buffer to increase maximum element
-/// length should the input be to large.
+/// length should the input be too large.
///
/// # Parameters
///
8 changes: 4 additions & 4 deletions odbc-api/src/buffers/columnar.rs
@@ -71,7 +71,7 @@ impl<C: ColumnBuffer> ColumnarBuffer<C> {
/// # Parameters
///
/// * `buffer_index`: Please note that the buffer index is not identical to the ODBC column
-/// index. For once it is zero based. It also indexes the buffer bound, and not the columns of
+/// index. For one it is zero based. It also indexes the buffer bound, and not the columns of
/// the output result set. This is important, because not every column needs to be bound. Some
/// columns may simply be ignored. That being said, if every column of the output is bound in
/// the buffer, in the same order in which they are enumerated in the result set, the
@@ -143,7 +143,7 @@ pub unsafe trait ColumnBuffer: CDataMut {
where
Self: 'a;

-/// Num rows may not exceed the actually amount of valid num_rows filled be the ODBC API. The
+/// Num rows may not exceed the actual amount of valid num_rows filled by the ODBC API. The
/// column buffer does not know how many elements were in the last row group, and therefore can
/// not guarantee the accessed element to be valid and in a defined state. It also can not panic
/// on accessing an undefined element.
@@ -283,7 +283,7 @@ impl TextRowSet {
/// The resulting text buffer is not in any way tied to the cursor, other than that its buffer
/// sizes are tailor-fitted to the result set the cursor is iterating over.
///
-/// This method performs faliable buffer allocations, if no upper bound is set, so you may see
+/// This method performs fallible buffer allocations, if no upper bound is set, so you may see
/// a speedup by setting an upper bound using `max_str_limit`.
///
///
@@ -298,7 +298,7 @@ impl TextRowSet {
/// sometimes drivers are just not that good at it. This argument allows you to specify an
/// upper bound for the length of character data. Any size reported by the driver is capped to
/// this value. In case the database returns a size of 0 (which some systems use to indicate
-/// arbitrariely large values, the element size is set to upper bound.
+/// arbitrarily large values), the element size is set to the upper bound.
pub fn for_cursor(
batch_size: usize,
cursor: &mut impl ResultSetMetadata,
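The `for_cursor` docs above describe how driver-reported element sizes are capped. A minimal sketch of the typical pattern, assuming the crate's public API at the time of this commit (query, table, and sizes are placeholders):

```rust
use odbc_api::{buffers::TextRowSet, Connection, Cursor, Error};

/// Fetch a result set batch-wise as text. Query and sizes are placeholders.
fn print_result_set(conn: &Connection<'_>) -> Result<(), Error> {
    if let Some(mut cursor) = conn.execute("SELECT title, year FROM Movies", ())? {
        // Buffers for up to 1000 rows per batch; element sizes reported by
        // the driver are capped at 4096 bytes.
        let mut buffer = TextRowSet::for_cursor(1000, &mut cursor, Some(4096))?;
        let mut block_cursor = cursor.bind_buffer(&mut buffer)?;
        while let Some(batch) = block_cursor.fetch()? {
            for row in 0..batch.num_rows() {
                for col in 0..batch.num_cols() {
                    // `at` yields the raw bytes of a cell, or `None` for NULL.
                    let cell = batch.at(col, row).unwrap_or(&[]);
                    print!("{}\t", String::from_utf8_lossy(cell));
                }
                println!();
            }
        }
    }
    Ok(())
}
```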
4 changes: 2 additions & 2 deletions odbc-api/src/buffers/text_column.rs
@@ -43,7 +43,7 @@ impl<C> TextColumn<C> {
/// This will allocate a value and indicator buffer for `batch_size` elements. Each value may
/// have a maximum length of `max_str_len`. This implies that `max_str_len` is increased by
/// one in order to make space for the null terminating zero at the end of strings. Uses a
-/// fallibale allocation for creating the buffer. In applications often the `max_str_len` size
+/// fallible allocation for creating the buffer. In applications often the `max_str_len` size
/// of the buffer might be directly inspired by the maximum size of the type, as reported by
/// ODBC, which might get exceedingly large for types like VARCHAR(MAX).
pub fn try_new(batch_size: usize, max_str_len: usize) -> Result<Self, TooLargeBufferSize>
@@ -461,7 +461,7 @@ where
/// Ensures that the buffer is large enough to hold elements of `element_length`. Does nothing
/// if the buffer is already large enough. Otherwise it will reallocate and rebind the buffer.
/// The first `num_rows_to_copy_elements` will be copied from the old value buffer to the new
-/// one. This makes this an extremly expensive operation.
+/// one. This makes this an extremely expensive operation.
pub fn ensure_max_element_length(
&mut self,
element_length: usize,
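A minimal sketch of the fallible allocation `try_new` documents above, assuming `TextColumn` is used directly from `odbc_api::buffers` (sizes are placeholders, and the error type's `Display` output is assumed):

```rust
use odbc_api::buffers::TextColumn;

/// Allocate a text column for 1000 elements of up to 4096 bytes each,
/// handling allocation failure instead of aborting the process.
fn alloc_column() -> Option<TextColumn<u8>> {
    match TextColumn::try_new(1000, 4096) {
        Ok(column) => Some(column),
        Err(too_large) => {
            // The error reports the buffer size that could not be allocated.
            eprintln!("could not allocate text column: {too_large}");
            None
        }
    }
}
```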
8 changes: 4 additions & 4 deletions odbc-api/src/columnar_bulk_inserter.rs
@@ -7,7 +7,7 @@ use crate::{

/// Can be used to execute a statement with bulk array parameters. Contrary to its name, any statement
/// with parameters can be executed, not only `INSERT`; however, inserting large amounts of data in
-/// batches is the primary intended usecase.
+/// batches is the primary intended use case.
///
/// Binding new buffers is quite expensive in ODBC, so the parameter buffers are reused for each
/// batch (the pointers bound to the statement stay valid). We therefore copy each batch of data into the
@@ -104,7 +104,7 @@ where
/// will just hold the value previously assigned to them. Therefore, if extending the number of
/// valid rows, users should take care to assign values to these rows. However, even if not
/// assigned, it is always guaranteed that every cell is valid for insertion and will not cause
-/// out of bounds access down in the ODBC driver. Therfore this method is safe. You can set
+/// out of bounds access down in the ODBC driver. Therefore this method is safe. You can set
/// the number of valid rows before or after filling values into the buffer, but you must do so
/// before executing the query.
pub fn set_num_rows(&mut self, num_rows: usize) {
@@ -122,15 +122,15 @@
/// # Parameters
///
/// * `buffer_index`: Please note that the buffer index is not identical to the ODBC column
-/// index. For once it is zero based. It also indexes the buffer bound, and not the columns of
+/// index. For one it is zero based. It also indexes the buffer bound, and not the columns of
/// the output result set. This is important, because not every column needs to be bound. Some
/// columns may simply be ignored. That being said, if every column of the output is bound in
/// the buffer, in the same order in which they are enumerated in the result set, the
/// relationship between column index and buffer index is `buffer_index = column_index - 1`.
///
/// # Example
///
-/// This method is intend to be called if using [`ColumnarBulkInserter`] for column wise bulk
+/// This method is intended to be called if using [`ColumnarBulkInserter`] for column wise bulk
/// inserts.
///
/// ```no_run
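A minimal sketch of the column-wise bulk insert that `set_num_rows` and `column_mut` document above, modeled on the crate's documented pattern (table, column names, and buffer sizes are placeholders):

```rust
use odbc_api::{buffers::BufferDesc, Connection, Error};

/// Column-wise bulk insert. Table, column names, and lengths are placeholders.
fn insert_birth_years(conn: &Connection<'_>, names: &[&str], years: &[i16]) -> Result<(), Error> {
    let prepared = conn.prepare("INSERT INTO Birthdays (name, year) VALUES (?, ?)")?;
    // Buffer index 0 binds to the first `?`, index 1 to the second.
    let mut inserter = prepared.into_column_inserter(
        names.len(),
        [
            BufferDesc::Text { max_str_len: 255 },
            BufferDesc::I16 { nullable: false },
        ],
    )?;
    // Every cell is in a defined state, but we must declare how many rows are
    // valid before executing.
    inserter.set_num_rows(names.len());
    let mut name_col = inserter
        .column_mut(0)
        .as_text_view()
        .expect("buffer 0 is a text column");
    for (index, name) in names.iter().enumerate() {
        name_col.set_cell(index, Some(name.as_bytes()));
    }
    inserter
        .column_mut(1)
        .as_slice::<i16>()
        .expect("buffer 1 is an i16 column")
        .copy_from_slice(years);
    inserter.execute()?;
    Ok(())
}
```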
10 changes: 5 additions & 5 deletions odbc-api/src/connection.rs
@@ -74,9 +74,9 @@ impl<'c> Connection<'c> {
/// wrapper allows you to call ODBC functions on the handle, but doesn't care if the connection
/// is in the right state.
///
-/// You should not have a need to call this method if your usecase is covered by this library,
+/// You should not have a need to call this method if your use case is covered by this library,
/// but, in case it is not, this may help you to break out of the type structure which might be
-/// to rigid for you, while simultaniously abondoning its safeguards.
+/// too rigid for you, while simultaneously abandoning its safeguards.
pub fn into_handle(self) -> handles::Connection<'c> {
unsafe { handles::Connection::new(ManuallyDrop::new(self).connection.as_sys()) }
}
@@ -159,7 +159,7 @@ impl<'c> Connection<'c> {

/// In some use cases, where you only execute a single statement or the time to open a
/// connection does not matter, users may wish to not keep a connection alive separately
-/// from the cursor, in order to have an easier time withe the borrow checker.
+/// from the cursor, in order to have an easier time with the borrow checker.
///
/// ```no_run
/// use lazy_static::lazy_static;
@@ -209,7 +209,7 @@ impl<'c> Connection<'c> {
/// Prepares an SQL statement. This is recommended for repeated execution of similar queries.
///
/// Should your use case require you to execute the same query several times with different
-/// parameters, prepared queries are the way to go. These gives the database a chance to cache
+/// parameters, prepared queries are the way to go. These give the database a chance to cache
/// the access plan associated with your SQL statement. It is not unlike compiling your program
/// once and executing it several times.
///
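A minimal sketch of the recommended pattern, assuming the crate's `prepare`/`execute` API (query and parameter values are placeholders):

```rust
use odbc_api::{Connection, Error};

/// One prepared statement, many executions; the database can reuse the
/// cached access plan. Query is a placeholder.
fn run_for_each_year(conn: &Connection<'_>, years: &[i32]) -> Result<(), Error> {
    let mut prepared = conn.prepare("SELECT title FROM Movies WHERE year = ?")?;
    for &year in years {
        // Only the parameter changes between executions.
        if let Some(_cursor) = prepared.execute(&year)? {
            // ... consume the cursor like any other ...
        }
    }
    Ok(())
}
```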
@@ -777,7 +777,7 @@ pub fn escape_attribute_value(unescaped: &str) -> Cow<'_, str> {
// Search the string for a semicolon (';'). If we do not find any, there is nothing to do and
// we can work without an extra allocation.
//
-// * We escape ';' because it severs as a separator between key=value pairs
+// * We escape ';' because it serves as a separator between key=value pairs
// * We escape '+' because passwords with `+` must be escaped on PostgreSQL for some reason.
if unescaped.contains(&[';', '+'][..]) {
// Surround the string with curly braces ('{','}') and escape every closing curly brace by
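A minimal sketch of the escaping described above; driver name and credentials are placeholders:

```rust
use odbc_api::escape_attribute_value;

fn main() {
    // ';' separates key=value pairs, and '+' trips up e.g. PostgreSQL, so the
    // password is escaped before being spliced into the connection string.
    let password = "secret;with+chars";
    let connection_string = format!(
        "Driver={{PostgreSQL UNICODE}};Server=localhost;UID=app;PWD={};",
        escape_attribute_value(password)
    );
    println!("{connection_string}");
}
```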
32 changes: 16 additions & 16 deletions odbc-api/src/cursor.rs
@@ -44,7 +44,7 @@ pub trait Cursor: ResultSetMetadata {
/// instead, for good performance.
///
/// While this method is very convenient due to the fact that the application does not have to
-/// declare and bind specific buffers it is also in many situations extremely slow. Concrete
+/// declare and bind specific buffers, it is also in many situations extremely slow. Concrete
/// performance depends on the ODBC driver in question, but it is likely it performs a roundtrip
/// to the datasource for each individual row. It is also likely an extra conversion is
/// performed when requesting individual fields, since the C buffer type is not known to the
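A minimal sketch of the row-by-row access pattern this comment warns about, assuming the crate's `next_row`/`get_text` API (query and column index are placeholders):

```rust
use odbc_api::{Connection, Cursor, Error};

/// Row-by-row fetching: convenient, but often one roundtrip per row.
/// Query and column index are placeholders.
fn print_titles(conn: &Connection<'_>) -> Result<(), Error> {
    if let Some(mut cursor) = conn.execute("SELECT title FROM Movies", ())? {
        // Reuse one buffer across rows to avoid repeated allocations.
        let mut buf = Vec::new();
        while let Some(mut row) = cursor.next_row()? {
            buf.clear();
            // Column indices are one-based; `false` indicates NULL.
            if row.get_text(1, &mut buf)? {
                println!("{}", String::from_utf8_lossy(&buf));
            }
        }
    }
    Ok(())
}
```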
@@ -148,8 +148,8 @@ impl<'s> CursorRow<'s> {
// chance that we get it done with one allocation. The buffer size being 0 we need at
// least 1 anyway. If the capacity is not `0` we'll leave the buffer size untouched as
// we do not want to prevent users from providing better guesses based on domain
-// knowledege.
-// This also implicitly makes sure that we can at least hold one terminting zero.
+// knowledge.
+// This also implicitly makes sure that we can at least hold one terminating zero.
buf.reserve(256);
}
// Utilize all of the allocated buffer.
@@ -161,7 +161,7 @@
let mut remaining_length_known = false;
// We repeatedly fetch data and add it to the buffer. The buffer length is therefore the
// accumulated value size. The target always points to the last window in buf which is going
-// to contain the **next** part of the data, thereas buf contains the entire accumulated
+// to contain the **next** part of the data, whereas buf contains the entire accumulated
// value so far.
let mut target =
VarCell::<&mut [u8], K>::from_buffer(buf.as_mut_slice(), Indicator::NoTotal);
@@ -297,7 +297,7 @@ where
{
/// Users of this library are encouraged not to call this constructor directly but rather invoke
/// [`crate::Connection::execute`] or [`crate::Prepared::execute`] to get a cursor and utilize
-/// it using the [`crate::Cursor`] trait. This method is pubilc so users with an understanding
+/// it using the [`crate::Cursor`] trait. This method is public so users with an understanding
/// of the raw ODBC C-API have a way to create a cursor, after they left the safety rails of the
/// Rust type system, in order to implement a use case not yet covered by the safe abstractions
/// within this crate.
@@ -354,7 +354,7 @@ pub unsafe trait RowSetBuffer {
///
/// # Safety
///
-/// It's the implementations responsibility to ensure that all bound buffers are valid until
+/// It's the implementation's responsibility to ensure that all bound buffers are valid until
/// unbound or the statement handle is deleted.
unsafe fn bind_colmuns_to_cursor(&mut self, cursor: StatementRef<'_>) -> Result<(), Error>;

@@ -384,19 +384,19 @@ unsafe impl<T: RowSetBuffer> RowSetBuffer for &mut T {
}
}

-/// In order to safe on network overhead, it is recommended to use block cursors instead of fetching
+/// In order to save on network overhead, it is recommended to use block cursors instead of fetching
/// values individually. This can greatly reduce the time applications need to fetch data. You can
/// create a block cursor by binding preallocated memory to a cursor using [`Cursor::bind_buffer`].
-/// A block cursor safes on a lot of IO overhead by fetching an entire set of rows (called *rowset*)
-/// at once into the buffer bound to it. Reusing the same buffer for each rowset also safes on
+/// A block cursor saves on a lot of IO overhead by fetching an entire set of rows (called *rowset*)
+/// at once into the buffer bound to it. Reusing the same buffer for each rowset also saves on
/// allocations. A challenge with using block cursors might be database schemas with columns where
/// individual fields can be very large. In these cases developers can choose to:
///
/// 1. Reserve less memory for each individual field than the schema indicates and decide on a
-/// sensible upper bound themselfes. This risks truncation of values though, if they are larger
+/// sensible upper bound themselves. This risks truncation of values though, if they are larger
/// than the upper bound. Using [`BlockCursor::fetch_with_truncation_check`] instead of
-/// [`Cursor::next_row`] your appliacation can detect these truncations. This is usually the best
-/// choice, since individual fields in a table rarerly actuallly take up several GiB of memory.
+/// [`Cursor::next_row`] your application can detect these truncations. This is usually the best
+/// choice, since individual fields in a table rarely actually take up several GiB of memory.
/// 2. Calculate the number of rows dynamically based on the maximum expected row size.
/// [`crate::buffers::BufferDesc::bytes_per_row`], can be helpful with this task.
/// 3. Not use block cursors and fetch rows slowly with high IO overhead. Calling
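A minimal sketch of option 1 above, surfacing truncation as an error via `fetch_with_truncation_check` (query and buffer sizes are placeholders):

```rust
use odbc_api::{buffers::TextRowSet, Connection, Cursor, Error};

/// Fetch batches, turning truncation into an error rather than silently
/// shortening values. Query and sizes are placeholders.
fn fetch_strict(conn: &Connection<'_>) -> Result<(), Error> {
    if let Some(mut cursor) = conn.execute("SELECT comment FROM Feedback", ())? {
        // Deliberately small upper bound of 256 bytes per text element.
        let mut buffer = TextRowSet::for_cursor(500, &mut cursor, Some(256))?;
        let mut block_cursor = cursor.bind_buffer(&mut buffer)?;
        // `true` requests an error if any value in the batch was truncated.
        while let Some(batch) = block_cursor.fetch_with_truncation_check(true)? {
            println!("fetched {} rows", batch.num_rows());
        }
    }
    Ok(())
}
```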
@@ -633,7 +633,7 @@ where
/// `None` if the result set is empty and all row sets have been extracted. `Some` with a
/// reference to the internal buffer otherwise.
///
-/// Call this method to find out wether there are any truncated values in the batch, without
+/// Call this method to find out whether there are any truncated values in the batch, without
/// inspecting all its rows and columns.
pub async fn fetch_with_truncation_check(
&mut self,
@@ -693,9 +693,9 @@ fn error_handling_for_fetch(
let has_row = result
.on_success(|| true)
.into_result_with(&stmt.as_stmt_ref(), Some(false), None)
-// Oracles ODBC driver does not support 64Bit integers. Furthermore, it does not
-// tell the it to the user than binding parameters, but rather now then we fetch
-// results. The error code retruned is `HY004` rather then `HY003` which should
+// Oracle's ODBC driver does not support 64Bit integers. Furthermore, it does not
+// tell the user when binding parameters, but only once we fetch
+// results. The error code returned is `HY004` rather than `HY003`, which should
// be used to indicate invalid buffer types.
.provide_context_for_diagnostic(|record, function| {
if record.state == State::INVALID_SQL_DATA_TYPE {
2 changes: 1 addition & 1 deletion odbc-api/src/environment.rs
@@ -440,7 +440,7 @@ impl Environment {
}

/// Get information about available drivers. Only 32 or 64 Bit drivers will be listed, depending
-/// on wether you are building a 32 Bit or 64 Bit application.
+/// on whether you are building a 32 Bit or 64 Bit application.
///
/// # Example
///
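A minimal sketch of listing drivers, assuming `drivers` takes `&self` and `DriverInfo` exposes `description` and `attributes` fields:

```rust
use odbc_api::{Environment, Error};

/// List the ODBC drivers visible to this build (32 or 64 bit).
fn list_drivers() -> Result<(), Error> {
    let env = Environment::new()?;
    for driver in env.drivers()? {
        println!("{}", driver.description);
        for (key, value) in &driver.attributes {
            println!("    {key}={value}");
        }
    }
    Ok(())
}
```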
2 changes: 1 addition & 1 deletion odbc-api/src/error.rs
@@ -72,7 +72,7 @@ pub enum Error {
#[error("Sending data to the database at statement execution time failed. IO error:\n{0}")]
FailedReadingInput(io::Error),
/// Driver returned "invalid attribute" then setting the row array size. Most likely the array
/// size is to large. Instead of returing "option value changed (SQLSTATE 01S02)" like suggested
/// size is too large. Instead of returing "option value changed (SQLSTATE 01S02)" as suggested
/// in <https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlsetstmtattr-function> the
/// driver returned an error instead.
#[error(
Expand Down
2 changes: 1 addition & 1 deletion odbc-api/src/handles/column_description.rs
@@ -50,7 +50,7 @@ impl ColumnDescription {
/// In production, an 'empty' [`ColumnDescription`] is expected to be constructed via the
/// [`Default`] trait. It is then filled using [`crate::ResultSetMetadata::describe_col`]. When
/// writing test cases however it might be desirable to directly instantiate a
-/// [`ColumnDescription`]. This constructor enabels you to do that, without caring which type
+/// [`ColumnDescription`]. This constructor enables you to do that, without caring which type
/// `SqlChar` resolves to.
pub fn new(name: &str, data_type: DataType, nullability: Nullability) -> Self {
#[cfg(feature = "narrow")]
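A minimal test-style sketch of this constructor; note that the exact fields of `DataType::Varchar` differ between odbc-api versions (newer releases use an optional length), so this is illustrative only:

```rust
use odbc_api::{ColumnDescription, Connection, DataType, Error, Nullability, ResultSetMetadata};

/// Compare driver-reported metadata against a hand-built expectation.
/// Table and column are placeholders.
fn assert_title_column(conn: &Connection<'_>) -> Result<(), Error> {
    let mut cursor = conn.execute("SELECT title FROM Movies", ())?.unwrap();
    let mut actual = ColumnDescription::default();
    // ODBC column numbers are one-based.
    cursor.describe_col(1, &mut actual)?;
    let expected = ColumnDescription::new(
        "title",
        DataType::Varchar { length: 255 },
        Nullability::Nullable,
    );
    assert_eq!(expected, actual);
    Ok(())
}
```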
2 changes: 1 addition & 1 deletion odbc-api/src/handles/data_type.rs
@@ -57,7 +57,7 @@ pub enum DataType {
Double,
/// `Varchar(n)`. Variable length character string.
Varchar {
-/// Maximum length of the character string (excluding terminating zero). Wether this length
+/// Maximum length of the character string (excluding terminating zero). Whether this length
/// is to be interpreted as bytes or codepoints is ambiguous and depends on the datasource.
///
/// E.g. for Microsoft SQL Server this is the binary length, whereas for MariaDB this
2 changes: 1 addition & 1 deletion odbc-api/src/handles/diagnostics.rs
@@ -8,7 +8,7 @@ use super::{
use odbc_sys::{SqlReturn, SQLSTATE_SIZE};
use std::fmt;

-// Starting with odbc 5 we may be able to specify utf8 encoding. until then, we may need to fall
+// Starting with odbc 5 we may be able to specify utf8 encoding. Until then, we may need to fall
// back on the 'W' wide function calls.
#[cfg(not(feature = "narrow"))]
use odbc_sys::SQLGetDiagRecW as sql_get_diag_rec;