docs: rewrite to present tense #11713

Closed · wants to merge 3 commits
2 changes: 1 addition & 1 deletion docs/CHECKSRC.md
@@ -41,7 +41,7 @@ warnings are:
more appropriate `char *name` style. The asterisk should sit right next to
the name without a space in between.

- `BADCOMMAND`: There's a bad `checksrc` instruction in the code. See the
- `BADCOMMAND`: There is a bad `checksrc` instruction in the code. See the
**Ignore certain warnings** section below for details.

- `BANNEDFUNC`: A banned function was used. The functions sprintf, vsprintf,
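For orientation, a `checksrc` control instruction is a source comment of roughly this shape; a malformed one is what triggers `BADCOMMAND`. The warning name and line count below are purely illustrative:

```c
/* !checksrc! disable LONGLINE 2 */
```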
2 changes: 1 addition & 1 deletion docs/CODE_STYLE.md
@@ -60,7 +60,7 @@ Source code in curl may never be wider than 79 columns and there are two
reasons for maintaining this even in the modern era of large and high
resolution screens:

1. Narrower columns are easier to read than wide ones. There's a reason
1. Narrower columns are easier to read than wide ones. There is a reason
newspapers have used columns for decades or centuries.

2. Narrower columns allow developers to easier show multiple pieces of code
14 changes: 7 additions & 7 deletions docs/CONTRIBUTE.md
@@ -206,11 +206,11 @@ A short guide to how to write git commit messages in the curl project.
[Bug: URL to the source of the report or more related discussion; use Fixes
for GitHub issues instead when that is appropriate]
[Approved-by: John Doe - credit someone who approved the PR; if you are
committing this for someone else using --author=... you don't need this
committing this for someone else using --author=... you do not need this
as you are implicitly approving it by committing]
[Authored-by: John Doe - credit the original author of the code; only use
this if you can't use "git commit --author=..."]
[Signed-off-by: John Doe - we don't use this, but don't bother removing it]
this if you cannot use "git commit --author=..."]
[Signed-off-by: John Doe - we do not use this, but do not bother removing it]
[whatever-else-by: credit all helpers, finders, doers; try to use one of
the following keywords if at all possible, for consistency:
Acked-by:, Assisted-by:, Co-authored-by:, Found-by:, Reported-by:,
@@ -232,17 +232,17 @@ The first line is a succinct description of the change:
- no period (.) at the end

The `[area]` in the first line can be `http2`, `cookies`, `openssl` or
similar. There's no fixed list to select from but using the same "area" as
similar. There is no fixed list to select from but using the same "area" as
other related changes could make sense.

Do not forget to use commit --author=... if you commit someone else's work, and
make sure that you have your own user and email setup correctly in git before
you commit.

Add whichever header lines as appropriate, with one line per person if more
than one person was involved. There's no need to credit yourself unless you are
using --author=... which hides your identity. Don't include people's e-mail
addresses in headers to avoid spam, unless they're already public from a
than one person was involved. There is no need to credit yourself unless you are
using --author=... which hides your identity. Do not include people's e-mail
addresses in headers to avoid spam, unless they are already public from a
previous commit; saying `{userid} on github` is OK.

### Write Access to git Repository
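To make the conventions above concrete, here is a made-up commit message following this template; the area, description, names and issue numbers are all invented for illustration:

```
http2: fix handling of reset streams

Check the stream state before reusing it for a new request.

Reported-by: Jane Doe
Assisted-by: John Doe
Fixes #12345
Closes #12350
```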
2 changes: 1 addition & 1 deletion docs/DYNBUF.md
@@ -27,7 +27,7 @@ void Curl_dyn_free(struct dynbuf *s);
```

Free the associated memory and clean up. After a free, the `dynbuf` struct can
be re-used to start appending new data to.
be reused to start appending new data to.

## `Curl_dyn_addn`

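As a minimal sketch of the lifecycle described above, using the `Curl_dyn_*` functions that DYNBUF.md documents (the size limit and strings are arbitrary; real internal code checks the `CURLcode` return values, which this sketch ignores):

```c
#include "dynbuf.h"  /* curl-internal header providing struct dynbuf */

static void dynbuf_sketch(void)
{
  struct dynbuf buf;
  Curl_dyn_init(&buf, 1024);        /* 1024 bytes = arbitrary maximum size */
  Curl_dyn_addn(&buf, "hello", 5);  /* append 5 bytes */
  /* read the contents via Curl_dyn_ptr(&buf) and Curl_dyn_len(&buf) */
  Curl_dyn_free(&buf);              /* free; the struct remains usable */
  Curl_dyn_addn(&buf, "again", 5);  /* start appending anew after the free */
  Curl_dyn_free(&buf);
}
```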
2 changes: 1 addition & 1 deletion docs/EARLY-RELEASE.md
@@ -49,7 +49,7 @@ the three ones above are all 'no'.
- Is there a (decent) workaround?
- Is it a regression? Is the bug introduced in this release?
- Can the bug be fixed "easily" by applying a patch?
- Does the bug break the build? Most users don't build curl themselves.
- Does the bug break the build? Most users do not build curl themselves.
- How long is it until the already scheduled next release?
- Can affected users safely rather revert to a former release until the next
scheduled release?
2 changes: 1 addition & 1 deletion docs/FAQ
@@ -1498,7 +1498,7 @@ FAQ
unknown to me).

After a transfer, you just set new options in the handle and make another
transfer. This will make libcurl re-use the same connection if it can.
transfer. This will make libcurl reuse the same connection if it can.

7.4 Does PHP/CURL have dependencies?

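A minimal sketch of that pattern in C (URLs are placeholders and error checking is omitted): keeping the same easy handle alive across transfers is what gives libcurl the chance to reuse the connection.

```c
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/one");
    curl_easy_perform(curl);   /* first transfer */

    /* set new options on the same handle; its connection cache lets the
       next transfer reuse the connection when possible */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/two");
    curl_easy_perform(curl);   /* second transfer */

    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}
```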
2 changes: 1 addition & 1 deletion docs/HELP-US.md
@@ -38,7 +38,7 @@ even maybe not a terribly experienced developer, here's our advice:

In the issue tracker we occasionally mark bugs with [help
wanted](https://github.com/curl/curl/labels/help%20wanted), as a sign that the
bug is acknowledged to exist and that there's nobody known to work on this
bug is acknowledged to exist and that there is nobody known to work on this
issue for the moment. Those are bugs that are fine to "grab" and provide a
pull request for. The complexity level of these will of course vary, so pick
one that piques your interest.
2 changes: 1 addition & 1 deletion docs/HISTORY.md
@@ -161,7 +161,7 @@ Starting with 7.10, curl verifies SSL server certificates by default.
January: Started working on the distributed curl tests. The autobuilds.

February: the curl site averages at 20000 visits weekly. At any given moment,
there's an average of 3 people browsing the website.
there is an average of 3 people browsing the website.

Multiple new authentication schemes are supported: Digest (May), NTLM (June)
and Negotiate (June).
4 changes: 2 additions & 2 deletions docs/HTTP2.md
@@ -55,14 +55,14 @@ connection.

To take advantage of multiplexing, you need to use the multi interface and set
`CURLMOPT_PIPELINING` to `CURLPIPE_MULTIPLEX`. With that bit set, libcurl will
attempt to re-use existing HTTP/2 connections and just add a new stream over
attempt to reuse existing HTTP/2 connections and just add a new stream over
that when doing subsequent parallel requests.

While libcurl sets up a connection to an HTTP server there is a period during
which it does not know if it can pipeline or do multiplexing and if you add
new transfers in that period, libcurl will default to start new connections
for those transfers. With the new option `CURLOPT_PIPEWAIT` (added in 7.43.0),
you can ask that a transfer should rather wait and see in case there's a
you can ask that a transfer should rather wait and see in case there is a
connection for the same host in progress that might end up being possible to
multiplex on. It favors keeping the number of connections low to the cost of
slightly longer time to first byte transferred.
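A rough sketch of those two options in use (the URL is a placeholder and error handling is omitted; the caller is expected to drive the transfer with the usual multi-interface loop):

```c
#include <curl/curl.h>

static CURLM *setup_multiplexing(void)
{
  CURLM *multi = curl_multi_init();
  /* let new transfers become extra streams on existing HTTP/2 connections */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
  /* prefer waiting for a connection that may be multiplexed over
     immediately opening a new one */
  curl_easy_setopt(easy, CURLOPT_PIPEWAIT, 1L);

  curl_multi_add_handle(multi, easy);
  return multi; /* drive with curl_multi_perform()/curl_multi_poll() */
}
```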
2 changes: 1 addition & 1 deletion docs/HTTP3.md
@@ -340,7 +340,7 @@ should be either in your PATH or your current directory.
Create a `Caddyfile` with the following content:
~~~
localhost:7443 {
respond "Hello, world! You're using {http.request.proto}"
respond "Hello, world! you are using {http.request.proto}"
}
~~~

12 changes: 6 additions & 6 deletions docs/KNOWN_BUGS
@@ -101,8 +101,8 @@ problems may have been fixed or changed somewhat since this was written.
15.13 CMake build with MIT Kerberos does not work

16. aws-sigv4
16.1 aws-sigv4 doesn't sign requests with * correctly
16.2 aws-sigv4 doesn't sign requests with valueless queries correctly
16.1 aws-sigv4 does not sign requests with * correctly
16.2 aws-sigv4 does not sign requests with valueless queries correctly
16.3 aws-sigv4 is missing the amz-content-sha256 header
16.4 aws-sigv4 does not sort query string parameters before signing
16.5 aws-sigv4 does not sign requests with empty URL query correctly
@@ -516,8 +516,8 @@ problems may have been fixed or changed somewhat since this was written.

13.2 Trying local ports fails on Windows

This makes '--local-port [range]' to not work since curl can't properly
detect if a port is already in use, so it'll try the first port, use that and
This makes '--local-port [range]' to not work since curl cannot properly
detect if a port is already in use, so it will try the first port, use that and
then subsequently fail anyway if that was actually in use.

https://github.com/curl/curl/issues/8112
@@ -581,11 +581,11 @@ problems may have been fixed or changed somewhat since this was written.

16. aws-sigv4

16.1 aws-sigv4 doesn't sign requests with * correctly
16.1 aws-sigv4 does not sign requests with * correctly

https://github.com/curl/curl/issues/7559

16.2 aws-sigv4 doesn't sign requests with valueless queries correctly
16.2 aws-sigv4 does not sign requests with valueless queries correctly

https://github.com/curl/curl/issues/8107

2 changes: 1 addition & 1 deletion docs/MAIL-ETIQUETTE
@@ -120,7 +120,7 @@ MAIL ETIQUETTE
your email address and password and press the unsubscribe button.

Also, the instructions to unsubscribe are included in the headers of every
mail that is sent out to all curl related mailing lists and there's a footer
mail that is sent out to all curl related mailing lists and there is a footer
in each mail that links to the "admin" page on which you can unsubscribe and
change other options.

2 changes: 1 addition & 1 deletion docs/THANKS-filter
@@ -26,7 +26,7 @@
# appropriately in THANKS. This list contains variations of their names and
# their "canonical" name. This file is used for scripting purposes to avoid
# duplicate entries and will not be included in release tarballs.
# When removing dupes that aren't identical names from THANKS, add a line
# When removing dupes that are not identical names from THANKS, add a line
# here!
#
# Used-by: contributors.sh
20 changes: 10 additions & 10 deletions docs/TODO
@@ -685,7 +685,7 @@

6.4 exit immediately upon connection if stdin is /dev/null

If it did, curl could be used to probe if there's an server there listening
If it did, curl could be used to probe if there is an server there listening
on a specific port. That is, the following command would exit immediately
after the connection is established with exit code 0:

@@ -819,9 +819,9 @@
request as well, when they should only be necessary once per SSL context (or
once per handle)". The major improvement we can rather easily do is to make
sure we do not create and kill a new SSL "context" for every request, but
instead make one for every connection and re-use that SSL context in the same
style connections are re-used. It will make us use slightly more memory but
it will libcurl do less creations and deletions of SSL contexts.
instead make one for every connection and reuse that SSL context in the same
style connections are reused. It will make us use slightly more memory but it
will libcurl do less creations and deletions of SSL contexts.

Technically, the "caching" is probably best implemented by getting added to
the share interface so that easy handles who want to and can reuse the
@@ -841,7 +841,7 @@

OpenSSL supports a callback for customised verification of the peer
certificate, but this does not seem to be exposed in the libcurl APIs. Could
it be? There's so much that could be done if it were.
it be? There is so much that could be done if it were.

13.7 Less memory massaging with Schannel

@@ -1101,7 +1101,7 @@
18.13 Ratelimit or wait between serial requests

Consider a command line option that can make curl do multiple serial requests
slow, potentially with a (random) wait between transfers. There's also a
slow, potentially with a (random) wait between transfers. There is also a
proposed set of standard HTTP headers to let servers let the client adapt to
its rate limits:
https://www.ietf.org/id/draft-polli-ratelimit-headers-02.html
@@ -1139,7 +1139,7 @@
URL, the file name is not extracted and used from the newly redirected-to URL
even if the new URL may have a much more sensible file name.

This is clearly documented and helps for security since there's no surprise
This is clearly documented and helps for security since there is no surprise
to users which file name that might get overwritten. But maybe a new option
could allow for this or maybe -J should imply such a treatment as well as -J
already allows for the server to decide what file name to use so it already
@@ -1341,9 +1341,9 @@

20.5 Add support for concurrent connections

Tests 836, 882 and 938 were designed to verify that separate connections
are not used when using different login credentials in protocols that
should not re-use a connection under such circumstances.
Tests 836, 882 and 938 were designed to verify that separate connections are
not used when using different login credentials in protocols that should not
reuse a connection under such circumstances.

Unfortunately, ftpserver.pl does not appear to support multiple concurrent
connections. The read while() loop seems to loop until it receives a
4 changes: 2 additions & 2 deletions docs/URL-SYNTAX.md
@@ -197,7 +197,7 @@ of Windows.

## Port number

If there's a colon after the hostname, that should be followed by the port
If there is a colon after the hostname, that should be followed by the port
number to use. 1 - 65535. curl also supports a blank port number field - but
only if the URL starts with a scheme.

@@ -379,7 +379,7 @@ The default smtp port is 25. Some servers use port 587 as an alternative.

## RTMP

There's no official URL spec for RTMP so libcurl uses the URL syntax supported
There is no official URL spec for RTMP so libcurl uses the URL syntax supported
by the underlying librtmp library. It has a syntax where it wants a
traditional URL, followed by a space and a series of space-separated
`name=value` pairs.
2 changes: 1 addition & 1 deletion docs/cmdline-opts/form-string.d
@@ -13,5 +13,5 @@ Multi: append
Similar to --form except that the value string for the named parameter is used
literally. Leading '@' and '<' characters, and the ';type=' string in
the value have no special meaning. Use this in preference to --form if
there's any possibility that the string value may accidentally trigger the
there is any possibility that the string value may accidentally trigger the
'@' or '<' features of --form.
7 changes: 3 additions & 4 deletions docs/cmdline-opts/ftp-skip-pasv-ip.d
@@ -9,10 +9,9 @@ Category: ftp
Example: --ftp-skip-pasv-ip ftp://example.com/
Multi: boolean
---
Tell curl to not use the IP address the server suggests in its response
to curl's PASV command when curl connects the data connection. Instead curl
will re-use the same IP address it already uses for the control
connection.
Tell curl to not use the IP address the server suggests in its response to
curl's PASV command when curl connects the data connection. Instead curl will
reuse the same IP address it already uses for the control connection.

Since curl 7.74.0 this option is enabled by default.

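For reference, the corresponding libcurl behavior is requested with the `CURLOPT_FTP_SKIP_PASV_IP` option; a one-line fragment, assuming `curl` is an already created easy handle:

```c
curl_easy_setopt(curl, CURLOPT_FTP_SKIP_PASV_IP, 1L);
```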
4 changes: 2 additions & 2 deletions docs/cmdline-opts/gen.pl
@@ -33,7 +33,7 @@

We open *input* files in :crlf translation (a no-op on many platforms) in
case we have CRLF line endings in Windows but a perl that defaults to LF.
Unfortunately it seems some perls like msysgit can't handle a global input-only
Unfortunately it seems some perls like msysgit cannot handle a global input-only
:crlf so it has to be specified on each file open for text input.

=end comment
@@ -183,7 +183,7 @@ sub too_old {
sub added {
my ($standalone, $data)=@_;
if(too_old($data)) {
# don't mention ancient additions
# do not mention ancient additions
return "";
}
if($standalone) {
2 changes: 1 addition & 1 deletion docs/cmdline-opts/noproxy.d
@@ -17,7 +17,7 @@ example, local.com would match local.com, local.com:80, and www.local.com, but
not www.notlocal.com.

Since 7.53.0, This option overrides the environment variables that disable the
proxy ('no_proxy' and 'NO_PROXY'). If there's an environment variable
proxy ('no_proxy' and 'NO_PROXY'). If there is an environment variable
disabling a proxy, you can set the no proxy list to "" to override it.

Since 7.86.0, IP addresses specified to this option can be provided using CIDR
4 changes: 2 additions & 2 deletions docs/cmdline-opts/page-header
@@ -51,9 +51,9 @@ in a sequential manner in the specified order unless you use --parallel. You
can specify command line options and URLs mixed and in any order on the
command line.

curl attempts to re-use connections when doing multiple transfers, so that
curl attempts to reuse connections when doing multiple transfers, so that
getting many files from the same server do not use multiple connects and setup
handshakes. This improves speed. Connection re-use can only be done for URLs
handshakes. This improves speed. Connection reuse can only be done for URLs
specified for a single command line invocation and cannot be performed between
separate curl runs.

2 changes: 1 addition & 1 deletion docs/cmdline-opts/proxy.d
@@ -31,7 +31,7 @@ If the port number is not specified in the proxy string, it is assumed to be
1080.

This option overrides existing environment variables that set the proxy to
use. If there's an environment variable setting a proxy, you can set proxy to
use. If there is an environment variable setting a proxy, you can set proxy to
"" to override it.

All operations that are performed over an HTTP proxy will transparently be
2 changes: 1 addition & 1 deletion docs/cmdline-opts/remote-header-name.d
@@ -23,7 +23,7 @@ in the destination directory, it will not be overwritten and an error will
occur - unless you allow it by using the --clobber option. If the server does
not specify a file name then this option has no effect.

There's no attempt to decode %-sequences (yet) in the provided file name, so
There is no attempt to decode %-sequences (yet) in the provided file name, so
this option may provide you with rather unexpected file names.

This feature uses the name from the "filename" field, it does not yet support
2 changes: 1 addition & 1 deletion docs/examples/Makefile.example
@@ -34,7 +34,7 @@ CC = gcc
# Compiler flags, -g for debug, -c to make an object file
CFLAGS = -c -g

# This should point to a directory that holds libcurl, if it isn't
# This should point to a directory that holds libcurl, if it is not
# in the system's standard lib dir
# We also set a -L to include the directory where we have the openssl
# libraries
2 changes: 1 addition & 1 deletion docs/examples/Makefile.inc
@@ -129,7 +129,7 @@ check_PROGRAMS = \
websocket-cb

# These examples require external dependencies that may not be commonly
# available on POSIX systems, so don't bother attempting to compile them here.
# available on POSIX systems, so do not bother attempting to compile them here.
COMPLICATED_EXAMPLES = \
cacertinmem.c \
crawler.c \
2 changes: 1 addition & 1 deletion docs/examples/anyauthput.c
@@ -60,7 +60,7 @@ static int my_seek(void *userp, curl_off_t offset, int origin)
FILE *fp = (FILE *) userp;

if(-1 == fseek(fp, (long) offset, origin))
/* couldn't seek */
/* could not seek */
return CURL_SEEKFUNC_CANTSEEK;

return CURL_SEEKFUNC_OK; /* success! */
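A seek callback of this shape is hooked up to the easy handle roughly as follows; this is a fragment, with `curl` and `fp` assumed to come from the surrounding example:

```c
curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);
curl_easy_setopt(curl, CURLOPT_SEEKDATA, (void *)fp);
```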
2 changes: 1 addition & 1 deletion docs/examples/ftpget.c
@@ -66,7 +66,7 @@ int main(void)
*/
curl_easy_setopt(curl, CURLOPT_URL,
"ftp://ftp.example.com/curl/curl-7.9.2.tar.gz");
/* Define our callback to get called when there's data to be written */
/* Define our callback to get called when there is data to be written */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
/* Set a pointer to our struct to pass to the callback */
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &ftpfile);
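For context, a `CURLOPT_WRITEFUNCTION` callback compatible with the setup above looks roughly like this (a sketch, not the exact code from the example; it simply writes the received bytes to a `FILE *` passed via `CURLOPT_WRITEDATA`):

```c
static size_t my_fwrite(void *buffer, size_t size, size_t nmemb, void *userp)
{
  FILE *out = (FILE *)userp;
  size_t written = fwrite(buffer, size, nmemb, out);
  return written * size;  /* bytes handled; anything less aborts the transfer */
}
```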
2 changes: 1 addition & 1 deletion docs/examples/ftpsget.c
@@ -70,7 +70,7 @@ int main(void)
*/
curl_easy_setopt(curl, CURLOPT_URL,
"ftp://user@server/home/user/file.txt");
/* Define our callback to get called when there's data to be written */
/* Define our callback to get called when there is data to be written */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
/* Set a pointer to our struct to pass to the callback */
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &ftpfile);
2 changes: 1 addition & 1 deletion docs/examples/http2-pushinmemory.c
@@ -92,7 +92,7 @@ static void setup(CURL *hnd)
curl_easy_setopt(hnd, CURLOPT_PIPEWAIT, 1L);
}

/* called when there's an incoming push */
/* called when there is an incoming push */
static int server_push_callback(CURL *parent,
CURL *easy,
size_t num_headers,
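A push callback like this is registered on the multi handle roughly as follows; the option names and return values are libcurl's, while the variable names are placeholders in this sketch:

```c
curl_multi_setopt(multi, CURLMOPT_PUSHFUNCTION, server_push_callback);
curl_multi_setopt(multi, CURLMOPT_PUSHDATA, &transfers);
/* the callback returns CURL_PUSH_OK to accept a pushed stream,
   or CURL_PUSH_DENY to reject it */
```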