
Some minor grammar fixes #1112

Merged 2 commits into scrapy:master from eliasdorneles:minor-grammar-fixes on Mar 28, 2015



@eliasdorneles commented Mar 27, 2015

Hey folks, some minor grammar fixes here -- thanks @breno!

@nyov maybe you want to review this too?


@nyov commented Mar 27, 2015

sure, here are some additional corrections

diff --git a/docs/intro/overview.rst b/docs/intro/overview.rst
index 577e769..2923b92 100644
--- a/docs/intro/overview.rst
+++ b/docs/intro/overview.rst
@@ -78,14 +78,14 @@ What just happened?

 When you ran the command ``scrapy runspider``, Scrapy looked for a
-Spider definition inside it and ran it through its crawler engine.
+Spider definition inside it and ran it through it's crawler engine.

 The crawl started by making requests to the URLs defined in the ``start_urls``
-attribute (in this case, only the URL for StackOverflow top questions page),
+attribute (in this case, only the URL for StackOverflow top questions page)
 and called the default callback method ``parse``, passing the response object as
-an argument. In the ``parse`` callback, we extract the links to the
+an argument. In the ``parse`` callback we extract the links to the
 question pages using a CSS Selector with a custom extension that allows to get
-the value for an attribute. Then, we yield a few more requests to be sent,
+the value for an attribute. Then we yield a few more requests to be sent,
 registering the method ``parse_question`` as the callback to be called for each
 of them as they finish.

@@ -96,7 +96,7 @@ processed, it can send another request or do other things in the meantime. This
 also means that other requests can keep going even if some request fails or an
 error happens while handling it.

-While this enables you to do very fast crawlings (sending multiple concurrent
+While this enables you to do very fast crawls (sending multiple concurrent
 requests at the same time, in a fault-tolerant way) Scrapy also gives you
 control over the politeness of the crawl through :ref:`a few settings
 <topics-settings-ref>`. You can do things like setting a download delay between

(put it in a file and git apply that)
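The workflow nyov suggests (saving the diff to a file and applying it with `git apply`) can be demonstrated end to end. The following is a hypothetical, self-contained example in a throwaway repository, not this pull request's actual patch:

```shell
set -e
# Create a scratch repository with one file containing a typo.
dir=$(mktemp -d)
cd "$dir"
git init -q
printf 'hello wrld\n' > notes.txt
git add notes.txt
git -c user.email="a@b" -c user.name="a" commit -qm init

# Save the suggested correction as a unified diff in a patch file...
cat > fix.patch <<'EOF'
--- a/notes.txt
+++ b/notes.txt
@@ -1 +1 @@
-hello wrld
+hello world
EOF

# ...and apply it to the working tree.
git apply fix.patch
cat notes.txt   # -> hello world
```

Running `git apply --check fix.patch` first verifies the patch applies cleanly without modifying anything.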

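For context, the overview passage being edited above describes Scrapy's crawl flow: each request carries a registered callback, callbacks can yield further requests, and a failure in one request does not abort the rest. Below is a toy, synchronous pure-Python model of that scheduling behaviour; it is an illustrative sketch, not Scrapy's actual engine, and all names in it are made up:

```python
from collections import deque

# Toy model (NOT Scrapy's real code) of the behaviour the overview text
# describes: requests are queued with a callback, callbacks may yield new
# requests, and one failing request does not stop the others.
def crawl(start_urls, parse, fetch):
    queue = deque((url, parse) for url in start_urls)
    items, errors = [], []
    while queue:
        url, callback = queue.popleft()
        try:
            response = fetch(url)          # stand-in for the real download
        except Exception as exc:
            errors.append((url, exc))      # record the error, keep crawling
            continue
        for result in callback(response):
            if isinstance(result, tuple):  # (url, callback): a new request
                queue.append(result)
            else:                          # anything else: an extracted item
                items.append(result)
    return items, errors

# Hypothetical fetch/parse pair mirroring the parse -> parse_question chain:
def fake_fetch(url):
    if url == "top":
        return ["q1", "q2"]                # "links" found on the start page
    if url == "bad":
        raise RuntimeError("download failed")
    return f"body of {url}"

def parse(response):
    for link in response:                  # extract question-page links
        yield (link, parse_question)       # register parse_question callback
    yield ("bad", parse_question)          # one failing request, to show fault isolation

def parse_question(response):
    yield response                         # yield the scraped "item"

items, errors = crawl(["top"], parse, fake_fetch)
print(items)        # ['body of q1', 'body of q2']
print(len(errors))  # 1
```

Scrapy's real engine handles requests asynchronously and concurrently; the synchronous queue above only models the callback chaining and fault isolation that the edited paragraphs describe.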
Member Author

@eliasdorneles commented Mar 28, 2015

@nyov thank you!
I've applied all but the change to its, which I think is the correct usage there. ;)

eliasdorneles added a commit that referenced this pull request Mar 28, 2015
Some minor grammar fixes
@eliasdorneles merged commit 0c74821 into scrapy:master Mar 28, 2015
1 check passed
continuous-integration/travis-ci/pr The Travis CI build passed
@eliasdorneles deleted the eliasdorneles:minor-grammar-fixes branch Mar 28, 2015