
Commit
fixed external link
Orbiter committed Jun 27, 2014
1 parent 74206a1 commit 1b279d7
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions htroot/CrawlStartExpert.html
@@ -204,7 +204,7 @@
#%env/templates/submenuIndexCreate.template%#

<div id="api">
-<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API#Managing_crawl_jobs" id="apilink" target="_blank"><img src="env/grafics/api.png" width="60" height="40" alt="API"/></a>
+<a href="http://www.yacy-websearch.net/wiki/index.php/Dev:APICrawler" id="apilink" target="_blank"><img src="env/grafics/api.png" width="60" height="40" alt="API"/></a>
<span>Click on this API button to see a documentation of the POST request parameter for crawl starts.</span>
</div>

@@ -215,7 +215,7 @@ <h2>Expert Crawl Start</h2>
You can define URLs as start points for Web page crawling and start crawling here.
"Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links.
This is repeated as long as specified under "Crawling Depth".
-A crawl can also be started using wget and the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API#Managing_crawl_jobs" target="_blank">post arguments</a> for this web page.
+A crawl can also be started using wget and the <a href="http://www.yacy-websearch.net/wiki/index.php/Dev:APICrawler" target="_blank">post arguments</a> for this web page.
</p>
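The page text in this hunk describes a breadth-first, depth-limited crawl: download each start URL, extract its links, follow them, and repeat until "Crawling Depth" is reached. A minimal sketch of that loop, not YaCy's actual implementation, using an invented in-memory link table in place of real HTTP fetching and HTML link extraction:

```python
from collections import deque

# Tiny in-memory "web" standing in for real HTTP fetching; this link
# structure is invented purely for illustration.
FAKE_WEB = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": [],
    "d": ["e"],
    "e": [],
}

def crawl(start_urls, crawling_depth):
    """Breadth-first, depth-limited crawl as described in the page text:
    download each page, extract its links, and follow them until the
    configured "Crawling Depth" is reached."""
    seen = set(start_urls)
    queue = deque((url, 0) for url in start_urls)
    visited = []
    while queue:
        url, depth = queue.popleft()
        visited.append(url)                 # stands in for downloading the page
        if depth >= crawling_depth:
            continue                        # depth limit reached; stop expanding
        for link in FAKE_WEB.get(url, []):  # stands in for link extraction
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return visited

print(crawl(["a"], 2))  # → ['a', 'b', 'c', 'd'] — "e" lies beyond depth 2
```

The queue-based breadth-first order means pages close to the start points are fetched before deeper ones, which is why the depth cutoff bounds the whole crawl.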

<form id="Crawler" action="Crawler_p.html" method="post" enctype="multipart/form-data" accept-charset="UTF-8">
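The changed line notes that a crawl can also be started from the command line by POSTing the form's arguments to Crawler_p.html. A hedged sketch of composing such a request with Python's standard library; the parameter names ("crawlingURL", "crawlingDepth") and the default peer address (localhost:8090) are illustrative assumptions, not a confirmed list of YaCy's POST arguments — the linked wiki page documents the real ones:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical crawl-start parameters, mirroring what the HTML form posts.
params = {
    "crawlingURL": "http://example.org/",  # start point for the crawl
    "crawlingDepth": "2",                  # see "Crawling Depth" in the form
}
body = urlencode(params).encode("ascii")

# Build (but do not send) the POST request the form would submit.
req = Request("http://localhost:8090/Crawler_p.html", data=body, method="POST")
print(req.get_method(), req.full_url)
```

Sending it with `urllib.request.urlopen(req)` against a running YaCy peer would start the crawl, which is the same effect the text attributes to wget.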

0 comments on commit 1b279d7
