
Commit

- added bookmarkTitle to CrawlStart_p.html
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@5068 6c8d7289-2bf4-0310-a012-ef5d649a1542
apfelmaennchen committed Aug 21, 2008
1 parent b3fc5e9 commit 8d1bedf
Showing 3 changed files with 18 additions and 10 deletions.
23 changes: 15 additions & 8 deletions htroot/CrawlStart_p.html
@@ -16,7 +16,7 @@ <h2>Expert Crawl Start</h2>
You can define URLs as start points for Web page crawling and start crawling here. "Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links. This is repeated as long as specified under "Crawling Depth".
</p>

-<form action="WatchCrawler_p.html" method="post" enctype="multipart/form-data">
+<form name="WatchCrawler" id="WatchCrawler" action="WatchCrawler_p.html" method="post" enctype="multipart/form-data">
<table border="0" cellpadding="5" cellspacing="1">
<tr class="TableHeader">
<td><strong>Attribut</strong></td>
@@ -31,8 +31,7 @@ <h2>Expert Crawl Start</h2>
<td><label for="url"><nobr>From URL</nobr></label>:</td>
<td><input type="radio" name="crawlingMode" id="url" value="url" checked="checked" /></td>
<td>
-<input name="crawlingURL" type="text" size="41" maxlength="256" value="http://" onkeypress="changed()" />
-<span id="robotsOK"></span>
+<input name="crawlingURL" type="text" size="41" maxlength="256" value="http://" onkeypress="changed()" />
</td>
</tr>
<tr>
@@ -48,7 +47,11 @@ <h2>Expert Crawl Start</h2>
<td><input type="file" name="crawlingFile" size="28" /></td>
</tr>
<tr>
-<td colspan="3" class="commit"><span id="title"><br/></span><img src="/env/grafics/empty.gif" name="ajax" alt="empty" /></td>
+<td colspan="3" class="commit">
+<span id="robotsOK"></span>
+<span id="title"><br/></span>
+<img src="/env/grafics/empty.gif" name="ajax" alt="empty" />
+</td>
</tr>
</table>
</td>
@@ -61,10 +64,14 @@ <h2>Expert Crawl Start</h2>
<td>Create Bookmark</td>
<td>
<label for="createBookmark">Use</label>:
-<input type="checkbox" name="createBookmark" id="createBookmark" />&nbsp;&nbsp;&nbsp;
-<label for="bookmarkFolder"> Bookmark Folder</label>:
-<input name="bookmarkFolder" id="bookmarkFolder" type="text" size="20" maxlength="100" value="/crawlStart" /><br />
-<br/><br/>This option works with "Starting Point: From URL" only!
+<input type="checkbox" name="createBookmark" id="createBookmark" />
+&nbsp;&nbsp;&nbsp;(works with "Starting Point: From URL" only)
+<br /><br />
+<label for="bookmarkTitle"> Title</label>:&nbsp;&nbsp;&nbsp;
+<input name="bookmarkTitle" id="bookmarkTitle" type="text" size="50" maxlength="100" /><br /><br />
+<label for="bookmarkFolder"> Folder</label>:
+<input name="bookmarkFolder" id="bookmarkFolder" type="text" size="50" maxlength="100" value="/crawlStart" />
+<br />&nbsp;
</td>
<td>
This option lets you create a bookmark from your crawl start URL. For automatic re-crawling you can use the following default folders:<br/>
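
The page text in the first hunk describes what "Crawling Depth" means: download the start URL, extract all links in it, fetch the pages behind those links, and repeat up to the configured depth. As a rough illustration of that idea only — this is not YaCy's crawler, and the class name, the naive regex link extraction, and the example URL are all made up for the sketch — a depth-limited breadth-first fetch could look like this:

    // A sketch only: depth-limited breadth-first fetching, assuming nothing about
    // YaCy internals. Link extraction is a naive regex; real crawlers parse HTML.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.ArrayDeque;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class DepthCrawlSketch {
        private static final Pattern HREF = Pattern.compile("href=\"(http[^\"]+)\"");

        public static void crawl(String start, int maxDepth) {
            Set<String> seen = new HashSet<String>();
            ArrayDeque<String[]> queue = new ArrayDeque<String[]>();
            queue.add(new String[] { start, "0" });
            while (!queue.isEmpty()) {
                String[] entry = queue.poll();
                String url = entry[0];
                int depth = Integer.parseInt(entry[1]);
                if (depth > maxDepth || !seen.add(url)) continue;    // stop at the crawling depth
                System.out.println("depth " + depth + ": " + url);
                String html;
                try { html = fetch(url); } catch (Exception e) { continue; } // skip unreachable pages
                Matcher m = HREF.matcher(html);                      // extract the links in the page
                while (m.find()) {                                   // and queue them one level deeper
                    queue.add(new String[] { m.group(1), Integer.toString(depth + 1) });
                }
            }
        }

        private static String fetch(String url) throws Exception {   // download one page
            StringBuilder sb = new StringBuilder();
            BufferedReader in = new BufferedReader(new InputStreamReader(new URL(url).openStream()));
            for (String line; (line = in.readLine()) != null; ) sb.append(line).append('\n');
            in.close();
            return sb.toString();
        }

        public static void main(String[] args) {
            crawl(args.length > 0 ? args[0] : "http://example.org/", 1);
        }
    }

The real crawler additionally honors robots.txt (hence the robotsOK span in this template) and filters, queues and balances URLs; none of that is sketched here.
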
2 changes: 1 addition & 1 deletion htroot/WatchCrawler_p.java
@@ -216,7 +216,7 @@ public static serverObjects respond(final httpHeader header, final serverObjects
if (post.get("createBookmark","off").equals("on")) {
bookmarksDB.Bookmark bookmark = sb.bookmarksDB.createBookmark(crawlingStart, "admin");
if(bookmark != null){
-bookmark.setProperty(bookmarksDB.Bookmark.BOOKMARK_TITLE, crawlingStart);
+bookmark.setProperty(bookmarksDB.Bookmark.BOOKMARK_TITLE, post.get("bookmarkTitle", crawlingStart));
bookmark.setOwner("admin");
bookmark.setPublic(false);
bookmark.setTags(tags, true);
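
The server-side change above is a one-liner: WatchCrawler_p.java now reads the new bookmarkTitle field with post.get("bookmarkTitle", crawlingStart), so the crawl start URL is only used as the bookmark title when no explicit title arrives with the request. A minimal sketch of that fallback pattern, using a plain Map as a stand-in for serverObjects (an assumption — the real class, and whether it treats an empty field the same as a missing one, is not shown in this diff):

    import java.util.HashMap;
    import java.util.Map;

    public class BookmarkTitleSketch {
        // Stand-in for the post.get(key, default) pattern used above: return the
        // posted value when one is there, otherwise the supplied default.
        static String get(Map<String, String> post, String key, String dflt) {
            String value = post.get(key);
            return (value == null || value.length() == 0) ? dflt : value;
        }

        public static void main(String[] args) {
            String crawlingStart = "http://example.org/";
            Map<String, String> post = new HashMap<String, String>();
            // Nothing posted for bookmarkTitle: the bookmark keeps the old behaviour
            // and is titled with the crawl start URL.
            System.out.println(get(post, "bookmarkTitle", crawlingStart));
            // Field filled in (normally by IndexCreate.js): that value becomes the title.
            post.put("bookmarkTitle", "Example Domain");
            System.out.println(get(post, "bookmarkTitle", crawlingStart));
        }
    }

If the AJAX title lookup fails and the field is submitted empty, the resulting bookmark title depends on exactly that empty-versus-missing question.
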
3 changes: 2 additions & 1 deletion htroot/js/IndexCreate.js
@@ -11,7 +11,8 @@ function handleResponse(){
if(response.getElementsByTagName("title")[0].firstChild!=null){
title=response.getElementsByTagName("title")[0].firstChild.nodeValue;
}
-document.getElementById("title").innerHTML=title;
+// document.getElementById("title").innerHTML=title;
+document.WatchCrawler.bookmarkTitle.value=title

// deterime if crawling is allowed by the robots.txt
robotsOK="";
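
The JavaScript change redirects the result of the AJAX title lookup: instead of writing the fetched page title into the #title span, handleResponse() now pre-fills the new bookmarkTitle input — presumably why the form element also gained name="WatchCrawler" in CrawlStart_p.html, since document.WatchCrawler resolves through that name. For the Java-minded, here is a rough equivalent of the title extraction the script performs on the XML response, using the standard DOM API; the <response> root element is invented for the example, only the <title> child matters, and the real logic of course stays in IndexCreate.js:

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class TitleFromResponseSketch {
        // Mirrors the JavaScript: take the first <title> element's text if it has one,
        // otherwise return a fallback (the fallback argument is this sketch's own addition).
        static String titleOf(String xml, String fallback) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            NodeList titles = doc.getElementsByTagName("title");
            if (titles.getLength() > 0 && titles.item(0).getFirstChild() != null) {
                return titles.item(0).getFirstChild().getNodeValue();
            }
            return fallback;
        }

        public static void main(String[] args) throws Exception {
            String response = "<response><title>Example Domain</title></response>";
            System.out.println(titleOf(response, "http://example.org/"));
        }
    }
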
