
Conversation

@bhaveshAn
Member

Fixes #290

Checklist

  • I have read the Contribution & Best practices Guide and my PR follows them.
  • My branch is up-to-date with the Upstream master branch.
  • I have added necessary documentation (if appropriate)

Changes proposed in this pull request:

  • Added YouTube support

@gabru-md @cclauss @mariobehling please review. A Heroku deployment is available at https://immense-anchorage-24063.herokuapp.com/
Thanks!

@ghost ghost added the needs-review label Oct 28, 2017
@ghost ghost assigned bhaveshAn Oct 28, 2017
@gabru-md
Member

@bhaveshAn great work.
@mariobehling, I would like to bring to your notice that we should start searching using the full name of the scraper, as @cclauss suggests, so that we do not face problems like Bhavesh had, i.e. typing tyoutube instead of youtube.
@cclauss please comment. Otherwise, Bhavesh did a good job.

@ghost ghost removed the needs-review label Oct 29, 2017
@ghost

ghost commented Oct 29, 2017

Hi @bhaveshAn!

Looks like your PR has some conflicts. 😟
Could you resolve them and rebase on top of the latest upstream code?


 def feedgen(query, engine, count=10):
-    if engine == 'q':
+    if engine in ['q', 't']:
Contributor


Putting this logic in two different files can be the source of future bugs. Please encapsulate this logic in one file or the other but not in both.
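One way to address this (a hedged sketch; `FEED_ENGINES` and `uses_feed` are hypothetical names, not code from this PR) is to define the check once in a single module and import it from both files:

```python
# Hypothetical helper module: the engine whitelist lives in exactly one
# place, so server.py and the feed generator cannot drift apart.
FEED_ENGINES = frozenset(['q', 't'])


def uses_feed(engine):
    """Return True if this engine key should go through the feed generator."""
    return engine in FEED_ENGINES
```

Both call sites would then test `uses_feed(engine)` instead of each maintaining their own list.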

Member Author


@cclauss Please review again

app/server.py Outdated
         return bad_request(err)

-    if engine[0] == 'q':
+    if engine[0] in ['q', 't']:
Contributor


Putting this logic in two different files can be the source of future bugs. Please encapsulate this logic in one file or the other but not in both.

        self.queryKey = 'search_query'

    def parseResponse(self, soup):
        """ Parse the response and return set of urls
Contributor


This is not a set of URLs. It is a list of dicts. A set guarantees no repeats, and this code does not do that. The items in the list are dicts that contain titles and links (URLs).

[[Title1, url1], [Title2, url2], ...] should be changed to [{'title': Title1, 'link': url1}, {'title': Title2, 'link': url2}, ...] to make the result clearer to the reader.
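The suggested change can be sketched with a list comprehension (the titles and URLs here are illustrative data, not real scraper output):

```python
# Illustrative data: the nested-list shape the scraper currently returns.
pairs = [['Title1', 'url1'], ['Title2', 'url2']]

# Name each field explicitly so callers do not rely on positional indexing.
results = [{'title': title, 'link': url} for title, url in pairs]
# results == [{'title': 'Title1', 'link': 'url1'},
#             {'title': 'Title2', 'link': 'url2'}]
```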

@ghost

ghost commented Oct 29, 2017

Hi @bhaveshAn!

Looks like your PR has some conflicts. 😟
Could you resolve them and rebase on top of the latest upstream code?

    def parseResponse(self, soup):
        """ Parse the response and return list of urls
        Returns: urls (list)
            [[Title1, url1], [Title2, url2], ...]
Contributor


The returned values are not urls. They are dicts.

I know these changes were not tested under Python 3.
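A docstring that matches the actual return type might look like the sketch below. The anchor class `yt-uix-tile-link` is an assumption about YouTube's markup at the time, not verified against this PR, and the method body is illustrative only:

```python
def parseResponse(self, soup):
    """Parse the response and return a list of dicts.

    Returns:
        list: one dict per result, e.g.
        [{'title': 'Title1', 'link': 'url1'}, ...]
    """
    # Each result anchor carries both the human-readable title and the href.
    return [{'title': anchor.getText(), 'link': anchor.get('href')}
            for anchor in soup.findAll('a', {'class': 'yt-uix-tile-link'})]
```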

@ghost

ghost commented Nov 1, 2017

Hi @bhaveshAn!

Looks like your PR has some conflicts. 😟
Could you resolve them and rebase on top of the latest upstream code?

@bhaveshAn
Member Author

@mariobehling @niranjan94 Please review.

@ghost

ghost commented Nov 2, 2017

Hi @bhaveshAn!

Looks like your PR has some conflicts. 😟
Could you resolve them and rebase on top of the latest upstream code?

@mariobehling
Member

Thanks!

@mariobehling mariobehling merged commit 08550e1 into fossasia:master Nov 2, 2017
@ghost ghost added the ready-to-ship label Nov 2, 2017
@bhaveshAn bhaveshAn deleted the bhavesh-youtube branch November 2, 2017 16:51
cclauss pushed a commit to cclauss/query-server that referenced this pull request Nov 3, 2017
Implement absolute_import in Scrapers w/ tests
@bhaveshAn bhaveshAn mentioned this pull request Nov 3, 2017
Remorax pushed a commit to Remorax/query-server that referenced this pull request Jan 14, 2018
…fossasia#291)

Now, for every badge folder (CSV file), only one background image file is created.