ADD: No-Op scraper support to allow nfo based only library #1192

Merged
merged 1 commit into xbmc:master from koying:noop-scraper

5 participants

@koying
Collaborator

As discussed here: http://forum.xbmc.org/showthread.php?tid=127896&pid=1152628#pid1152628

Allows having an NFO-based-only library.

@mkortstiege
Collaborator

In case other scrapers are enabled, you'll still fall back and scrape from the web. You should add a noop check in NfoFile.cpp:104+ to prevent this, if that's the desired behavior.

Mind moving the Scraper.cpp returns after the DEBUG log blocks so we can see what scraper is used in the log?
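
A minimal sketch of that guard in CNfoFile::Scrape, matching what the patch below ends up doing (an early return with an empty URL when the scraper is a no-op; the rest of the function is elided):

// Sketch of the suggested noop check: bail out before any web fallback,
// reporting success with an empty scraper URL.
int CNfoFile::Scrape(ScraperPtr& scraper)
{
  if (scraper->IsNoop())        // pseudo-scraper with no online functions
  {
    m_scurl = CScraperUrl();    // drop any URL picked up from the .nfo
    return 0;                   // success, nothing to fetch online
  }
  if (scraper->Type() != m_type)
    return 1;
  // ... normal scraping continues here
}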

@koying
Collaborator

Thanks. Updated.

@ghost

If we absolutely need such a mode, it should be done in code, not by a butchered scraper plus ID checks.

Some tick in the content dialog, 'fetch online data' or thereabouts. If it's not enabled on the source, the scanner handles it appropriately.

@koying
Collaborator

That was my first idea, but won't that break the skins (or require updating all of them)? I'm not familiar with how UI stuff works in XBMC.

@ghost

No, the buttons in that dialog are filled from code.

@koying
Collaborator

Not quite, unless I'm mistaken.

CGUIDialogContentSettings defines:

#define CONTROL_CONTENT_TYPE        3
#define CONTROL_SCRAPER_LIST        4
#define CONTROL_SCRAPER_SETTINGS    6

If we want to do it the proper way, "CONTROL_SCRAPER_LIST" and "CONTROL_SCRAPER_SETTINGS" should go under another container which would also contain the "Use local info only" checkbox.
The CONTROL_SCRAPER_... controls should be disabled when the checkbox is ticked.

So, IMHO, that would require skin rework.

Another solution would be to put the checkbox under CONTROL_SCRAPER_SETTINGS. That would prevent skin issues and make the code prettier, but it would be usability nonsense ;)

@ghost

Then skin changes it is

@koying
Collaborator

Nevermind.
Scrapers are assumed everywhere, so besides butchering the skins, implementing a "no-scraper" checkbox would take a lot of time, be error-prone, and butcher the code for zero functional advantage over my "butchered scraper".

I'd suggest leaving the pull request open so that if someone needs the functionality, they can use this easily mergeable patch.

@jmarshallnz
Owner

You could do it without skin changes by having an internal metadata.null or similar, as you've already done, but I don't think you'd need a noop addition to the scraper XML; rather, you could just set the library XML to empty and assume empty XML == noop?

@koying
Collaborator

Sure. Would the patch be deemed acceptable without the "noop" attribute, even with the "butchered" scraper?
If so, I'll dig into it.

@SlrG

+1 for integrating this into XBMC. In my opinion it is a desperately needed feature for people using external scrapers like Ember Media Manager, to have full control of the scraping process and make sure only local data is used.

Thank you very much for your work, Koying!

@XBMC Team: Please reconsider this pull request. :)

@jmarshallnz
Owner

@koying: IMO if library="" then we can assume the scraper is a noop and other stuff can flow from there, yes.

@cptspiff: is that an acceptable compromise?

@ghost

Yes, anything without a useless butchered scraper will fly.

@koying
Collaborator

@jmarshallnz I might be wrong, but seeing:

CAddon::CAddon(const AddonProps &props)
  : m_props(props)
  , m_parent(AddonPtr())
{
  if (props.libname.IsEmpty()) BuildLibName();
  else m_strLibName = props.libname;

... I understand that if the library name is blank, it means "use the default name" ("default.xml"?), so this couldn't be used to detect a "noop" addon.

@cptspiff
Would

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<scraper framework="1.1">
</scraper>

be acceptable?
IMO, in this instance, an empty scraper is not "butchered useless". It is just a scraper that does nothing, which is exactly what we seek, and a valid case.
The surrounding code is just a means to handle this case of an empty scraper.
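
For what it's worth, a sketch of how the parser side can flag such an empty scraper, along the lines of the ScraperParser.cpp hunk further down (the flag defaults to true and is cleared as soon as any search-URL function is present):

// Sketch only; the actual patch clears the flag inside the existing
// per-element blocks of CScraperParser::LoadFromXML().
bool CScraperParser::LoadFromXML()
{
  m_isNoop = true;  // assume no-op until an online function is found

  if (m_pRootElement->FirstChildElement("CreateSearchUrl") ||
      m_pRootElement->FirstChildElement("CreateArtistSearchUrl") ||
      m_pRootElement->FirstChildElement("CreateAlbumSearchUrl"))
    m_isNoop = false;

  // ... the rest of the XML loading is unchanged
  return true;
}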

@koying
Collaborator

@jmarshallnz @cptspiff Is it OK to merge in the current state?

@jmarshallnz
Owner

I guess the only thing I don't like about it is that it's a special case (i.e. we have to remember to take care of IsNoop() in a bunch of places). How many of those are actually required, i.e. what happens currently with a scraper without CreateSearchURL() et al.?

Hmm, maybe it's just the naming I don't like (scraper->CanScrape(), perhaps)?
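
For reference, the special-casing boils down to the pattern in the Scraper.cpp hunks below: an IsNoop() accessor plus an early empty return in each find function. A condensed sketch:

// Condensed sketch of the pattern from the Scraper.cpp diff below.
bool CScraper::IsNoop()
{
  if (!Load())               // make sure the scraper XML has been parsed
    throw CScraperError();
  return m_parser.IsNoop();  // true when the XML defines no search functions
}

// Each find function then bails out early with an empty result, e.g.:
std::vector<CScraperUrl> CScraper::FindMovie(XFILE::CCurlFile &fcurl,
                                             const CStdString &sMovie,
                                             bool fFirst) // signature abridged
{
  std::vector<CScraperUrl> vcscurl;
  if (IsNoop())
    return vcscurl;          // no-op scraper: nothing to search online
  // ... normal CreateSearchUrl / GetSearchResults flow continues here
}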

@koying
Collaborator

IIRC, without the noop check, the import fails and nothing is imported, which is logical.
There was a loophole at one point, because a badly written scraper I used added "empty" items even when an item was not found in its database.
Would you prefer something like that? Seems even dirtier to me ;)

@jmarshallnz
Owner

I think the key is to have as little special-casing as possible. If what you have is minimal, then that's fine.

@MartijnKaijser

@koying
please rebase

@koying koying merged commit 9fdb68a into xbmc:master
@koying koying deleted the koying:noop-scraper branch
@LongChair LongChair referenced this pull request from a commit in plexinc/plex-home-theater-public
@LongChair LongChair Fix <selectedcolor> doesn't work for ListItem.IsPlaying item in PlayerControls #1192

Seemed to be due to selected state of items being incorrect due to the fact that listgroups selected states are not being propagated in sub listgroups controls.
e1afd1c
@LongChair LongChair referenced this pull request from a commit in RasPlex/plex-home-theatre
@LongChair LongChair Fix <selectedcolor> doesn't work for ListItem.IsPlaying item in PlayerControls #1192

Seemed to be due to selected state of items being incorrect due to the fact that listgroups selected states are not being propagated in sub listgroups controls.
063c419
29 addons/metadata.local/addon.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<addon id="metadata.local"
+ name="Local information only"
+ version="1.0.0"
+ provider-name="Team XBMC">
+ <requires>
+ <import addon="xbmc.metadata" version="1.0"/>
+ </requires>
+ <extension point="xbmc.metadata.scraper.albums"
+ language="multi"
+ library="local.xml"/>
+ <extension point="xbmc.metadata.scraper.artists"
+ language="multi"
+ library="local.xml"/>
+ <extension point="xbmc.metadata.scraper.musicvideos"
+ language="multi"
+ library="local.xml"/>
+ <extension point="xbmc.metadata.scraper.tvshows"
+ language="multi"
+ library="local.xml"/>
+ <extension point="xbmc.metadata.scraper.movies"
+ language="multi"
+ library="local.xml"/>
+ <extension point="xbmc.addon.metadata">
+ <summary lang="en">Local Infomation only pseudo-scraper</summary>
+ <description lang="en">Use local information only</description>
+ <platform>all</platform>
+ </extension>
+</addon>
3  addons/metadata.local/local.xml
@@ -0,0 +1,3 @@
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<scraper framework="1.1">
+</scraper>
5 xbmc/NfoFile.cpp
@@ -131,6 +131,11 @@ CNfoFile::NFOResult CNfoFile::Create(const CStdString& strPath, const ScraperPtr
// return value: 0 - success; 1 - no result; skip; 2 - error
int CNfoFile::Scrape(ScraperPtr& scraper)
{
+ if (scraper->IsNoop())
+ {
+ m_scurl = CScraperUrl();
+ return 0;
+ }
if (scraper->Type() != m_type)
return 1;
scraper->ClearCache();
32 xbmc/addons/Scraper.cpp
@@ -412,12 +412,23 @@ bool CScraper::IsInUse() const
return false;
}
+bool CScraper::IsNoop()
+{
+ if (!Load())
+ throw CScraperError();
+
+ return m_parser.IsNoop();
+}
+
// pass in contents of .nfo file; returns URL (possibly empty if none found)
// and may populate strId, or throws CScraperError on error
CScraperUrl CScraper::NfoUrl(const CStdString &sNfoContent)
{
CScraperUrl scurlRet;
+ if (IsNoop())
+ return scurlRet;
+
// scraper function takes contents of .nfo file, returns XML (see below)
vector<CStdString> vcsIn;
vcsIn.push_back(sNfoContent);
@@ -490,14 +501,18 @@ std::vector<CScraperUrl> CScraper::FindMovie(XFILE::CCurlFile &fcurl, const CStd
CStdString sTitle, sTitleYear, sYear;
CUtil::CleanString(sMovie, sTitle, sTitleYear, sYear, true/*fRemoveExt*/, fFirst);
- if (!fFirst || Content() == CONTENT_MUSICVIDEOS)
- sTitle.Replace("-"," ");
-
CLog::Log(LOGDEBUG, "%s: Searching for '%s' using %s scraper "
"(path: '%s', content: '%s', version: '%s')", __FUNCTION__, sTitle.c_str(),
Name().c_str(), Path().c_str(),
ADDON::TranslateContent(Content()).c_str(), Version().c_str());
+ std::vector<CScraperUrl> vcscurl;
+ if (IsNoop())
+ return vcscurl;
+
+ if (!fFirst || Content() == CONTENT_MUSICVIDEOS)
+ sTitle.Replace("-"," ");
+
sTitle.ToLower();
vector<CStdString> vcsIn(1);
@@ -509,7 +524,6 @@ std::vector<CScraperUrl> CScraper::FindMovie(XFILE::CCurlFile &fcurl, const CStd
// request a search URL from the title/filename/etc.
CScraperUrl scurl;
vector<CStdString> vcsOut = Run("CreateSearchUrl", scurl, fcurl, &vcsIn);
- std::vector<CScraperUrl> vcscurl;
if (vcsOut.empty())
{
CLog::Log(LOGDEBUG, "%s: CreateSearchUrl failed", __FUNCTION__);
@@ -616,6 +630,10 @@ std::vector<CMusicAlbumInfo> CScraper::FindAlbum(CCurlFile &fcurl, const CStdStr
sAlbum.c_str(), Name().c_str(), Path().c_str(),
ADDON::TranslateContent(Content()).c_str(), Version().c_str());
+ std::vector<CMusicAlbumInfo> vcali;
+ if (IsNoop())
+ return vcali;
+
// scraper function is given the album and artist as parameters and
// returns an XML <url> element parseable by CScraperUrl
std::vector<CStdString> extras(2);
@@ -628,7 +646,6 @@ std::vector<CMusicAlbumInfo> CScraper::FindAlbum(CCurlFile &fcurl, const CStdStr
if (vcsOut.size() > 1)
CLog::Log(LOGWARNING, "%s: scraper returned multiple results; using first", __FUNCTION__);
- std::vector<CMusicAlbumInfo> vcali;
if (vcsOut.empty() || vcsOut[0].empty())
return vcali;
scurl.ParseString(vcsOut[0]);
@@ -710,6 +727,10 @@ std::vector<CMusicArtistInfo> CScraper::FindArtist(CCurlFile &fcurl,
Name().c_str(), Path().c_str(),
ADDON::TranslateContent(Content()).c_str(), Version().c_str());
+ std::vector<CMusicArtistInfo> vcari;
+ if (IsNoop())
+ return vcari;
+
// scraper function is given the artist as parameter and
// returns an XML <url> element parseable by CScraperUrl
std::vector<CStdString> extras(1);
@@ -718,7 +739,6 @@ std::vector<CMusicArtistInfo> CScraper::FindArtist(CCurlFile &fcurl,
CScraperUrl scurl;
vector<CStdString> vcsOut = RunNoThrow("CreateArtistSearchUrl", scurl, fcurl, &extras);
- std::vector<CMusicArtistInfo> vcari;
if (vcsOut.empty() || vcsOut[0].empty())
return vcari;
scurl.ParseString(vcsOut[0]);
1  xbmc/addons/Scraper.h
@@ -116,6 +116,7 @@ class CScraper : public CAddon
bool Supports(const CONTENT_TYPE &content) const;
bool IsInUse() const;
+ bool IsNoop();
// scraper media functions
CScraperUrl NfoUrl(const CStdString &sNfoContent);
5 xbmc/utils/ScraperParser.cpp
@@ -42,6 +42,7 @@ CScraperParser::CScraperParser()
m_document = NULL;
m_SearchStringEncoding = "UTF-8";
m_scraper = NULL;
+ m_isNoop = true;
}
CScraperParser::CScraperParser(const CScraperParser& parser)
@@ -50,6 +51,7 @@ CScraperParser::CScraperParser(const CScraperParser& parser)
m_document = NULL;
m_SearchStringEncoding = "UTF-8";
m_scraper = NULL;
+ m_isNoop = true;
*this = parser;
}
@@ -115,6 +117,7 @@ bool CScraperParser::LoadFromXML()
TiXmlElement* pChildElement = m_pRootElement->FirstChildElement("CreateSearchUrl");
if (pChildElement)
{
+ m_isNoop = false;
if (!(m_SearchStringEncoding = pChildElement->Attribute("SearchStringEncoding")))
m_SearchStringEncoding = "UTF-8";
}
@@ -122,12 +125,14 @@ bool CScraperParser::LoadFromXML()
pChildElement = m_pRootElement->FirstChildElement("CreateArtistSearchUrl");
if (pChildElement)
{
+ m_isNoop = false;
if (!(m_SearchStringEncoding = pChildElement->Attribute("SearchStringEncoding")))
m_SearchStringEncoding = "UTF-8";
}
pChildElement = m_pRootElement->FirstChildElement("CreateAlbumSearchUrl");
if (pChildElement)
{
+ m_isNoop = false;
if (!(m_SearchStringEncoding = pChildElement->Attribute("SearchStringEncoding")))
m_SearchStringEncoding = "UTF-8";
}
2  xbmc/utils/ScraperParser.h
@@ -45,6 +45,7 @@ class CScraperParser
~CScraperParser();
CScraperParser& operator= (const CScraperParser& parser);
bool Load(const CStdString& strXMLFile);
+ bool IsNoop() { return m_isNoop; };
void Clear();
const CStdString GetFilename() { return m_strFile; }
@@ -76,6 +77,7 @@ class CScraperParser
TiXmlElement* m_pRootElement;
const char* m_SearchStringEncoding;
+ bool m_isNoop;
CStdString m_strFile;
ADDON::CScraper* m_scraper;
2  xbmc/video/windows/GUIWindowVideoBase.cpp
@@ -511,7 +511,7 @@ bool CGUIWindowVideoBase::ShowIMDB(CFileItem *item, const ScraperPtr &info2)
if (needsRefresh)
{
bHasInfo = true;
- if (nfoResult == CNfoFile::URL_NFO || nfoResult == CNfoFile::COMBINED_NFO || nfoResult == CNfoFile::FULL_NFO)
+ if (!info->IsNoop() && (nfoResult == CNfoFile::URL_NFO || nfoResult == CNfoFile::COMBINED_NFO || nfoResult == CNfoFile::FULL_NFO))
{
if (CGUIDialogYesNo::ShowAndGetInput(13346,20446,20447,20022))
{