changed: upgrade memory based cache to filebased after having been full for 2 seconds without anything reading out of it #86

Closed
elupus wants to merge 1 commit

3 participants

@elupus
Collaborator

changed: upgrade memory based cache to filebased after having been full for 2 seconds without anything reading out of it

By starting out with a memory based cache and only replacing it with a file based one when needed, we avoid the rather large overhead a file based cache introduces. However, it currently cannot tell the difference between paused playback on a really fast network and on a slow one. So if you, for example, play over HTTP from the local network and pause playback, the whole file will be downloaded to local disk at maximum network speed.

I'm hesitant about this feature. I suspect it must be possible to turn it on and off: on embedded systems we may not have enough disk space for this. And currently I don't think the file based cache handles running out of space well at all.

Comments are very welcome on how we should handle a feature like this.
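
A self-contained sketch of the behaviour described above, assuming nothing about XBMC's cache classes (the real change is the FileCache.cpp diff at the bottom of this page); the 20 MB limit, the 2 second timeout and all names here are illustrative only:

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Sketch: buffer in memory first; once the buffer has been full for ~2 seconds
// with nothing reading from it, dump it to a temporary file and continue there.
class UpgradableCache
{
public:
  // Returns false while the full memory buffer is still waiting for a reader.
  bool Write(const char* data, std::size_t len)
  {
    using clock = std::chrono::steady_clock;
    if (!m_file && m_mem.size() + len > kMemLimit)
    {
      if (!m_blocked) { m_blockedSince = clock::now(); m_blocked = true; }
      if (clock::now() - m_blockedSince < std::chrono::seconds(2))
        return false;                                 // keep waiting for a reader
      if (std::FILE* f = std::tmpfile())              // upgrade: spill memory cache to disk
      {
        std::fwrite(m_mem.data(), 1, m_mem.size(), f);
        m_mem.clear();
        m_file = f;
      }
      else
        return false;                                 // could not create a temp file; stay in memory
    }
    m_blocked = false;
    if (m_file)
      std::fwrite(data, 1, len, m_file);
    else
      m_mem.insert(m_mem.end(), data, data + len);
    return true;
  }

private:
  static constexpr std::size_t kMemLimit = 20 * 1024 * 1024; // ~20 MB memory cache
  std::vector<char> m_mem;
  std::FILE* m_file = nullptr;
  std::chrono::steady_clock::time_point m_blockedSince{};
  bool m_blocked = false;
};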

@elupus changed: upgrade memory based cache to filebased after having been full for 2 seconds without anything reading out of it (d633aea)
@davilla
Collaborator

Seems to work quite well under OSX. No issues that I can see yet. iOS testing tonight.

@elupus
Collaborator

Nice to hear. I've realized a rather major bug with the file based cache though, which must be fixed before this goes in: if the free disk space on the local machine cannot hold the streamed file, playback will break down.

So two things really need to be fixed here. One: verify available disk space, and define some limit for how much free space we must always leave, something like at least 200 MB (computers do funny things when totally out of disk space).

Two: fix the SimpleFileCache to wrap around similarly to how the new memory buffer does, so it doesn't break down with a limited amount of disk space.
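
As a rough sketch of the first point (not part of this patch), a POSIX free-space check with a fixed safety margin could look something like the following; the helper name and the 200 MB margin are illustrative:

#include <sys/statvfs.h>
#include <cstdint>

// Illustrative only: check that the filesystem holding 'path' can take
// 'needed' more bytes while still leaving a ~200 MB safety margin.
static bool HasEnoughFreeSpace(const char* path, uint64_t needed)
{
  const uint64_t reserve = 200ULL * 1024 * 1024;      // always leave ~200 MB free
  struct statvfs st;
  if (statvfs(path, &st) != 0)
    return false;                                     // treat errors as "not enough space"
  const uint64_t freeBytes = (uint64_t)st.f_bavail * st.f_frsize;
  return freeBytes > needed + reserve;
}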

@davilla
Collaborator

What's a good way to trigger this? iOS is fine, but I don't really see the file cache getting created.

@elupus
Collaborator

Normally it would just be: pause playback on a file larger than 20 MB + 8 × the video byterate, and wait for the cache to fill beyond that 20 MB + 8 × byterate level (the fill level is visible in the codec info data).
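
As an illustrative back-of-the-envelope check (my own numbers, reading "8 × video byterate" as roughly eight seconds' worth of data): a stream of about 4 Mbit/s is about 0.5 MB/s, so the memory cache would have to fill past roughly 20 + 8 × 0.5 ≈ 24 MB while playback stays paused, at which point writes start blocking and the 2 second upgrade timer can kick in.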

@davilla
Collaborator

humm, can't seem to force it. will beat on it some more.

@ghost

Hi elupus, tiben20 has also been experimenting with file caches (for my benefit, because I made an addon that streams .avi files, which was having buffering issues on slower connections):

http://forum.xbmc.org/showthread.php?t=98782
http://trac.xbmc.org/ticket/11416 (download links are in the trac ticket)

I've thought of two more things that could trigger it:

A) Check whether the stream supports skipping; if it doesn't, force the file cache.
(This prevents a bug on streams that don't support skipping, where pausing for too long kills the stream.)

B) Allow addon coders to pass a cache file path to xbmc.player.play (which also forces the file cache).
(This lets us provide a cool Stream + Download option in addons.) Tiben's patch does this.

PS: for streams that don't support skipping, another nice thing might be to allow seeking within the already cached part of the video while refusing to seek beyond the cache limit, because we know we won't be able to satisfy it (a rough sketch of this follows below).

I hope my suggestions help!
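
A minimal sketch of that PS idea, assuming nothing about XBMC's actual classes: the function name and the cacheStart/cacheEnd parameters are illustrative stand-ins for whatever the cache strategy reports (e.g. the GetCacheStart() added by this patch), not existing API.

#include <cstdint>

// Illustrative only: for a stream that cannot skip, allow seeks that fall
// inside the already cached byte range and refuse everything else.
static int64_t ClampSeekToCache(int64_t requested, int64_t cacheStart, int64_t cacheEnd)
{
  if (requested < cacheStart || requested > cacheEnd)
    return -1;        // outside the cache: the source cannot satisfy it, so refuse
  return requested;   // inside the cache: safe to serve from already downloaded data
}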

@elupus
Collaborator

The current file cache logic will not guarantee a complete download though. If a seek request is made (not necessarily by the user, it could be done by the demuxer), it will only maintain some part of the file, so it would most likely be corrupt if you try to reuse it.

@ghost

Ah ok, AFAIK tiben's patch switches completely to the curl download code used elsewhere in xbmc (like for downloading images) to guarantee a complete download (provided the stream is file cached from the very start and is not skipped around in).

I'll ask him to post in this discussion so he can coordinate.

Apologies if I have hijacked this pull request with scope creep; maybe I should ask tiben to make a new pull request.

@tiben20
Collaborator

anarchintosh: elupus is right that the current seek will corrupt the download. My tests were done with a megaupload free account, which is why I didn't see it right away.
I'll see if I can make the seek happen on what has already been cached instead of making it on the input stream directly.

@tiben20 closed this
@tiben20 reopened this
@elupus
Collaborator

This won't merge anymore, and I'm not sure I see much point in it anymore.

@elupus closed this
@ghost

I still see the point in it... pausing something to let it load (because it is a slow connection) and then finding it has not loaded is very frustrating.

@ryanroth referenced this pull request from a commit in garbear/xbmc
@garbear Savegames - success (b3f95f2)
Commits on Mar 28, 2011
  1. @elupus authored: changed: upgrade memory based cache to filebased after having been full for 2 seconds without anything reading out of it (d633aea)
1  xbmc/filesystem/CacheCircular.h
@@ -43,6 +43,7 @@ class CCacheCircular : public CCacheStrategy
virtual int64_t Seek(int64_t pos) ;
virtual void Reset(int64_t pos) ;
+ virtual int64_t GetCacheStart() { return m_beg; }
protected:
uint64_t m_beg; /**< index in file (not buffer) of beginning of valid data */
1  xbmc/filesystem/CacheMemBuffer.h
@@ -47,6 +47,7 @@ class CacheMemBuffer : public CCacheStrategy
virtual int64_t Seek(int64_t iFilePosition) ;
virtual void Reset(int64_t iSourcePosition) ;
+ virtual int64_t GetCacheStart() { return m_nStartPosition; }
protected:
int64_t m_nStartPosition;
3  xbmc/filesystem/CacheStrategy.h
@@ -57,6 +57,7 @@ class CCacheStrategy{
virtual void EndOfInput(); // mark the end of the input stream so that Read will know when to return EOF
virtual bool IsEndOfInput();
virtual void ClearEndOfInput();
+ virtual int64_t GetCacheStart() = 0;
CEvent m_space;
protected:
@@ -81,6 +82,8 @@ class CSimpleFileCache : public CCacheStrategy {
virtual void Reset(int64_t iSourcePosition);
virtual void EndOfInput();
+ virtual int64_t GetCacheStart() { return m_nStartPosition; }
+
int64_t GetAvailableRead();
protected:
71 xbmc/filesystem/FileCache.cpp
@@ -29,11 +29,13 @@
#include "threads/SingleLock.h"
#include "utils/log.h"
#include "settings/AdvancedSettings.h"
+#include "utils/TimeUtils.h"
using namespace AUTOPTR;
using namespace XFILE;
#define READ_CACHE_CHUNK_SIZE (64*1024)
+#define WRITE_CACHE_UPGRADE_TIMEOUT 2000
CFileCache::CFileCache()
{
@@ -122,6 +124,37 @@ bool CFileCache::Open(const CURL& url)
return true;
}
+static bool CopyCacheStrategy(CCacheStrategy *dst, CCacheStrategy *src)
+{
+ if(dst->Open() != CACHE_RC_OK)
+ return false;
+
+ dst->Reset(src->GetCacheStart());
+ if(src->Seek(src->GetCacheStart()) < 0)
+ return false;
+
+ const size_t data_size = 32 * 1024;
+ auto_aptr<char> data(new char[data_size]);
+
+ while(1)
+ {
+ int res_r = src->ReadFromCache(data.get(), data_size);
+ if(res_r == 0 || res_r == CACHE_RC_WOULD_BLOCK)
+ break;
+
+ if(res_r < 0)
+ return false;
+
+ for(int pos = 0, res_w = 0; pos < res_r; pos += res_w)
+ {
+ res_w = dst->WriteToCache(data.get()+pos, res_r - pos);
+ if(res_w <= 0)
+ return false;
+ }
+ }
+ return true;
+}
+
void CFileCache::Process()
{
if (!m_pCache) {
@@ -181,6 +214,7 @@ void CFileCache::Process()
m_bStop = true;
int iTotalWrite=0;
+ unsigned iTimeBlocked = 0;
while (!m_bStop && (iTotalWrite < iRead))
{
int iWrite = 0;
@@ -195,9 +229,40 @@ void CFileCache::Process()
break;
}
else if (iWrite == 0)
- m_pCache->m_space.WaitMSec(5);
-
- iTotalWrite += iWrite;
+ {
+ if(!m_pCache->m_space.WaitMSec(5))
+ {
+ if(iTimeBlocked == 0)
+ iTimeBlocked = CTimeUtils::GetTimeMS();
+
+ if(iTimeBlocked + WRITE_CACHE_UPGRADE_TIMEOUT < CTimeUtils::GetTimeMS() && typeid(*m_pCache) != typeid(CSimpleFileCache) )
+ {
+ iTimeBlocked = 0;
+
+ CSingleLock lock(m_sync);
+ CCacheStrategy *cache = new CSimpleFileCache();
+ if(CopyCacheStrategy(cache, m_pCache))
+ {
+ CLog::Log(LOGDEBUG, "CFileCache::Process - dumped memory cache to file");
+ delete m_pCache;
+ m_pCache = cache;
+ }
+ else
+ {
+ CLog::Log(LOGERROR, "CFileCache::Process - failed to dump memory cache to file");
+ delete cache;
+ }
+ /* restore correct position */
+ if(m_pCache->Seek(m_readPos) < 0)
+ CLog::Log(LOGERROR, "CFileCache::Process - failed to restore file position");
+ }
+ }
+ }
+ else
+ {
+ iTimeBlocked = 0;
+ iTotalWrite += iWrite;
+ }
// check if seek was asked. otherwise if cache is full we'll freeze.
if (m_seekEvent.WaitMSec(0))