qBittorrent blocked for several minutes shutting down while it writes out thousands of .fastresume files #7565

Open
CyrusNajmabadi opened this Issue Oct 10, 2017 · 30 comments


CyrusNajmabadi commented Oct 10, 2017

I have >10k torrents in qBittorrent, with roughly 99%+ of them completed and seeding. Closing the app can easily take several minutes while my C: drive is completely saturated with qBittorrent writing out what looks to be a .fastresume file for every torrent. This is an NVMe drive, but even so, that many individual writes takes up a ton of time.

Furthermore, it looks like the app does this writing every few minutes as well. I was able to improve that by increasing the "save resume data interval", but I'm not sure if that may have an ill effect (for example, if qBittorrent crashes).

  1. It's not clear to me why any sort of data needs to be saved for seeded, unchanging torrents. Can we eschew writing files in that case?
  2. It seems like it would be good to avoid writing out this information as one file per torrent. Even an extremely fast NVMe drive is choking on that many distinct writes. Can a simple SQLite DB be used instead? I have personal experience introducing such a system to solve precisely this sort of issue. It's pretty simple and pain-free, and could improve things greatly.

Thanks!


CyrusNajmabadi commented Oct 10, 2017

Looks to potentially be a dupe of #5374.

It would be nice to have a feature to tell qBittorrent not to perform these writes for completed torrents.


Chocobo1 commented Oct 10, 2017

Closing the app can easily take several minutes while my C: drive is completely saturated with qBittorrent writing out what looks to be a .fastresume file for every torrent.

Just to clarify: from the moment you quit qBittorrent, your drive is saturated with file writes for minutes, correct?
Just noticed the title.

There was this SQLite idea some time ago, but it was vetoed for some reason I can't remember.
Also tagging @qbittorrent/demigods @sledgehammer999 for attention.
Are there any downsides to using SQLite in qbt?


sledgehammer999 commented Oct 10, 2017

I think the first question for this report, based on:

I have >10k torrents in qBittorrent, with roughly 99%+ of them completed and seeding

is: why does it need to write fastresume for these kinds of torrents? Is torrent_handle::need_save_resume_data() bugged?

There was this SQLite idea some time ago, but it was vetoed for some reason I can't remember.

If we're ever going to switch to a monolithic file, we could use a bencoded dictionary to store each fastresume: no extra dependency and no full-blown DB.
But this has downsides too. If only 2 fastresumes out of 1k need writing, you will touch the whole file again and again, risking corruption of ALL torrents.
IMO, fixing the true bug is better.
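The monolithic bencoded-dictionary idea can be sketched minimally (Python here purely for illustration; qBittorrent itself is C++), and the sketch also makes the downside concrete: updating any single entry means re-encoding and rewriting the whole file.

```python
# Minimal bencoder, enough to serialize a dict of fastresume blobs.
def bencode(obj):
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        # Bencoded dicts require byte-string keys in sorted order.
        return (b"d"
                + b"".join(bencode(k) + bencode(v) for k, v in sorted(obj.items()))
                + b"e")
    raise TypeError("unsupported type: %r" % type(obj))

# One entry per torrent, keyed by an (illustrative) info-hash.
store = {
    b"infohash-aaaa": b"<fastresume blob 1>",
    b"infohash-bbbb": b"<fastresume blob 2>",
}

# The downside in one line: changing either entry re-serializes everything.
whole_file = bencode(store)
```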


sledgehammer999 commented Oct 10, 2017

@CyrusNajmabadi I forgot to ask. Which qbittorrent version and OS do you use?


glassez commented Oct 10, 2017

If only 2 fastresumes out of 1k need writing, you will touch the whole file again and again, risking corruption of ALL torrents

You're confusing a database with a simple file. The DB engine does not completely overwrite the entire file; moreover, it has a number of other protective mechanisms. SQLite has ACID transactions, so this gives even greater protection from inconsistency when we need to store both fastresume data and torrent info at once.
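The transaction point can be illustrated with a quick sketch using Python's bundled sqlite3 module (hypothetical table names, not qBittorrent code): an update interrupted mid-flight is rolled back, and previously committed data survives intact.

```python
import sqlite3

# In-memory DB for illustration; a real store would be a file on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resume (id TEXT PRIMARY KEY, data BLOB)")
conn.execute("INSERT INTO resume VALUES ('t1', x'00')")
conn.commit()

try:
    # The connection context manager wraps a transaction:
    # commit on success, rollback if an exception escapes the block.
    with conn:
        conn.execute("UPDATE resume SET data = x'ff' WHERE id = 't1'")
        raise RuntimeError("simulated crash mid-update")
except RuntimeError:
    pass

# The half-done update was rolled back; committed data is unchanged.
restored = conn.execute("SELECT data FROM resume WHERE id = 't1'").fetchone()[0]
```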


glassez commented Oct 10, 2017

Why does it need to write fastresume for these kinds of torrents?

AFAIK, there's some kind of statistics update even for seeding torrents.


CyrusNajmabadi commented Oct 10, 2017

I'm on 3.3.16 on Win10 Pro Creators Update.


CyrusNajmabadi commented Oct 10, 2017

AFAIK, some kind of statistics update even for seeding torrents.

Could we have an option to disable that? I don't care about statistics like this in the client. Thanks!


CyrusNajmabadi commented Oct 10, 2017

If we're ever going to switch to a monolithic file, we could use a bencoded dictionary to store each fastresume: no extra dependency and no full-blown DB.
But this has downsides too. If only 2 fastresumes out of 1k need writing, you will touch the whole file again and again, risking corruption of ALL torrents.

I don't think this is an accurate comparison of the options. You are correct that going with a single file you manage yourself is extremely fragile and could lead to pretty significant problems. I would definitely not recommend doing that. There are tons of issues that can arise when you go that route, and doing something basic like "write these contents out reliably" is actually surprisingly difficult.**

However, that's not the situation with a DB. Especially not a DB like SQLite, which has been battle-hardened over billions (or more) of hours in the field. With a DB you actually vastly reduce your chance of issues, even compared to the existing "write data per torrent to a single file" approach you have today. The DB provides ACID guarantees that ensure that what you have is what you want, even in the most pathological cases. The only downside would be if you had true FS corruption (like disk errors) in the sectors the DB is located on.

Tons of projects use DBs in general (and SQLite in particular) to provide a faster, more reliable (and conceptually simpler) mechanism for saving data like this.

--

Given you already have a model that maps from some sort of torrent id to a file with data, the translation to SQL/SQLite would be trivial. Just have a single table that is keyed in a similar manner and maps to the bytes you normally store in the file. If you wanted, you could get fancier and have actual columns so you could query for specific data you needed. But going the simple route would mean a drop-in replacement for what you're doing now.

Then, when you needed to write out thousands of pieces of information, you could just start a single transaction, write all your records, and commit. This boils down to orders of magnitude fewer writes than what you're doing currently. If you're concerned about something going bad (though that would be extremely strange), you could just batch at some size (like 100 torrents) and still cut writes way down.

--

** Even the "write to a temp file, flush to disk, then move the temp file to the final location" approach isn't guaranteed to always leave you in a good state. On many FSs, the metadata write of the "move the temp file" step may not be atomic, and the move can appear to happen without all the right information being written about where that file's data actually lives. That's why DBs are really good for this sort of thing. They have spent literally thousands of times more effort actually preserving ACID semantics.
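The single-table-plus-one-transaction idea described above could look roughly like this (Python's sqlite3 for illustration; the schema and names are made up, not qBittorrent's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # real code would point at a file
conn.execute(
    "CREATE TABLE IF NOT EXISTS fastresume ("
    "  info_hash TEXT PRIMARY KEY,"
    "  data      BLOB NOT NULL)"
)

def save_all(conn, records):
    """Persist many (info_hash, blob) pairs inside ONE transaction."""
    with conn:  # single BEGIN ... COMMIT instead of one write per torrent
        conn.executemany(
            "INSERT OR REPLACE INTO fastresume (info_hash, data) VALUES (?, ?)",
            records,
        )

# 10k torrents, one commit.
save_all(conn, [("hash%05d" % i, b"resume-data") for i in range(10000)])
```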


CyrusNajmabadi commented Oct 10, 2017

Note: I don't want to stir up a hornet's nest. If the project really doesn't want to use a DB, that's your call. :) However, regardless of the DB choice, it would be nice to address the really poor scaling performance, either by:

  1. Not writing fastresume data for completed files, ever.
  2. Having an option to opt out of this behavior.
  3. Some other mechanism I haven't thought of :)

I love qBittorrent, but this part is definitely a PITA.

Also, I would be happy to try contributing a PR to supply '1' or '2' if people think that would be OK. Let me know, and I'll see if I can figure out how to get properly enlisted and building on Ubuntu. If I go down the '2' route, having pointers to where the advanced options code is, as well as where we should check this, would be great. Thanks!


CyrusNajmabadi commented Oct 10, 2017

OK, I'm trying to get things building on Ubuntu using the instructions here: https://github.com/qbittorrent/qBittorrent/wiki/Compiling-qBittorrent-on-Debian-and-Ubuntu

However, when I get to this step: ./configure --prefix=/usr I end up bottoming out with:

configure: error: Package requirements (Qt5Svg >= 5.5.1) were not met:

No package 'Qt5Svg' found

Any suggestions on how to address this?


CyrusNajmabadi commented Oct 11, 2017

I'm also just trying to build without the GUI. This gets further but fails at:

linking qbittorrent-nox
filterparserthread.o: In function `(anonymous namespace)::parseIPAddress(char const*, boost::asio::ip::address&) [clone .constprop.105]':
/home/cyrus/go/src/github.com/qBittorrent/qBittorrent/src/base/bittorrent/private/filterparserthread.cpp:97:
 undefined reference to `boost::asio::ip::address_v4::address_v4(std::array<unsigned char, 4ul> const&)'

Any ideas?


sledgehammer999 commented Oct 11, 2017

@ALL: the disadvantage I mentioned applied to my proposal of a monolithic bencoded file, not to the DB.

I don't know what I said in the past, but now I can't find any technical reason to prefer the current way over SQLite. Only 2 things come to mind, and they aren't technical:

  • Increased binary size. -> Negligible; the increase should be below 1MiB IIRC.
  • Loss of ease of debugging or repairing.
    • There have been bugs here where we discovered an underlying libtorrent bug after examining a user-provided fastresume. In these kinds of situations I don't know if users would like to share their whole fastresume DB.
    • Even if users send us their DB, it will be harder to extract data and examine it. -> This can be solved by us writing a small tool that exports all entries to separate fastresumes.
    • Occasionally there's a malformed torrent out there which causes a crash upon start in libtorrent. A user just goes and deletes torrent+fastresume until it works.

But first, IMO, we should investigate how we can limit the fastresume saving for completed torrents.
Maybe an option like @CyrusNajmabadi suggested?

Seeing your PR, I assume you fixed the compilation issues, right?


CyrusNajmabadi commented Oct 11, 2017

Seeing your PR, I assume you fixed the compilation issues, right?

I have not fixed the compilation issues on Linux (yet). I was able to make the code change and compile it, but I'm failing at the linking stage. I opened the PR (untested) to get some eyes on it and make sure I wasn't doing something abysmally dumb :)

I'm trying to get compilation working on Windows, but there are a lot of steps to go through. Right now I'm up to the Qt Creator step, but I won't be able to work on this until tomorrow.


CyrusNajmabadi commented Oct 11, 2017

Note: I've looked into what it would take to move to SQLite. I would want to refactor the code first, though. Currently, the code that reads/writes fastresume files is not isolated in one single location. For example, there is ResumeDataSavingManager, which is used for actually writing out the data. However, there is also code in session.cpp:

  1. that manually walks the folder to delete the fastresume file when a torrent is deleted.
  2. that copies the fastresume file to a new location when exporting a torrent.
  3. that reads in all the data on startup.
  4. that saves the initial data when a torrent is being created.

I would want to refactor things so that the actual reading/writing of data was pushed into ResumeDataSavingManager. That way, session.cpp would have no knowledge of the underlying storage mechanism.

Once this happened, it would be fairly simple to change the implementation to not sit on top of the FS and instead read/write directly from a SQLite DB.

Finally, we could optimize certain things by exposing bulk operations from the manager. That way, if we wanted to save a ton of data, we could batch it up and perform fewer SQL calls.

--

From a design perspective, none of this seems hard. The major issue would be the work needed so that qBittorrent could take a dependency on SQLite. Given how much pain I've had just trying to get the project compiling on either Linux or Windows, I'm somewhat dreading the idea of making it take an additional dependency.

--

And, of course, if you guys don't even want this change, that's also fine. I get the simplicity of the FS-as-storage approach. I just wanted to give my thoughts in case opinions on this change in the future. Thanks!
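The refactoring described above could be sketched roughly like this (Python for brevity, and every name besides ResumeDataSavingManager's role is hypothetical): session code depends only on an abstract store, so swapping the per-file backend for a SQLite one would touch a single class.

```python
from abc import ABC, abstractmethod

class ResumeDataStore(ABC):
    """Storage-agnostic interface the session code would talk to."""

    @abstractmethod
    def load_all(self):
        """Return {torrent_id: blob} for every stored torrent (startup)."""

    @abstractmethod
    def save(self, torrent_id, data):
        """Persist one torrent's fastresume blob."""

    @abstractmethod
    def remove(self, torrent_id):
        """Drop a torrent's entry (torrent deleted)."""

class InMemoryStore(ResumeDataStore):
    """Stand-in backend; real ones would wrap per-torrent files or SQLite."""

    def __init__(self):
        self._blobs = {}

    def load_all(self):
        return dict(self._blobs)

    def save(self, torrent_id, data):
        self._blobs[torrent_id] = data

    def remove(self, torrent_id):
        # Removing an unknown id is a no-op, mirroring "file already gone".
        self._blobs.pop(torrent_id, None)
```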


sledgehammer999 commented Jan 3, 2018

@glassez @Chocobo1
I finally found this report. About SQLite: I started considering this again during the holidays and thought that it might not be such a bad idea after all.
Then I started poking around the docs, and I found these:

SQLite Qt docs:

QSQLITE File Format Compatibility

SQLite minor releases sometimes break file format forward compatibility. For example, SQLite 3.3 can read database files created with SQLite 3.2, but databases created with SQLite 3.3 cannot be read by SQLite 3.2. Please refer to the SQLite documentation and change logs for information about file format compatibility between versions.

And from SQLite itself:

In other words, since 2004 all SQLite releases have been backwards compatible, though not necessarily forwards compatible.

Correct me if I am overreacting, but isn't the above bad for us?
What if a user returns to an older version of qbt, which uses an older DB format?
Admittedly, we have done this too in the past, but we warned the user about the incompatibility and gave them a choice to abort.
I don't think we can do this programmatically with SQLite. (I admit I haven't read their API docs regarding this.)


CyrusNajmabadi commented Jan 4, 2018

If you're concerned about this, then I would recommend not using Qt's SQLite and instead linking against the specific version of SQLite you want. Then you should have no issues here.


CyrusNajmabadi commented Jan 4, 2018

Also note: what SQLite is describing is the situation with all databases since inception. I.e., if you have an SQLite version from before 2004, you may not be able to read current SQLite databases, but current SQLite can read all databases ever made.

Note that SQLite has been 100% compatible here for the past 14 years, and it's highly unlikely they'd break anything. They've got billions of DBs out there, and preserving compatibility and not destabilizing users is a top goal for them.


CyrusNajmabadi commented Jan 4, 2018

Note: SQLite also added support for write-ahead logging (WAL) back in 2010. If you started using that, you could end up with a DB that would not be readable by old libraries. HOWEVER, this would only happen if you (the developer) explicitly told the SQLite library to use WAL. If you did not, it would continue to create DBs using the previous format, which would still be readable by older apps.
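That opt-in behavior can be checked with Python's bundled sqlite3 module (a quick illustration, not qBittorrent code): a freshly created file database stays in the legacy rollback-journal mode until the application explicitly asks for WAL.

```python
import os
import sqlite3
import tempfile

# A fresh file-backed database defaults to the legacy rollback journal,
# the format readable by old SQLite releases.
path = os.path.join(tempfile.mkdtemp(), "resume.db")
conn = sqlite3.connect(path)
default_mode = conn.execute("PRAGMA journal_mode").fetchone()[0]  # "delete"

# WAL only takes effect when the developer explicitly opts in:
wal_mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # "wal"
```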


glassez commented Jan 4, 2018

Correct me if I am overreacting, but isn't the above bad for us?
What if a user returns to an older version of qbt, which uses an older db format?
Admittedly in the past we have done this too. But we warned the user about this incompatibility and gave him a choice to abort.

@sledgehammer999, from time to time I return to this issue, trying to figure out how to implement it in an efficient way (i.e. how to use its advantages while mitigating its inconveniences). Later I'm going to summarize my reasoning. Besides, I have an idea to introduce it first in a less important subsystem (e.g. in RSS) and polish it there. How does that sound?


glassez commented Jan 5, 2018

Also, as you can see in http://www.sqlite.org/compile.html#default_file_format, we have forward compatibility since v3.7.10 (IIRC).

sledgehammer999 (Contributor) commented Jan 17, 2018

I had some thoughts. It is my (limited) understanding that any new features that break forward compatibility aren't applied by default; the developer needs to enable them explicitly. So in this case we're probably safe to proceed.

> Besides, I have an idea to introduce it first in a less important subsystem (e.g. in RSS) and polish it there. What do you think?

Go ahead. That would be a great starting point, IMO.
Also, consider the possibility of using the sqlite API directly instead of relying on the Qt wrappers. Side note: could the Qt wrappers make use of new features that break forward compatibility, without allowing us to control it?

> Also, as you can see at http://www.sqlite.org/compile.html#default_file_format, we have had forward compatibility since v3.7.10 (IIRC).

That's great.

glassez (Member) commented Jan 18, 2018

> Also, consider the possibility of using the sqlite API directly instead of relying on the Qt wrappers. Side note: could the Qt wrappers make use of new features that break forward compatibility, without allowing us to control it?

IIRC, SQLite features are compile-time options. I don't think the wrappers will create inconveniences for us, but we would have to build a custom SQLite plugin to control the library's features.
In any case, we need to start with something.

sledgehammer999 (Contributor) commented Jan 18, 2018

> But we would have to build a custom SQLite plugin to control the library's features.

Which plugin do you mean? The one from Qt? By default it is included, but my builds don't compile it, to keep size down. If we decide to use the Qt sqlite plugin, I'll rebuild Qt and upload it to Travis too.

glassez (Member) commented Jan 18, 2018

> Which plugin do you mean? The one from Qt?

Yes.

> By default it is included, but my builds don't compile it, to keep size down.

The default plugin uses the SQLite library shipped with Qt. To use our own SQLite build, we need to compile the plugin manually and link it against the chosen sqlite binary.

sledgehammer999 (Contributor) commented Jan 18, 2018

> The default plugin uses the SQLite library shipped with Qt. To use our own SQLite build, we need to compile the plugin manually and link it against the chosen sqlite binary.

We can't always rely on "our own" or Qt's sqlite. On Windows and macOS, where I control the releases, we can expect to have our own sqlite build. But on Linux/BSD I am pretty sure Qt is configured to use the system's sqlite.

glassez (Member) commented Jan 18, 2018

> But on Linux/BSD I am pretty sure Qt is configured to use the system's sqlite.

IMO, there is no problem on systems where qBittorrent uses the system Qt/SQLite, since those libraries follow their own update cycles and downgrading qBittorrent doesn't downgrade SQLite.
We only need to take care of the shipped libraries (i.e. the official Windows and macOS releases).
In any case, we can start with the default Qt plugin.

sledgehammer999 (Contributor) commented Jan 18, 2018

> IIRC, SQLite features are compile-time options. I don't think the wrappers will create inconveniences for us.

For example, WAL, which created an incompatibility in the past, is enabled like this. So it depends on what we (or the plugin/wrapper) do with the new feature.
I am also curious whether the Qt plugin will stick to the default API values or enable new features by itself, thus rendering our files incompatible with older versions. (I am talking about the future, of course.)
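The WAL incompatibility mentioned here is opt-in: a database stays in the backwards-compatible rollback-journal format until something explicitly switches it. A minimal demonstration (Python's stdlib `sqlite3`, used purely for illustration; the temp path is incidental):

```python
import os
import sqlite3
import tempfile

# WAL is a per-database, opt-in setting: a freshly created database uses
# the old rollback-journal format until PRAGMA journal_mode=WAL is issued.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

(default_mode,) = conn.execute("PRAGMA journal_mode").fetchone()
print(default_mode)  # 'delete' -- the backwards-compatible default

# Explicitly opting in flips the database file to the WAL format,
# which SQLite versions before 3.7.0 cannot read.
(new_mode,) = conn.execute("PRAGMA journal_mode=WAL").fetchone()
print(new_mode)  # 'wal'
conn.close()
```

This is why the worry is about what the wrapper does, not about merely linking a newer library: nothing changes on disk unless that pragma (or an equivalent) is actually executed.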

glassez (Member) commented Jan 18, 2018

> For example, WAL, which created an incompatibility in the past, is enabled like this. So it depends on what we (or the plugin/wrapper) do with the new feature.

I don't think the Qt wrappers use such features behind the scenes. As for us, we should first declare a policy for these features (IMO, we don't need any advanced features at all).

> I am also curious whether the Qt plugin will stick to the default API values or enable new features by itself, thus rendering our files incompatible with older versions. (I am talking about the future, of course.)

Besides, I've now realized that SQLite's compile-time options only set the default values; they can be modified with PRAGMA statements at runtime.
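That compile-time/runtime relationship is easy to observe: `PRAGMA compile_options` lists what was baked into the linked library, while pragmas such as `synchronous` expose a default that any connection can override at runtime. A small sketch (stdlib `sqlite3`, for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Compile-time options baked into the linked SQLite library.
opts = [row[0] for row in conn.execute("PRAGMA compile_options")]
print(len(opts))  # varies by build

# 'synchronous' has a compile-time default (normally 2, i.e. FULL)...
(before,) = conn.execute("PRAGMA synchronous").fetchone()
# ...but it is only a default: it can be overridden per connection at runtime.
conn.execute("PRAGMA synchronous = NORMAL")
(after,) = conn.execute("PRAGMA synchronous").fetchone()
print(after)  # 1, i.e. NORMAL
```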

sledgehammer999 (Contributor) commented Jan 18, 2018

> As for us, we should first declare a policy for these features

IMO, our policy should be: "We don't use any new features/values implemented after qbt's first usage of sqlite3."

> (IMO, we don't need any advanced features at all.)

I agree. All we want is to store and retrieve some small-sized data reliably, without GUI hangs. I don't think we will face I/O bottlenecks the way big websites do.
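To make the "store small data reliably, without hanging the GUI" goal concrete: this is a hypothetical sketch (not qBittorrent code; table and function names are invented) of a single-table store for fastresume blobs, where thousands of updates become one transaction — one journal commit and sync — instead of thousands of individual file writes, which is exactly the pattern this issue is about:

```python
import sqlite3

def open_store(path):
    """Create/open a minimal key-value store for resume data."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS fastresume (
            info_hash TEXT PRIMARY KEY,  -- torrent identifier
            data      BLOB NOT NULL      -- raw bencoded resume data
        )""")
    return conn

def save_all(conn, resume_data):
    # One transaction: a single commit for the whole batch, instead of
    # one file write (and flush) per torrent.
    with conn:  # commits on success, rolls back on error
        conn.executemany(
            "INSERT OR REPLACE INTO fastresume (info_hash, data) VALUES (?, ?)",
            resume_data.items())

conn = open_store(":memory:")
save_all(conn, {f"hash{i:05d}": b"\x00" * 64 for i in range(10_000)})
(count,) = conn.execute("SELECT COUNT(*) FROM fastresume").fetchone()
print(count)  # 10000
```

Batching is where the win over per-torrent `.fastresume` files comes from: the per-write overhead (open, write, flush, close, directory update) is paid once per batch rather than once per torrent.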
