Commit

Version 315

hydrusnetwork committed Jul 18, 2018
1 parent 804ffe8 commit 0646dd1
Showing 42 changed files with 3,230 additions and 1,487 deletions.
2 changes: 1 addition & 1 deletion client.py
@@ -135,6 +135,6 @@
f.write( traceback.format_exc() )


print( 'Critical error occured! Details written to crash.log!' )
print( 'Critical error occurred! Details written to crash.log!' )


2 changes: 1 addition & 1 deletion client.pyw
@@ -158,6 +158,6 @@ except Exception as e:
f.write( traceback.format_exc() )


print( 'Critical error occured! Details written to crash.log!' )
print( 'Critical error occurred! Details written to crash.log!' )


37 changes: 36 additions & 1 deletion help/changelog.html
@@ -8,6 +8,41 @@
<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 315</h3></li>
<ul>
<li>got started on the big gallery update, but decided not to pull the trigger just yet. I hope to do it next week, switching the whole thing over to a two-object multi-watcher kind of deal</li>
<li>updated to wxPython 4.0.3 for all platforms</li>
<li>cleaned up some menubar replacement code, and the update to the new wxPython should also fix an "event for a menu without associated window" bug some gtk2 users were seeing on quick menubar changes</li>
<li>manage default tag import options panel now has copy/paste buttons that work on the listctrl</li>
<li>added some 'paste tag import options' safety code to make sure no one accidentally pastes a subscription or something in there, wew</li>
<li>added default checker options for subscriptions to options->downloading</li>
<li>unified how checker options are edited from their button, much like how file and tag import options work. it also has a summary tooltip on the button</li>
<li>the checker options under options->downloading are now these slimmer buttons</li>
<li>in the manual import dialog (which pops up when you drop a folder/files on the client), the files will now be added in 'human friendly' number sorting, so files of the sort 'Favourites - 10.jpg' will sort [10, 11, ..., 99, 100] rather than the purely lexicographic [10, 100, 11, ..., 99]</li>
<li>gave the migrate database dialog a pass--a bunch of misc presentation changes and a general simplification of workflow, now based more on just increase/decrease location weight</li>
<li>a bunch of texts on page management (left-hand) panels that share horizontal space with buttons should now ellipsize ("downlo...") when they get too long for the width instead of drawing in an ugly way over the buttonspace</li>
<li>moved the manage import folders dialog to the new listctrl and added a 'paused' and better 'check period' column</li>
<li>if a user tries to run a 'paused' import folder specifically from the menu, the import folder will now unpause (I will probably remove this old paused variable in the future--it isn't of much use any more)</li>
<li>tightened up some repository reset code that wasn't deleting all service tables and hence recovering from some service id malformation errors correctly</li>
<li>wrote a 'clear orphan tables' db maintenance routine that kills some spare tables some users who have previously deleted/reset repositories may have floating around</li>
<li>fixed an issue with parsing folders after hitting cancel button on the import files pre-dialog</li>
<li>if watchers encounter non-404 network errors during check, they should now just delay checking for four hours (before, they were also pausing checking completely)</li>
<li>if watchers are in 'delay' mode, they'll also not work on files.</li>
<li>file and gallery downloads that hit a 403 (Forbidden) will now present a simpler error status, like they do for 404</li>
<li>the new post downloader will no longer fail if one of the parsed source urls is not a url. the borked string will also not be associated as a url</li>
<li>regular gallery downloads now override bandwidth for the file download step, which is almost always the second half of a pair of post_url/file downloads, just to keep things in sync in edge cases</li>
<li>cleaned up some timestamp generation and 'overriding in x seconds' strings to be more human friendly</li>
<li>improved some serverside file parse error handling to propagate the actual error description up to the client a bit better</li>
<li>fixed typo causing incorrect num_ignored count in file import status button right-click menu</li>
<li>parseexceptions will now present more data about which page and content parser caused the problem. I am not totally happy about how this solution works and may revisit it</li>
<li>the lz4 import error catching is now more broad to catch some odd problem I discovered in the new Linux build environment</li>
<li>the moebooru parser now fetches the original png of an image, if available</li>
<li>added a new tumblr parser that also gets post tags--it _shouldn't_ be the default</li>
<li>the new login pipeline now kicks in for the legacy logins--pixiv and hentai foundry--on a per-url basis, so adding pixiv/hf urls to the url downloader will trigger a login even if needed (previously, this was tied to legacy gallery initialisation, which explains some pixiv 'missing' login stuff some users and I were having trouble with)</li>
<li>if the legacy login system fails in the new pipeline, it now sets a flag and won't try again that client boot</li>
<li>the old 'default tag import options' panel is now completely removed from options->importing. please check 'network->downloaders->manage default tag import options' for the new url-based settings</li>
<li>misc fixes</li>
</ul>
<li><h3>version 314</h3></li>
<ul>
<li>tag import options can now be set to 'default', meaning 'use whatever the default is at the time of import', which will be an easier way of managing TIOs for many subs that you'd prefer all share the same TIO settings anyway</li>
@@ -37,7 +72,7 @@ <h3>changelog</h3>
<li>added a 'show the D on short file import summaries' checkbox to options->downloading--it defaults to off</li>
<li>the 'I' on short file import summaries is now 'Ig' to clear up 1/I confusion</li>
<li>added 'copy queries' to the edit subscription panel, which lets you copy all the selected queries' search texts to clipboard, newline separated</li>
<li>added a checbox to options->gui that commands 'last session' only be autosaved during idle time. this is useful if you usually have a huge (200k+ file) session and your client is always on</li>
<li>added a checkbox to options->gui that commands 'last session' only be autosaved during idle time. this is useful if you usually have a huge (200k+ file) session and your client is always on</li>
<li>fixed file import status button right-click, which I messed up somehow last week with the 'retry ignored' add</li>
<li>shook up and collapsed the network menu into neater categories</li>
<li>tightened-up the rarely used pre-parsing conversion panel on the edit page parser panel to just a button with a bit of explaining text</li>
2 changes: 1 addition & 1 deletion help/tagging_schema.html
@@ -51,7 +51,7 @@ <h3>acronyms and synonyms</h3>
<h3>character:anna (frozen)</h3>
<p>I am not fond of putting a series name after a character because it looks unusual and is applied unreliably. It is done to separate same-named characters from each other (particularly when they have no canon surname), which is useful in places that search slowly or usually only deal in single-tag searches. I would prefer that namespaces say their namespace and nothing else. Some sites even say things like 'anna (disney)'. I don't really mind this stuff, but if you are adding a sibling to collapse these divergent tags into the 'proper' one, I'd prefer it all went to the simple and reliable 'character:anna'. Even better would be migrating towards a canon-ok unique name, like 'character:princess anna of arendelle'.</p>
<p>Including nicknames, like 'character:angela "mercy" ziegler' can be useful to establish uniqueness, but are not mandatory. 'character:harleen "harley quinn" frances quinzel' is probably overboard.</p>
<h3>protip: reign in your spergitude</h3>
<h3>protip: rein in your spergitude</h3>
<p>In developing hydrus, I have discovered two rules to happy tagging:</p>
<ol>
<li>Don't try to be perfect.</li>
12 changes: 6 additions & 6 deletions include/ClientCaches.py
@@ -214,7 +214,7 @@ def __init__( self, controller ):

self._prefixes_to_locations = {}

self._bad_error_occured = False
self._bad_error_occurred = False
self._missing_locations = set()

self._Reinit()
@@ -625,7 +625,7 @@ def _Reinit( self ):

if len( self._missing_locations ) > 0:

self._bad_error_occured = True
self._bad_error_occurred = True

#

@@ -1055,9 +1055,9 @@ def GetFullSizeThumbnailPath( self, hash, mime = None ):

self._GenerateFullSizeThumbnail( hash, mime )

if not self._bad_error_occured:
if not self._bad_error_occurred:

self._bad_error_occured = True
self._bad_error_occurred = True

HydrusData.ShowText( 'A thumbnail for a file, ' + hash.encode( 'hex' ) + ', was missing. It has been regenerated from the original file, but this event could indicate hard drive corruption. Please check everything is ok. This error may be occuring for many files, but this message will only display once per boot. If you are recovering from a fractured database, you may wish to run \'database->regenerate->all thumbnails\'.' )

@@ -1093,9 +1093,9 @@ def Rebalance( self, job_key ):

try:

if self._bad_error_occured:
if self._bad_error_occurred:

wx.MessageBox( 'A serious file error has previously occured during this session, so further file moving will not be reattempted. Please restart the client before trying again.' )
wx.MessageBox( 'A serious file error has previously occurred during this session, so further file moving will not be reattempted. Please restart the client before trying again.' )

return

2 changes: 1 addition & 1 deletion include/ClientController.py
@@ -1254,7 +1254,7 @@ def THREADBootEverything( self ):

except Exception as e:

text = 'A serious error occured while trying to start the program. The error will be shown next in a window. More information may have been written to client.log.'
text = 'A serious error occurred while trying to start the program. The error will be shown next in a window. More information may have been written to client.log.'

HydrusData.DebugPrint( 'If the db crashed, another error may be written just above ^.' )
HydrusData.DebugPrint( text )
94 changes: 90 additions & 4 deletions include/ClientDB.py
@@ -2756,6 +2756,49 @@ def _ClearOrphanFileRecords( self ):



def _ClearOrphanTables( self ):

service_ids = self._STL( self._c.execute( 'SELECT service_id FROM services;' ) )

table_prefixes = []

table_prefixes.append( 'repository_hash_id_map_' )
table_prefixes.append( 'repository_tag_id_map_' )
table_prefixes.append( 'repository_updates_' )

good_table_names = set()

for service_id in service_ids:

suffix = str( service_id )

for table_prefix in table_prefixes:

good_table_names.add( table_prefix + suffix )



existing_table_names = set()

existing_table_names.update( self._STS( self._c.execute( 'SELECT name FROM sqlite_master WHERE type = ?;', ( 'table', ) ) ) )
existing_table_names.update( self._STS( self._c.execute( 'SELECT name FROM external_master.sqlite_master WHERE type = ?;', ( 'table', ) ) ) )

existing_table_names = { name for name in existing_table_names if True in ( name.startswith( table_prefix ) for table_prefix in table_prefixes ) }

surplus_table_names = existing_table_names.difference( good_table_names )

surplus_table_names = list( surplus_table_names )

surplus_table_names.sort()

for table_name in surplus_table_names:

HydrusData.ShowText( 'Dropping ' + table_name )

self._c.execute( 'DROP table ' + table_name + ';' )
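The routine above boils down to comparing the per-service table names a live service should own against what `sqlite_master` actually reports, then dropping the difference. A minimal, self-contained sketch of that idea in plain sqlite3 (the prefixes come from this diff; `find_orphan_tables` and the setup are illustrative, not hydrus code):

```python
# Illustrative sketch of the orphan-table detection in _ClearOrphanTables,
# using plain sqlite3. The prefixes match the diff; find_orphan_tables is a
# hypothetical helper, not part of hydrus.
import sqlite3

def find_orphan_tables( conn, live_service_ids ):
    
    prefixes = ( 'repository_hash_id_map_', 'repository_tag_id_map_', 'repository_updates_' )
    
    # every table a live service should own
    good = { prefix + str( service_id ) for service_id in live_service_ids for prefix in prefixes }
    
    # every repository table that actually exists
    existing = { row[0] for row in conn.execute( "SELECT name FROM sqlite_master WHERE type = 'table';" ) if row[0].startswith( prefixes ) }
    
    return sorted( existing - good )

conn = sqlite3.connect( ':memory:' )

conn.execute( 'CREATE TABLE repository_updates_3 ( x INTEGER );' )
conn.execute( 'CREATE TABLE repository_updates_7 ( x INTEGER );' )

print( find_orphan_tables( conn, [ 3 ] ) )  # ['repository_updates_7']
```

Sorting the surplus names before dropping them, as the real routine does, just makes the `HydrusData.ShowText` progress output deterministic.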



def _CreateDB( self ):

client_files_default = os.path.join( self._db_dir, 'client_files' )
@@ -3192,6 +3235,18 @@ def _DeleteService( self, service_id ):

self._c.execute( 'DELETE FROM remote_thumbnails WHERE service_id = ?;', ( service_id, ) )

if service_type in HC.REPOSITORIES:

repository_updates_table_name = GenerateRepositoryRepositoryUpdatesTableName( service_id )

self._c.execute( 'DROP TABLE ' + repository_updates_table_name + ';' )

( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterCacheTableNames( service_id )

self._c.execute( 'DROP TABLE ' + hash_id_map_table_name + ';' )
self._c.execute( 'DROP TABLE ' + tag_id_map_table_name + ';' )


if service_type in HC.TAG_SERVICES:

( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
@@ -5309,7 +5364,7 @@ def _GetHashIdStatus( self, hash_id, prefix = '' ):

( timestamp, ) = result

note = 'Currently in trash. Sent there at ' + HydrusData.ConvertTimestampToPrettyTime( timestamp ) + ', which was ' + HydrusData.TimestampToPrettyTimeDelta( timestamp ) + ' (before this check).'
note = 'Currently in trash. Sent there at ' + HydrusData.ConvertTimestampToPrettyTime( timestamp ) + ', which was ' + HydrusData.TimestampToPrettyTimeDelta( timestamp, just_now_threshold = 0 ) + ' (before this check).'

return ( CC.STATUS_DELETED, hash, prefix + ': ' + note )

@@ -5320,7 +5375,7 @@ def _GetHashIdStatus( self, hash_id, prefix = '' ):

( timestamp, ) = result

note = 'Imported at ' + HydrusData.ConvertTimestampToPrettyTime( timestamp ) + ', which was ' + HydrusData.TimestampToPrettyTimeDelta( timestamp ) + ' (before this check).'
note = 'Imported at ' + HydrusData.ConvertTimestampToPrettyTime( timestamp ) + ', which was ' + HydrusData.TimestampToPrettyTimeDelta( timestamp, just_now_threshold = 0 ) + ' (before this check).'

return ( CC.STATUS_SUCCESSFUL_BUT_REDUNDANT, hash, prefix + ': ' + note )

@@ -6739,7 +6794,7 @@ def _GetTrashHashes( self, limit = None, minimum_age = None ):

if minimum_age is not None:

message += ' with minimum age ' + HydrusData.TimestampToPrettyTimeDelta( timestamp_cutoff ) + ','
message += ' with minimum age ' + HydrusData.TimestampToPrettyTimeDelta( timestamp_cutoff, just_now_threshold = 0 ) + ','


message += ' I found ' + HydrusData.ToHumanInt( len( hash_ids ) ) + '.'
@@ -7125,7 +7180,7 @@ def _LoadIntoDiskCache( self, stop_time = None, caller_limit = None ):

if HydrusData.TimeHasPassed( next_stop_time_presentation ):

HG.client_controller.pub( 'splash_set_status_subtext', 'cached ' + HydrusData.TimestampToPrettyTimeDelta( stop_time ) )
HG.client_controller.pub( 'splash_set_status_subtext', 'cached ' + HydrusData.TimestampToPrettyTimeDelta( stop_time, just_now_string = 'ok', just_now_threshold = 1 ) )

if HydrusData.TimeHasPassed( stop_time ):

@@ -10585,6 +10640,36 @@ def get_url_id( url ):
self.pub_initial_message( message )


if version == 314:

try:

domain_manager = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )

domain_manager.Initialise()

#

domain_manager.OverwriteDefaultParsers( ( 'moebooru file page parser', 'tumblr api post page parser - with post tags' ) )

#

domain_manager.TryToLinkURLMatchesAndParsers()

#

self._SetJSONDump( domain_manager )

except Exception as e:

HydrusData.PrintException( e )

message = 'Trying to update some url classes and parsers failed! Please let hydrus dev know!'

self.pub_initial_message( message )



self._controller.pub( 'splash_set_title_text', 'updated db to v' + str( version + 1 ) )

self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
@@ -11150,6 +11235,7 @@ def _Write( self, action, *args, **kwargs ):
elif action == 'associate_repository_update_hashes': result = self._AssociateRepositoryUpdateHashes( *args, **kwargs )
elif action == 'backup': result = self._Backup( *args, **kwargs )
elif action == 'clear_orphan_file_records': result = self._ClearOrphanFileRecords( *args, **kwargs )
elif action == 'clear_orphan_tables': result = self._ClearOrphanTables( *args, **kwargs )
elif action == 'content_updates': result = self._ProcessContentUpdates( *args, **kwargs )
elif action == 'db_integrity': result = self._CheckDBIntegrity( *args, **kwargs )
elif action == 'delete_hydrus_session_key': result = self._DeleteHydrusSessionKey( *args, **kwargs )
2 changes: 1 addition & 1 deletion include/ClientData.py
@@ -410,7 +410,7 @@ def OrdIsNumber( o ):

def ReportShutdownException():

text = 'A serious error occured while trying to exit the program. Its traceback may be shown next. It should have also been written to client.log. You may need to quit the program from task manager.'
text = 'A serious error occurred while trying to exit the program. Its traceback may be shown next. It should have also been written to client.log. You may need to quit the program from task manager.'

HydrusData.DebugPrint( text )

5 changes: 5 additions & 0 deletions include/ClientDownloading.py
@@ -262,6 +262,11 @@ def _FetchData( self, url, referral_url = None, temp_path = None ):

network_job = self._network_job_factory( 'GET', url, referral_url = referral_url, temp_path = temp_path )

if temp_path is not None: # i.e. it is a file after a page fetch

network_job.OverrideBandwidth( 30 )


HG.client_controller.network_engine.AddJob( network_job )

try:
3 changes: 3 additions & 0 deletions include/ClientFiles.py
@@ -1,4 +1,5 @@
import gc
import HydrusData
import HydrusExceptions
import HydrusGlobals as HG
import os
@@ -35,5 +36,7 @@ def GetAllPaths( raw_paths ):
paths_to_process = next_paths_to_process


HydrusData.HumanTextSort( file_paths )

return file_paths
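The `HydrusData.HumanTextSort` call added above is what produces the 'human friendly' number ordering the changelog describes for dropped-in files. Its implementation isn't shown in this diff, but natural sorting is usually done like this (a hedged sketch, not hydrus's actual code):

```python
# Sketch of natural ('human friendly') sorting as the changelog describes;
# hydrus's real HumanTextSort may differ in detail.
import re

def human_sort_key( text ):
    
    # split into digit and non-digit runs, comparing the digit runs numerically
    return [ int( part ) if part.isdigit() else part.lower() for part in re.split( r'(\d+)', text ) ]

paths = [ 'Favourites - 100.jpg', 'Favourites - 11.jpg', 'Favourites - 10.jpg' ]

paths.sort( key = human_sort_key )

print( paths )  # ['Favourites - 10.jpg', 'Favourites - 11.jpg', 'Favourites - 100.jpg']
```

With a plain lexicographic sort, '100' would land between '10' and '11' because strings compare character by character; keying on the numeric runs gives the [10, 11, ..., 99, 100] order the changelog promises.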
