
[core] consider offline links as finished #4394

Open · wants to merge 1 commit into base: develop
Conversation

@mihawk90 (Contributor) commented Nov 15, 2023

Describe the changes

This change considers (permanently) offline links as finished. Although they are not technically finished downloading, they never will be, so there is no reason not to dispatch package_finished. This allows plugins and scripts to further process the files.

Note that this does not adjust the progress display in the queue, allowing users to check why their progress never finishes and decide what to do with the package or links in question.
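The intent can be illustrated with a small stand-in (hypothetical, not the actual patch, which changes pyLoad's database query; the numeric status codes 0 = finished and 1 = offline are assumed from how they are used later in this thread):

```python
# Hypothetical illustration of the patch's intent, not actual pyLoad code:
# permanently offline links simply stop counting as "unfinished".
FINISHED = 0  # assumed pyLoad status codes
OFFLINE = 1

def get_unfinished(links):
    """links: iterable of (file_id, status) pairs.

    Returns ids of links that can still make progress; offline links
    are treated like finished ones and excluded.
    """
    return [fid for fid, status in links if status not in (FINISHED, OFFLINE)]
```

Once the offline link no longer counts, the "package finished" check passes as soon as every other link is done.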

Is this related to a problem?

Fixes #4392

Additional references

Tested using links:

online: https://raw.githubusercontent.com/pyload/pyload/main/README.md
offline: https://ddownload.com/bjdx2facqp3r

Fun fact: I tested with https://raw.githubusercontent.com/pyload/pyload/main/README_NOT.md, and while that does return a 404 in the header, it's not considered offline by the DefaultPlugin, so it just downloads a text file containing the error message.

[2023-11-15 22:42:50]  INFO                pyload  Added package dev-finish-offline-links containing 2 links
127.0.0.1 - - [15/Nov/2023 22:42:50] "POST /json/add_package HTTP/1.1" 200 -
...
[2023-11-15 22:42:51]  INFO                pyload  Download starts: README.md
[2023-11-15 22:42:51]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `download_preparing`
[2023-11-15 22:42:51]  DEBUG               pyload  ADDON UserAgentSwitcher: Setting connection timeout to 60 seconds
[2023-11-15 22:42:51]  DEBUG               pyload  ADDON UserAgentSwitcher: Setting maximum redirections to 10
[2023-11-15 22:42:51]  DEBUG               pyload  ADDON UserAgentSwitcher: Use custom user-agent string `Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:87.0) Gecko/20100101 Firefox/87.0`
[2023-11-15 22:42:51]  DEBUG               pyload  DOWNLOADER DefaultPlugin[1]: Plugin version: 0.52
[2023-11-15 22:42:51]  DEBUG               pyload  DOWNLOADER DefaultPlugin[1]: Plugin status: testing
[2023-11-15 22:42:51]  WARNING             pyload  DOWNLOADER DefaultPlugin[1]: Plugin may be unstable
[2023-11-15 22:42:51]  DEBUG               pyload  DOWNLOADER DefaultPlugin[1]: Config option `use_premium` not found, use default `True`
[2023-11-15 22:42:51]  INFO                pyload  DOWNLOADER DefaultPlugin[1]: Grabbing link info...
[2023-11-15 22:42:51]  DEBUG               pyload  DOWNLOADER DefaultPlugin[1]: Link info: {'name': 'README.md', 'hash': {}, 'pattern': {}, 'size': 0, 'status': 7, 'url': 'https://raw.githubusercontent.com/pyload/pyload/main/README.md'}
[2023-11-15 22:42:51]  DEBUG               pyload  DOWNLOADER DefaultPlugin[1]: Previous link info: {}
[2023-11-15 22:42:51]  INFO                pyload  DOWNLOADER DefaultPlugin[1]: Link name: README.md
[2023-11-15 22:42:51]  INFO                pyload  DOWNLOADER DefaultPlugin[1]: Link size: N/D
[2023-11-15 22:42:51]  INFO                pyload  DOWNLOADER DefaultPlugin[1]: Link status: starting
[2023-11-15 22:42:51]  INFO                pyload  DOWNLOADER DefaultPlugin[1]: Processing url: https://raw.githubusercontent.com/pyload/pyload/main/README.md
[2023-11-15 22:42:51]  DEBUG               pyload  DOWNLOADER DefaultPlugin[1]: DOWNLOAD URL https://raw.githubusercontent.com/pyload/pyload/main/README.md | get={} | post={} | ref=False | cookies=True | disposition=True | resume=None | chunks=None
[2023-11-15 22:42:51]  DEBUG               pyload  Chunk 1 chunked with range 0-3515
[2023-11-15 22:42:51]  DEBUG               pyload  Chunk 2 chunked with range 3515-7030
[2023-11-15 22:42:51]  DEBUG               pyload  Chunk 3 chunked with range 7030-
[2023-11-15 22:42:52]  DEBUG               pyload  Chunk 1 download finished
[2023-11-15 22:42:52]  DEBUG               pyload  Chunk 2 download finished
[2023-11-15 22:42:52]  DEBUG               pyload  Chunk 3 download finished
[2023-11-15 22:42:52]  INFO                pyload  DOWNLOADER DefaultPlugin[1]: File saved
[2023-11-15 22:42:52]  INFO                pyload  DOWNLOADER DefaultPlugin[1]: Checking download...
[2023-11-15 22:42:52]  INFO                pyload  DOWNLOADER DefaultPlugin[1]: File is OK
[2023-11-15 22:42:52]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `download_processed`
[2023-11-15 22:42:52]  INFO                pyload  Download finished: README (1).md
[2023-11-15 22:42:52]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `download_finished`
[2023-11-15 22:42:52]  INFO                pyload  Download starts: bjdx2facqp3r
[2023-11-15 22:42:52]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `download_preparing`
[2023-11-15 22:42:52]  DEBUG               pyload  ADDON UserAgentSwitcher: Setting connection timeout to 60 seconds
[2023-11-15 22:42:52]  DEBUG               pyload  ADDON UserAgentSwitcher: Setting maximum redirections to 10
[2023-11-15 22:42:52]  DEBUG               pyload  ADDON UserAgentSwitcher: Use custom user-agent string `Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:87.0) Gecko/20100101 Firefox/87.0`
[2023-11-15 22:42:52]  DEBUG               pyload  DOWNLOADER DdownloadCom[2]: Plugin version: 0.11
[2023-11-15 22:42:52]  DEBUG               pyload  DOWNLOADER DdownloadCom[2]: Plugin status: testing
[2023-11-15 22:42:52]  WARNING             pyload  DOWNLOADER DdownloadCom[2]: Plugin may be unstable
[2023-11-15 22:42:52]  INFO                pyload  DOWNLOADER DdownloadCom[2]: Processing url: https://ddownload.com/bjdx2facqp3r
[2023-11-15 22:42:52]  DEBUG               pyload  DOWNLOADER DdownloadCom[2]: LOAD URL https://ddownload.com/bjdx2facqp3r | get={} | post={} | ref=False | cookies=True | just_header=False | decode=True | multipart=False | redirect=True | req=None
[2023-11-15 22:42:53]  INFO                pyload  DOWNLOADER DdownloadCom[2]: Checking for link errors...
[2023-11-15 22:42:53]  INFO                pyload  DOWNLOADER DdownloadCom[2]: No errors found
[2023-11-15 22:42:53]  INFO                pyload  DOWNLOADER DdownloadCom[2]: Grabbing link info...
[2023-11-15 22:42:53]  DEBUG               pyload  DOWNLOADER DdownloadCom[2]: Link info: {'name': 'bjdx2facqp3r', 'hash': {}, 'pattern': {'ID': 'bjdx2facqp3r'}, 'size': 0, 'status': 1, 'url': 'https://ddownload.com/bjdx2facqp3r'}
[2023-11-15 22:42:53]  DEBUG               pyload  DOWNLOADER DdownloadCom[2]: Previous link info: {}
[2023-11-15 22:42:53]  INFO                pyload  DOWNLOADER DdownloadCom[2]: Link name: bjdx2facqp3r
[2023-11-15 22:42:53]  INFO                pyload  DOWNLOADER DdownloadCom[2]: Link size: N/D
[2023-11-15 22:42:53]  INFO                pyload  DOWNLOADER DdownloadCom[2]: Link status: offline
[2023-11-15 22:42:53]  WARNING             pyload  DOWNLOADER DdownloadCom[2]: Free download failed | offline
[2023-11-15 22:42:53]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `download_processed`
[2023-11-15 22:42:53]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `package_processed`
[2023-11-15 22:42:53]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `package_failed`
[2023-11-15 22:42:53]  WARNING             pyload  Download is offline: bjdx2facqp3r
>>> [2023-11-15 22:42:53]  INFO                pyload  Package finished: dev-finish-offline-links
[2023-11-15 22:42:53]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `package_finished`
[2023-11-15 22:42:53]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `download_failed`
[2023-11-15 22:42:53]  INFO                pyload  ADDON UnSkipOnFail: Looking for skipped duplicates of: bjdx2facqp3r (pid:1)
[2023-11-15 22:42:53]  INFO                pyload  ADDON UnSkipOnFail: No duplicates found
[2023-11-15 22:42:53]  DEBUG               pyload  ADDON ExternalScripts: No script found under folder `all_downloads_processed`
[2023-11-15 22:42:53]  DEBUG               pyload  All downloads processed

I didn't find any other call to get_unfinished, so I don't think this is going to have any impact elsewhere.

@CLAassistant commented Nov 15, 2023

CLA assistant check
All committers have signed the CLA.

@pep8speaks commented Nov 15, 2023

Hello @mihawk90! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! We recommend using black to automatically format these files.

Comment last updated at 2023-11-18 17:34:05 UTC

@mihawk90 (Contributor, Author) commented:

line too long (91 > 88 characters)

Does that really need fixing? I guess I could remove the spaces :P

@mihawk90 (Contributor, Author) commented:

Fixed with a multi-line string.

This change considers (permanently) offline links as finished. Although
they are not technically finished in being downloaded, they also never
will be so there is no reason not to dispatch `package_finished`. This
allows plugins and scripts to further process the files.

Note that this does not adjust the progress display in queue, allowing
users to check why their progress never finishes and decide what to do
with the package or links in question.

Fixes pyload#4392
@GammaC0de (Member) commented:

Sorry to be a party pooper, but I think the PR is unneeded, as you could use the package_processed event (which fires when a package is processed, successfully or not) instead of the package_finished event for trying to extract the package.
Doing so, you could also choose between package_processed and package_finished for extraction as a config option.
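The config-option idea could look roughly like this (a sketch under assumptions: extract_on is an invented config key and start_extraction is an invented stand-in for the real extraction entry point):

```python
# Sketch of the suggested config option: an addon reacts to whichever
# event the user selected. `extract_on` and `start_extraction` are
# invented names for illustration only.
class ExtractionTrigger:
    def __init__(self, config):
        self.config = config    # e.g. {"extract_on": "package_processed"}
        self.extracted = []     # records which packages were extracted

    def package_finished(self, pypack):
        if self.config.get("extract_on") == "package_finished":
            self.start_extraction(pypack)

    def package_processed(self, pypack):
        if self.config.get("extract_on") == "package_processed":
            self.start_extraction(pypack)

    def start_extraction(self, pypack):
        self.extracted.append(pypack)
```

With extract_on set to "package_processed", the package_finished event is simply ignored and extraction fires from the other hook.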

@mihawk90 (Contributor, Author) commented:

I'll have to say I'm a little confused here. By "use the package_processed event" do you mean in a script? But wouldn't that mean that every user would have to install or write a script for this? That feels like an odd way to handle this.

I don't really see a benefit in not treating offline links as finished. What purpose does it serve to count them as "unfinished" when they will never be finished to begin with?
Am I missing something here where this would be useful?

@GammaC0de (Member) commented:

First of all, all the events are first available to plugins; the ExternalScripts.py plugin is responsible for passing those events on to scripts.
You can see in line 185 of ExtractArchive.py where package_finished is used.

Whether offline should be considered a success is a really good question, and I'm not sure what the correct answer is; maybe you're right (we have to investigate what the impact of this is besides archive extraction).

I think a more elegant solution would be to hook the package_processed event and, within it, check the status of each file and decide whether to do the extraction or not.

What do you think?

@mihawk90 (Contributor, Author) commented Nov 19, 2023

I think a more elegant solution would be to hook the package_processed event and within it, do a check of the status of each files and decide if to do extraction or not.

Hm, that's certainly possible, but I feel like that's a lot of work for little benefit. I mean, realistically, what are the differences between the two?

Although I do think these are two separate (though connected) issues:

  1. Figure out whether an offline file should be considered finished or not
  2. How to handle the archive extraction

Also, it just occurred to me that I didn't actually test how archive extraction and possible deletion work. I just assumed the case of a failed archive extraction is already handled, i.e. that it wouldn't delete the files and would just leave them in place; however, I didn't check this, so I'm not sure how that works. The way this PR works would be an issue if files were deleted despite the extraction failing.


I think at the end of the day it comes down to what people are using the package_finished event for, i.e. what they do with scripts to process it. Maybe make it an option in the Downloads section? Default on or off would be the question then.


So this is the way I thought it would work. Say we have a file list of 4 files:

file1.part1.rar
file1.part2.rar
file2.part1.rar
file2.part2.rar <- offline

What happens with this PR is that once the first 3 files are finished, it fires the finished event. For me that just unpacks; for others that might cause some scripts to run. Either way, it processes those 3 files. Archive extraction would of course fail for file2 since the last part is missing, while the files for file1 would be deleted.

If we processed every link individually, what does that get us? First off, we'd have to track by filename which files belong together. I know this is already somewhat done for the file-deletion option, but I haven't looked into how it collects those (might just be from unrar's output? I don't know).
And then, if any one file doesn't exist, we don't start the extraction. So far so good. But why don't we just let the extractor handle this? If a volume is missing, the extraction simply fails (we already catch this), and nothing bad happens otherwise. On the contrary, at least the files that could be extracted are.


I looked into how the extraction currently works a bit more:

So currently we run unrar l -v to get the header, including whether it's password protected. Fortunately, the -v switch already makes it complain about a missing volume:

z% ls -1
scn-drwh8-S08E01.part1.rar
scn-drwh8-S08E01.part2.rar
scn-drwh8-S08E01.part3.rar
scn-drwh8-S08E01.part4.rar
scn-drwh8-S08E01.part5.rar
scn-drwh8-S08E01.part6.rar~
z% unrar v -v scn-drwh8-S08E01.part1.rar 1>out.txt 2>err.txt
z% echo $?
0
z% cat out.txt 

UNRAR 6.24 freeware      Copyright (c) 1993-2023 Alexander Roshal

Archive: scn-drwh8-S08E01.part1.rar
Details: RAR 4, volume, recovery record

 Attributes      Size    Packed Ratio    Date    Time   Checksum  Name
----------- ---------  -------- ----- ---------- -----  --------  ----
    ..A....     51330     51330 100%  2015-03-11 17:19  19E3CB76  <dir>/Subs/<file>-eng.idx
    ..A....   8153088   8153088 100%  2015-03-11 17:19  623704F2  <dir>/Subs/<file>-eng.sub
    ..A....     37605     37605 100%  2015-03-11 17:19  FF7D652A  <dir>/Subs/<file>.idx
    ..A....   7415808   7415808 100%  2015-03-11 17:19  F4E9E6DD  <dir>/Subs/<file>.sub
    ..A.... 5806866633 1003486326  -->  2015-03-11 22:12  0271C59A  <dir>/<file>.mkv
----------- ---------  -------- ----- ---------- -----  --------  ----
           5822524464 1019144157  17%  volume 1                    5

Archive: scn-drwh8-S08E01.part2.rar
Details: RAR 4, volume, recovery record

 Attributes      Size    Packed Ratio    Date    Time   Checksum  Name
----------- ---------  -------- ----- ---------- -----  --------  ----
    ..A.... 5806866633 1019144725  <->  2015-03-11 22:12  4713E52A  <dir>/<file>.mkv
----------- ---------  -------- ----- ---------- -----  --------  ----
                    0 1019144725   0%  volume 2                    0

Archive: scn-drwh8-S08E01.part3.rar
Details: RAR 4, volume, recovery record

 Attributes      Size    Packed Ratio    Date    Time   Checksum  Name
----------- ---------  -------- ----- ---------- -----  --------  ----
    ..A.... 5806866633 1019144725  <->  2015-03-11 22:12  E8AABB91  <dir>/<file>.mkv
----------- ---------  -------- ----- ---------- -----  --------  ----
                    0 1019144725   0%  volume 3                    0

Archive: scn-drwh8-S08E01.part4.rar
Details: RAR 4, volume, recovery record

 Attributes      Size    Packed Ratio    Date    Time   Checksum  Name
----------- ---------  -------- ----- ---------- -----  --------  ----
    ..A.... 5806866633 1019144725  <->  2015-03-11 22:12  724C136A  <dir>/<file>.mkv
----------- ---------  -------- ----- ---------- -----  --------  ----
                    0 1019144725   0%  volume 4                    0

Archive: scn-drwh8-S08E01.part5.rar
Details: RAR 4, volume, recovery record

 Attributes      Size    Packed Ratio    Date    Time   Checksum  Name
----------- ---------  -------- ----- ---------- -----  --------  ----
    ..A.... 5806866633 1019144725  <->  2015-03-11 22:12  DC2F3B40  <dir>/<file>.mkv
----------- ---------  -------- ----- ---------- -----  --------  ----
                    0 1019144725   0%  volume 5                    0
                5822524464 5095723057  87%                              5
z% cat err.txt 

Cannot find volume scn-drwh8-S08E01.part6.rar

It just seems this information isn't processed at all, even though the error actually goes to stderr, although the exit status is 0 for whatever reason:

5376	2023-11-19 18:42:18	INFO	pyload	Download finished: scn-drwh8-S08E01.part1.rar

5400	2023-11-19 18:42:19	INFO	pyload	Download finished: scn-drwh8-S08E01.part2.rar

5446	2023-11-19 18:42:31	INFO	pyload	Download finished: scn-drwh8-S08E01.part4.rar

5455	2023-11-19 18:42:34	INFO	pyload	Download finished: scn-drwh8-S08E01.part3.rar

5462	2023-11-19 18:42:35	INFO	pyload	Download finished: scn-drwh8-S08E01.part5.rar

5468	2023-11-19 18:42:35	DEBUG	pyload	All downloads processed

5476	2023-11-19 18:42:35	DEBUG	pyload	EXTRACTOR UnRar: EXECUTE unrar l -scf -o- -or -x*.nfo -x*.DS_Store -xindex.dat -xthumb.db -y -c- -p- -v /home/tarulia/Downloads/pyload/dev-finish-offline-links/scn-drwh8-S08E01.part1.rar
5477	2023-11-19 18:42:35	DEBUG	pyload	ADDON ExtractArchive: Password Correct
5478	2023-11-19 18:42:35	DEBUG	pyload	ADDON ExtractArchive: Extracting using password: None
5479	2023-11-19 18:42:35	DEBUG	pyload	EXTRACTOR UnRar: EXECUTE unrar x -scf -o- -or -x*.nfo -x*.DS_Store -xindex.dat -xthumb.db -y -c- -p- /home/tarulia/Downloads/pyload/dev-finish-offline-links/scn-drwh8-S08E01.part1.rar /home/tarulia/Downloads/pyload/_extracted/
5480	2023-11-19 18:42:58	ERROR	pyload	ADDON ExtractArchive: scn-drwh8-S08E01.part1.rar | CRC mismatch | Cannot find volume /home/tarulia/Downloads/pyload/dev-finish-offline-links/scn-drwh8-S08E01.part6.rar

If I'm reading this right (sorry, I'm rusty with Python), out is written, but err is just an empty variable:

def _check_archive_encryption(self):
    if self.archive_encryption is None:
        p = self.call_cmd("l", "-v", self.filename)
        out, err = (_r.strip() if _r else "" for _r in p.communicate())
        encrypted_header = self._RE_ENCRYPTED_HEADER.search(out) is not None
        encrypted_files = any(m.group(1) == "*" for m in self._RE_FILES.finditer(out))
        self.archive_encryption = (encrypted_header, encrypted_files)
    return self.archive_encryption

It parses out for the header, but never bothers to check what stderr returns.

So, if we could catch this, we wouldn't need to do a whole lot of work checking links individually.
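The behavior is easy to demonstrate outside pyLoad: a child process can write an error to stderr and still exit 0, so the caller has to inspect stderr itself. A generic Python example mirroring the out, err = p.communicate() pattern above (the child process here just fakes unrar's complaint):

```python
import subprocess
import sys

# A child that complains on stderr but exits successfully, like unrar
# does for a missing volume.
p = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stderr.write('Cannot find volume part6.rar\\n')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True,
)
out, err = (s.strip() if s else "" for s in p.communicate())
print(p.returncode)  # 0: the exit status alone reveals nothing
print(err)           # Cannot find volume part6.rar
```

Checking `err` (as the two-liner below this comment does) is therefore the only reliable signal in this case.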

@mihawk90 (Contributor, Author) commented:

I actually just went through it with a debugger, and it turns out err is actually already set with the stderr message; it's just not used.

I threw a two-liner into _check_archive_encryption:

            if err:
                raise ArchiveError(err)

And as expected, it stops the extraction step altogether, plus logs it to the log output, so that is probably the most elegant way to do it.

Should I add this to this PR? Since it's not directly related to how offline links are handled (though sort of related).

@mihawk90 (Contributor, Author) commented:

Sorry, the first comment got a little longer than intended 👀

But TL;DR: while the archive extraction was my initial motivation, IMO offline links should always be treated as finished because they will never progress beyond their initial status.


But you're right, it needs some investigation into what other potential impact it might have. That said, I did check before submitting the PR, and get_unfinished at least is only referenced once, in check_package_finished:

def check_package_finished(self, pyfile):
    """
    Checks if package is finished and calls addon_manager.
    """
    ids = self.pyload.db.get_unfinished(pyfile.packageid)
    if not ids or (pyfile.id in ids and len(ids) == 1):
        if not pyfile.package().set_finished:
            self.pyload.log.info(
                self._("Package finished: {}").format(pyfile.package().name)
            )
            self.pyload.addon_manager.package_finished(pyfile.package())
            pyfile.package().set_finished = True

However, I did not check what exactly it does in the addon manager.
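The condition in that method can be exercised in isolation; a minimal stand-in (hypothetical, not pyLoad code):

```python
def package_is_finished(unfinished_ids, current_file_id):
    """Mirrors the check in check_package_finished: the package is done
    when no unfinished links remain, or when the only remaining one is
    the file currently being processed."""
    return not unfinished_ids or (
        current_file_id in unfinished_ids and len(unfinished_ids) == 1
    )
```

With the PR applied, offline links never appear among the unfinished ids, so a package whose only remaining links are offline passes this check and fires package_finished.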

@GammaC0de (Member) commented:

By "use the package_processed event" do you mean in a script?

I was referring to something like this:

    def package_processed(self, pypack):
        processed_successfully = all(
            fdata.get("status") in (0, 1, 4)
            for fid, fdata in pypack.get_children().items()
        )
        if processed_successfully:
            self.queue.add(pypack.id)
            if not self.config.get("waitall") and not self.extracting:
                self.extract_queued()

@mihawk90 (Contributor, Author) commented:

That would work purely for extracting, sure.
But the scope of considering them finished (and therefore this PR) is bigger than just extracting (e.g. scripts or other plugins that process that event). I just can't see a scenario where considering them as unfinished has any benefit.
Having the extraction run is just a fortunate side effect here.

Either way, I dug a little to see what this would actually entail.
So, as mentioned above, when a package is finished it calls the addon manager:

self.pyload.addon_manager.package_finished(pyfile.package())

Which then in turn calls every registered/activated plugin's package_finished():

@lock
def package_finished(self, package):
    for plugin in self.plugins:
        if plugin.is_activated():
            plugin.package_finished(package)
    self.dispatch_event("package_finished", package)

Now, when I search the code, luckily there aren't many occurrences:

$ pwd && git grep --line-number "package_finished("
/home/tarulia/Development/pyload/src/pyload/plugins
addons/Checksum.py:265:    def package_finished(self, pypack):
addons/ExternalScripts.py:241:    def package_finished(self, pypack):
addons/ExtractArchive.py:185:    def package_finished(self, pypack):
addons/IRC.py:77:    def package_finished(self, pypack):
addons/MergeFiles.py:23:    def package_finished(self, pack):
addons/XMPP.py:243:    def package_finished(self, pypack):
base/addon.py:185:    def package_finished(self, pypack):
base/notifier.py:84:    def package_finished(self, pypack):

Checksum.py:
Passes the package on to verify_package, which from what I can tell grabs all files in the package's storage folder and runs a checksum on them. So an offline link doesn't seem to have any impact on whether this succeeds or not.

ExternalScripts.py:
Self-explanatory.
However, as mentioned above, whether this has any impact here depends on what people do in their package_finished scripts, and obviously that's impossible to say in a PR. I'm actually wondering not only what people are using these scripts for, but also how much they are used in the first place.
Either way, the only way out of that I see is either scripters having to adjust for offline links (which shouldn't be hard to do, since the package ID/name is already passed), or giving users an option to choose whether they want offline links counted as finished to begin with.

ExtractArchive.py:
Covered that

IRC.py:
I'm honestly not entirely sure, since I haven't looked much at the plugin, but from what I can tell it just sends a private message to... someone? I don't know who or what is referred to as "owner" here:

for t in self.config.get("owner").split():

MergeFiles.py:
I can't say I get all of it, but from my understanding it first builds a file list (by matching filenames via regex) and then goes on to do the merging. It doesn't seem to sanity-check whether all files are present, so that might be an issue. I'm not sure what happens when a part is missing, because I can't see any kind of error handling; it seems to just read from each split file and append it to the new "final" file. So this would result in a broken merged file.
Honestly, though, I think that's a massive oversight in the implementation as it is anyway, because it assumes that the package contains every split part; so if a user removes an offline link before the package finishes (or is missing one to begin with), the result is effectively the same.
On a separate note, I haven't seen HJSplit files around for years, since the software has been abandonware since 2010 and RAR/7z have pretty much taken over the space, so I'm not sure how relevant this even is.

XMPP.py:
Effectively the same as IRC.py.

notifier.py:
I couldn't find how or where this is used, because I don't see it in my plugins list... is that an OG pyLoad plugin maybe? Either way, it seems to just send notifications when it receives the event, which arguably would actually be a good thing in this case: that way the user gets notified about a missing link so it can be swapped, instead of the package just... never finishing/notifying.


Of course it also dispatches the package_finished event, but I couldn't quite figure out where/how that's used... Either way, from what I can tell, at least from the addon-manager side the change wouldn't be destructive.
