
Socket leak when pipeline in use. #475

Closed

paul-1 opened this issue Dec 15, 2020 · 10 comments

@paul-1
Contributor

paul-1 commented Dec 15, 2020

Reported here: https://forums.slimdevices.com/showthread.php?113321-Memory-Leak-in-Perl-Engine-on-piCorePlayer

When the pipeline reads EOF from the stream, the source reference (the socket) is deleted without calling "close":

https://github.com/Logitech/slimserver/blob/b7d9ed8e7356981cb9d5ce2cea67bd5f1d7b6ee3/Slim/Player/Pipeline.pm#L285-L298

@philippe44

Does $source in this instance always have a "close" routine, or does that need to be trapped in case there is no close routine associated with the pipeline source?
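
For illustration, a defensive variant of the EOF branch could guard the call like this (a sketch only, assuming $source is always some IO::Handle-style object; this is not the change that was merged):

					if (defined $source && $source->can('close')) {
						# Close the true source as soon as EOF is seen, so the
						# remote end is released even while the transcoder drains.
						$source->close();
					}
					undef $source;
					delete ${*$self}{'pipeline_source'};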

@philippe44
Contributor

At first glance, I think you're 100% right. In Slim::Player::Song::open(), $sock is replaced by the pipeline object (around line 600). So that $sock is properly closed by its creator, but the $sock that is the initial true source is not closed by the pipeline reader when it reaches the end of the pipe. When an early close happens, pipeline->close is called, and that one does close the initial true source.

Feels to me you nailed it

@philippe44
Contributor

And if I comment out the delete line, then when the pipeline closes, the 'source' hash entry is still defined and it gets closed properly. I have no idea why this delete is here; it feels to me it should not be.

@paul-1
Contributor Author

paul-1 commented Dec 15, 2020

It's a bit nicer to the sender to close the port as soon as you know it's ready to be closed. With all of the buffers in the pipeline and player, it can have over a minute of the track in the buffers before the pipeline closes cleanly.

I just inserted a $source->close() before the undef.

@michaelherger
Member

I'm so glad there are people who know this stuff... Let me try to summarise: when we transcode online streams, we leave open connections behind? And this went unnoticed for a decade because previously online streams were mostly MP3 or AAC, and those either didn't need transcoding or weren't supported?

Is adding the close the fix, or just a test?

@philippe44
Contributor

philippe44 commented Dec 15, 2020

I would just remove the delete, which probably should not be there, but on the other hand @paul-1 is right, it's nicer to the other end to close sooner. So my recommendation would be to not delete the key, and to close.

@philippe44
Contributor

philippe44 commented Dec 15, 2020

> I'm so glad there are people who know this stuff... Let me try to summarise: when we transcode online streams, we leave open connections behind? And this went unnoticed for a decade because previously online streams were mostly MP3 or AAC, and those either didn't need transcoding or weren't supported?
>
> Is adding the close the fix, or just a test?

I could not resist and had to look into the past :-) It was introduced in 2008 in 758c9b2#diff-f2722488b28d1f01877224b4a9cb397ee54e1c85eef8ec57f9602caecc839e93, and yes @michaelherger, this is a 12-year-old bug :-)

@michaelherger
Member

diff --git a/Slim/Player/Pipeline.pm b/Slim/Player/Pipeline.pm
index 2a568a546..8cd8216be 100644
--- a/Slim/Player/Pipeline.pm
+++ b/Slim/Player/Pipeline.pm
@@ -286,6 +286,7 @@ sub sysread {
                                if (defined $socketReadlen) {
                                        # EOF
                                        main::INFOLOG && $log->info("EOF on source stream");
+                                       $source->close();
                                        undef $source;
                                        delete ${*$self}{'pipeline_source'};
                                        $writer->close();

?

@philippe44
Contributor

philippe44 commented Dec 15, 2020

I looked back at the earlier code and the delete was not there. I think it's a mistake and should be removed, or we should also delete the writer, because it's inconsistent with Pipeline::close. For me, the pipeline, which is an IO::Handle child, works this way (a rough sketch follows the list):

  • upon creation, it receives the $sock of the "true" $source (Tidal)
  • it creates a $writer handle which is connected to the input of the transcoder and into which it writes data taken from $source
  • it creates a $reader handle which is connected to the output of the transcoder; this is returned to the pipeline's creator, so that the creator gets data from it using the usual IO::Handle methods
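
In generic Perl terms, that structure is roughly the following (a simplified sketch, not Slim::Player::Pipeline's actual constructor, which also deals with platform quirks; the transcoder command line is made up for the example):

	use IPC::Open2 qw(open2);

	# $source is the "true" network socket handed in by the creator.
	# open2() gives us a handle feeding the transcoder's stdin ($writer)
	# and a handle reading its stdout ($reader).
	my ($reader, $writer);
	my $pid = open2($reader, $writer, 'flac --decode --stdout --silent -');

	${*$self}{'pipeline_source'} = $source;
	${*$self}{'pipeline_writer'} = $writer;
	${*$self}{'pipeline_reader'} = $reader;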

As far as the pipeline creator (a $song object) is concerned, it receives a $sock object it can use "normally". When this $sock object returns 0 bytes, $song calls $sock->close(), which in the pipeline case closes $reader and $writer, and which does a normal socket close() if there is no pipeline.

So, with a pipeline, 0 bytes is returned from sysread when nothing comes back from the transcoder. With no pipeline, 0 bytes is returned from sysread when nothing comes from the source stream. So the latter happens maybe a bit earlier, but only by the size of the transcoder buffers.
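
To make that contract concrete, the creator side looks roughly like this (a hypothetical consumer, not copied from Slim::Player::Song, which actually reads from its non-blocking event loop):

	my $buf;
	while (my $n = $sock->sysread($buf, 4096)) {
		# ... hand $buf to the player buffers ...
	}
	# sysread() returned 0 (EOF) or undef (error): we are done with the handle.
	$sock->close();   # with a pipeline this closes $reader and $writer,
	                  # and (after the fix) the true $source as well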

[...few back and forth later...]

The issue is that Pipeline::sysread() must be called even after $source is done, as we need to empty the transcoder. But we don't want to continue using $source, so that's why, seeing how the loop is written, the $source key must be deleted, and if it is, $source must be closed first, otherwise Pipeline::close() won't be able to do it. Then we must close the $writer as well, because this is how we tell the transcoder that we are done...

Still, I think that it would be more logical to delete the $writer key as well, and, looking at the code, that does not prevent sysread from emptying the remaining bytes in the transcoder, as we still pass this test: unless (defined($reader) && (!defined($source) || defined($writer))). With both keys gone, $reader is still defined and !defined($source) is true, so the guard is not taken and the reader keeps draining.

Anyway, you're right @paul-1, we must at least add the close. I just now feel this function is not very clean unless we delete the $writer key as well, so that Pipeline::close() does not close a $writer that is already closed. This whole sysread also seems strange to me, as I don't understand the need for a label on the last, nor for undefining $source. I'll try that tomorrow.

sub sysread {
	my $self = $_[0];
	my $chunksize = $_[2];
	my $readlen;

	my $error = ${*$self}{'pipeline_error'};

	if ($error) {
		$! = -1;
		return undef;
	}

	my $reader = ${*$self}{'pipeline_reader'};
	my $writer = ${*$self}{'pipeline_writer'};
	my $source = ${*$self}{'pipeline_source'};

	unless (defined($reader) && (!defined($source) || defined($writer))) {
		$! = EWOULDBLOCK;
		return undef;
	}

	# First try to stuff the pipe
	STUFF_PIPE:	while (defined($source)) {

		my $pendingBytes = ${*$self}{'pipeline_pending_bytes'};
		my $pendingSize  = ${*$self}{'pipeline_pending_size'};

		if (!$pendingSize) {

			main::DEBUGLOG && $log->debug("Pipeline doesn't have pending bytes - trying to get some from source");

			my $socketReadlen = $source->sysread($pendingBytes, $chunksize);

			if (!$socketReadlen) {
				if (defined $socketReadlen) {
					# EOF
					main::INFOLOG && $log->info("EOF on source stream");
					undef $source;
					# I would comment out these two
					# delete ${*$self}{'pipeline_source'};
					# $writer->close();
					last STUFF_PIPE;
				} elsif ($! == EWOULDBLOCK || $! == EINTR) {
					last STUFF_PIPE;		
				} else {
					return undef; # reflect error to caller
				}
			}

			$pendingSize = $socketReadlen;
		}

		main::DEBUGLOG && $log->debug("Attempting to write to pipeline writer");

		my $writelen = $writer->syswrite($pendingBytes, $pendingSize);

		if ($writelen) {

			main::DEBUGLOG && $log->debug("Wrote $writelen bytes to pipeline writer");

			if ($writelen != $pendingSize) {
				${*$self}{'pipeline_pending_bytes'} = substr($pendingBytes, $writelen);
				${*$self}{'pipeline_pending_size'}  = $pendingSize - $writelen;
			}
			else {
				${*$self}{'pipeline_pending_bytes'} = '';
				${*$self}{'pipeline_pending_size'}  = 0;
			}
		}
		else {

			${*$self}{'pipeline_pending_bytes'} = $pendingBytes;
			${*$self}{'pipeline_pending_size'}  = $pendingSize;

			if ($! != EWOULDBLOCK) {
				return undef;	# reflect error to caller
			}

			last STUFF_PIPE;
		}

	}

	return $reader->sysread($_[1], $chunksize);
}
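
Put together, the EOF branch described above would look roughly like this (a sketch of the proposal as discussed here, not necessarily the exact code that went into #476):

	if (defined $socketReadlen) {
		# EOF on the true source: close it right away (be nice to the
		# remote end), drop both keys so Pipeline::close() will not
		# touch them again, and close the writer so the transcoder
		# knows that its input is finished.
		main::INFOLOG && $log->info("EOF on source stream");
		$source->close();
		undef $source;
		delete ${*$self}{'pipeline_source'};
		$writer->close();
		delete ${*$self}{'pipeline_writer'};
		last STUFF_PIPE;
	}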

@paul-1
Contributor Author

paul-1 commented Dec 15, 2020

Just thinking out loud: is this pipeline the same path taken when transcoding a local file? If so, it is leaving an open file handle at the OS level. Would that change the thinking?

@mherger
Contributor

mherger commented Dec 15, 2020

Thanks guys! Fixed by #476.

mherger closed this as completed Dec 15, 2020