(serve) ubuntu@ip-172-31-17-115:~/pt/serve$ pip install .
Processing /home/ubuntu/pt/serve
Requirement already satisfied: Pillow in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from torchserve==0.0.1b20200215) (7.0.0)
Requirement already satisfied: psutil in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from torchserve==0.0.1b20200215) (5.6.7)
Collecting future
  Downloading future-0.18.2.tar.gz (829 kB)
     |████████████████████████████████| 829 kB 2.9 MB/s
Requirement already satisfied: torch in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from torchserve==0.0.1b20200215) (1.4.0)
Requirement already satisfied: torchvision in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from torchserve==0.0.1b20200215) (0.5.0)
Requirement already satisfied: torchtext in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from torchserve==0.0.1b20200215) (0.5.0)
Requirement already satisfied: numpy in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from torchvision->torchserve==0.0.1b20200215) (1.18.1)
Requirement already satisfied: six in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from torchvision->torchserve==0.0.1b20200215) (1.14.0)
Requirement already satisfied: tqdm in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from torchtext->torchserve==0.0.1b20200215) (4.42.1)
Requirement already satisfied: requests in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from torchtext->torchserve==0.0.1b20200215) (2.22.0)
Collecting sentencepiece
  Downloading sentencepiece-0.1.85-cp38-cp38-manylinux1_x86_64.whl (1.0 MB)
     |████████████████████████████████| 1.0 MB 15.5 MB/s
Requirement already satisfied: idna<2.9,>=2.5 in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from requests->torchtext->torchserve==0.0.1b20200215) (2.8)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from requests->torchtext->torchserve==0.0.1b20200215) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from requests->torchtext->torchserve==0.0.1b20200215) (1.25.8)
Requirement already satisfied: certifi>=2017.4.17 in /home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages (from requests->torchtext->torchserve==0.0.1b20200215) (2019.11.28)
Building wheels for collected packages: torchserve, future
  Building wheel for torchserve (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/ubuntu/anaconda3/envs/serve/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-88c_hysm/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-88c_hysm/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-c8htazac
       cwd: /tmp/pip-req-build-88c_hysm/
  Complete output (1366 lines):
  running bdist_wheel
  running build
  running build_py
  running build_frontend
  Downloading https://services.gradle.org/distributions/gradle-4.9-bin.zip
Unzipping /home/ubuntu/.gradle/wrapper/dists/gradle-4.9-bin/e9cinqnqvph59rr7g70qubb4t/gradle-4.9-bin.zip to /home/ubuntu/.gradle/wrapper/dists/gradle-4.9-bin/e9cinqnqvph59rr7g70qubb4t
Set executable permissions for: /home/ubuntu/.gradle/wrapper/dists/gradle-4.9-bin/e9cinqnqvph59rr7g70qubb4t/gradle-4.9/bin/gradle

Welcome to Gradle 4.9!

Here are the highlights of this release:
 - Experimental APIs for creating and configuring tasks lazily
 - Pass arguments to JavaExec via CLI
 - Auxiliary publication dependency support for multi-project builds
 - Improved dependency insight report

For more details see https://docs.gradle.org/4.9/release-notes.html

Starting a Gradle Daemon (subsequent builds will be faster)
Download https://plugins.gradle.org/m2/com/google/googlejavaformat/google-java-format/1.6/google-java-format-1.6.pom
Download https://plugins.gradle.org/m2/com/google/googlejavaformat/google-java-format-parent/1.6/google-java-format-parent-1.6.pom
Download https://plugins.gradle.org/m2/org/sonatype/oss/oss-parent/7/oss-parent-7.pom
Download https://plugins.gradle.org/m2/com/google/errorprone/javac-shaded/9+181-r4173-1/javac-shaded-9+181-r4173-1.pom
Download https://plugins.gradle.org/m2/com/google/guava/guava/22.0/guava-22.0.pom
Download https://plugins.gradle.org/m2/com/google/guava/guava-parent/22.0/guava-parent-22.0.pom
Download https://plugins.gradle.org/m2/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.pom
Download https://plugins.gradle.org/m2/com/google/errorprone/error_prone_annotations/2.0.18/error_prone_annotations-2.0.18.pom
Download https://plugins.gradle.org/m2/com/google/errorprone/error_prone_parent/2.0.18/error_prone_parent-2.0.18.pom
Download https://plugins.gradle.org/m2/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.pom
Download https://plugins.gradle.org/m2/org/codehaus/mojo/animal-sniffer-annotations/1.14/animal-sniffer-annotations-1.14.pom
Download https://plugins.gradle.org/m2/org/codehaus/mojo/animal-sniffer-parent/1.14/animal-sniffer-parent-1.14.pom
Download https://plugins.gradle.org/m2/org/codehaus/mojo/mojo-parent/34/mojo-parent-34.pom
Download https://plugins.gradle.org/m2/org/codehaus/codehaus-parent/4/codehaus-parent-4.pom
Download https://plugins.gradle.org/m2/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar
Download https://plugins.gradle.org/m2/com/google/errorprone/error_prone_annotations/2.0.18/error_prone_annotations-2.0.18.jar
Download https://plugins.gradle.org/m2/com/google/errorprone/javac-shaded/9+181-r4173-1/javac-shaded-9+181-r4173-1.jar
Download https://plugins.gradle.org/m2/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.jar
Download https://plugins.gradle.org/m2/org/codehaus/mojo/animal-sniffer-annotations/1.14/animal-sniffer-annotations-1.14.jar
Download https://plugins.gradle.org/m2/com/google/guava/guava/22.0/guava-22.0.jar
Download https://plugins.gradle.org/m2/com/google/googlejavaformat/google-java-format/1.6/google-java-format-1.6.jar
Download https://jcenter.bintray.com/commons-cli/commons-cli/1.3.1/commons-cli-1.3.1.pom
Download https://jcenter.bintray.com/io/netty/netty-all/4.1.24.Final/netty-all-4.1.24.Final.pom
Download https://jcenter.bintray.com/software/amazon/ai/mms-plugins-sdk/1.0.1/mms-plugins-sdk-1.0.1.pom
Download https://jcenter.bintray.com/org/apache/commons/commons-parent/37/commons-parent-37.pom
Download https://jcenter.bintray.com/io/netty/netty-parent/4.1.24.Final/netty-parent-4.1.24.Final.pom
Download https://jcenter.bintray.com/org/apache/apache/16/apache-16.pom
Download https://jcenter.bintray.com/org/sonatype/oss/oss-parent/9/oss-parent-9.pom
Download https://jcenter.bintray.com/com/google/code/gson/gson/2.8.5/gson-2.8.5.pom
Download https://jcenter.bintray.com/org/slf4j/slf4j-api/1.7.25/slf4j-api-1.7.25.pom
Download https://jcenter.bintray.com/commons-io/commons-io/2.6/commons-io-2.6.pom
Download https://jcenter.bintray.com/com/google/code/gson/gson-parent/2.8.5/gson-parent-2.8.5.pom
Download https://jcenter.bintray.com/org/slf4j/slf4j-parent/1.7.25/slf4j-parent-1.7.25.pom
Download https://jcenter.bintray.com/org/apache/commons/commons-parent/42/commons-parent-42.pom
Download https://jcenter.bintray.com/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.pom
Download https://jcenter.bintray.com/org/apache/apache/18/apache-18.pom
Download https://jcenter.bintray.com/log4j/log4j/1.2.17/log4j-1.2.17.pom
Download https://jcenter.bintray.com/commons-cli/commons-cli/1.3.1/commons-cli-1.3.1.jar
Download https://jcenter.bintray.com/software/amazon/ai/mms-plugins-sdk/1.0.1/mms-plugins-sdk-1.0.1.jar
Download https://jcenter.bintray.com/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar
Download https://jcenter.bintray.com/org/slf4j/slf4j-api/1.7.25/slf4j-api-1.7.25.jar
Download https://jcenter.bintray.com/commons-io/commons-io/2.6/commons-io-2.6.jar
Download https://jcenter.bintray.com/com/google/code/gson/gson/2.8.5/gson-2.8.5.jar
Download https://jcenter.bintray.com/log4j/log4j/1.2.17/log4j-1.2.17.jar
Download https://jcenter.bintray.com/io/netty/netty-all/4.1.24.Final/netty-all-4.1.24.Final.jar
> Task :cts:clean UP-TO-DATE
> Task :modelarchive:clean UP-TO-DATE
> Task :server:killServer
No server running!
> Task :server:clean UP-TO-DATE
> Task :cts:compileJava NO-SOURCE
> Task :cts:processResources NO-SOURCE
> Task :cts:classes UP-TO-DATE
> Task :cts:jar
> Task :cts:assemble
> Task :cts:checkstyleMain NO-SOURCE
> Task :cts:compileTestJava NO-SOURCE
> Task :cts:processTestResources NO-SOURCE
> Task :cts:testClasses UP-TO-DATE
> Task :cts:checkstyleTest NO-SOURCE
> Task :cts:findbugsMain NO-SOURCE
> Task :cts:findbugsTest NO-SOURCE
> Task :cts:test NO-SOURCE
> Task :cts:jacocoTestCoverageVerification SKIPPED
> Task :cts:jacocoTestReport SKIPPED
> Task :cts:pmdMain NO-SOURCE
> Task :cts:pmdTest SKIPPED
> Task :cts:verifyJava
> Task :cts:check
> Task :cts:build
> Task :modelarchive:compileJava
> Task :modelarchive:processResources NO-SOURCE
> Task :modelarchive:classes
> Task :modelarchive:jar
> Task :modelarchive:assemble
Download https://jcenter.bintray.com/com/puppycrawl/tools/checkstyle/7.1.2/checkstyle-7.1.2.pom
Download https://jcenter.bintray.com/org/antlr/antlr4-runtime/4.5.3/antlr4-runtime-4.5.3.pom
Download https://jcenter.bintray.com/org/antlr/antlr4-master/4.5.3/antlr4-master-4.5.3.pom
Download https://jcenter.bintray.com/antlr/antlr/2.7.7/antlr-2.7.7.pom
Download https://jcenter.bintray.com/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.pom
Download https://jcenter.bintray.com/com/google/guava/guava/19.0/guava-19.0.pom
Download https://jcenter.bintray.com/com/google/guava/guava-parent/19.0/guava-parent-19.0.pom
Download https://jcenter.bintray.com/org/apache/commons/commons-parent/41/commons-parent-41.pom
Download https://jcenter.bintray.com/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.pom
Download https://jcenter.bintray.com/org/apache/commons/commons-parent/39/commons-parent-39.pom
Download https://jcenter.bintray.com/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar
Download https://jcenter.bintray.com/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar
Download https://jcenter.bintray.com/com/google/guava/guava/19.0/guava-19.0.jar
Download https://jcenter.bintray.com/org/antlr/antlr4-runtime/4.5.3/antlr4-runtime-4.5.3.jar
Download https://jcenter.bintray.com/antlr/antlr/2.7.7/antlr-2.7.7.jar
Download https://jcenter.bintray.com/com/puppycrawl/tools/checkstyle/7.1.2/checkstyle-7.1.2.jar
> Task :modelarchive:checkstyleMain
Download https://jcenter.bintray.com/org/testng/testng/6.8.1/testng-6.8.1.pom
Download https://jcenter.bintray.com/org/sonatype/oss/oss-parent/3/oss-parent-3.pom
Download https://jcenter.bintray.com/junit/junit/4.10/junit-4.10.pom
Download https://jcenter.bintray.com/org/beanshell/bsh/2.0b4/bsh-2.0b4.pom
Download https://jcenter.bintray.com/org/yaml/snakeyaml/1.6/snakeyaml-1.6.pom
Download https://jcenter.bintray.com/com/beust/jcommander/1.27/jcommander-1.27.pom
Download https://jcenter.bintray.com/org/beanshell/beanshell/2.0b4/beanshell-2.0b4.pom
Download https://jcenter.bintray.com/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.pom
Download https://jcenter.bintray.com/org/hamcrest/hamcrest-parent/1.1/hamcrest-parent-1.1.pom
Download https://jcenter.bintray.com/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar
Download https://jcenter.bintray.com/org/beanshell/bsh/2.0b4/bsh-2.0b4.jar
Download https://jcenter.bintray.com/junit/junit/4.10/junit-4.10.jar
Download https://jcenter.bintray.com/com/beust/jcommander/1.27/jcommander-1.27.jar
Download https://jcenter.bintray.com/org/testng/testng/6.8.1/testng-6.8.1.jar
Download https://jcenter.bintray.com/org/yaml/snakeyaml/1.6/snakeyaml-1.6.jar
> Task :modelarchive:compileTestJava
> Task :modelarchive:processTestResources
> Task :modelarchive:testClasses
> Task :modelarchive:checkstyleTest
Download https://jcenter.bintray.com/com/google/code/findbugs/findbugs/3.0.1/findbugs-3.0.1.pom
Download https://jcenter.bintray.com/com/google/code/findbugs/jsr305/2.0.1/jsr305-2.0.1.pom
Download https://jcenter.bintray.com/net/jcip/jcip-annotations/1.0/jcip-annotations-1.0.pom
Download https://jcenter.bintray.com/commons-lang/commons-lang/2.6/commons-lang-2.6.pom
Download https://jcenter.bintray.com/com/apple/AppleJavaExtensions/1.4/AppleJavaExtensions-1.4.pom
Download https://jcenter.bintray.com/com/google/code/findbugs/jFormatString/2.0.1/jFormatString-2.0.1.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-commons/5.0.2/asm-commons-5.0.2.pom
Download https://jcenter.bintray.com/com/google/code/findbugs/bcel-findbugs/6.0/bcel-findbugs-6.0.pom
Download https://jcenter.bintray.com/dom4j/dom4j/1.6.1/dom4j-1.6.1.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-debug-all/5.0.2/asm-debug-all-5.0.2.pom
Download https://jcenter.bintray.com/jaxen/jaxen/1.1.6/jaxen-1.1.6.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-parent/5.0.2/asm-parent-5.0.2.pom
Download https://jcenter.bintray.com/org/apache/commons/commons-parent/17/commons-parent-17.pom
Download https://jcenter.bintray.com/org/ow2/ow2/1.3/ow2-1.3.pom
Download https://jcenter.bintray.com/org/apache/apache/7/apache-7.pom
Download https://jcenter.bintray.com/xml-apis/xml-apis/1.0.b2/xml-apis-1.0.b2.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-tree/5.0.2/asm-tree-5.0.2.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm/5.0.2/asm-5.0.2.pom
Download https://jcenter.bintray.com/com/google/code/findbugs/jsr305/2.0.1/jsr305-2.0.1.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm-commons/5.0.2/asm-commons-5.0.2.jar
Download https://jcenter.bintray.com/com/apple/AppleJavaExtensions/1.4/AppleJavaExtensions-1.4.jar
Download https://jcenter.bintray.com/com/google/code/findbugs/jFormatString/2.0.1/jFormatString-2.0.1.jar
Download https://jcenter.bintray.com/net/jcip/jcip-annotations/1.0/jcip-annotations-1.0.jar
Download https://jcenter.bintray.com/dom4j/dom4j/1.6.1/dom4j-1.6.1.jar
Download https://jcenter.bintray.com/commons-lang/commons-lang/2.6/commons-lang-2.6.jar
Download https://jcenter.bintray.com/jaxen/jaxen/1.1.6/jaxen-1.1.6.jar
Download https://jcenter.bintray.com/xml-apis/xml-apis/1.0.b2/xml-apis-1.0.b2.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm/5.0.2/asm-5.0.2.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm-tree/5.0.2/asm-tree-5.0.2.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm-debug-all/5.0.2/asm-debug-all-5.0.2.jar
Download https://jcenter.bintray.com/com/google/code/findbugs/findbugs/3.0.1/findbugs-3.0.1.jar
Download https://jcenter.bintray.com/com/google/code/findbugs/bcel-findbugs/6.0/bcel-findbugs-6.0.jar
> Task :modelarchive:findbugsMain
> Task :modelarchive:findbugsTest
Download https://jcenter.bintray.com/org/jacoco/org.jacoco.agent/0.8.1/org.jacoco.agent-0.8.1.pom
Download https://jcenter.bintray.com/org/jacoco/org.jacoco.build/0.8.1/org.jacoco.build-0.8.1.pom
Download https://jcenter.bintray.com/org/jacoco/org.jacoco.agent/0.8.1/org.jacoco.agent-0.8.1.jar
> Task :modelarchive:test

Gradle suite > Gradle test > org.pytorch.serve.archive.CoverageTest.test PASSED
Gradle suite > Gradle test > org.pytorch.serve.archive.ModelArchiveTest.test PASSED
Gradle suite > Gradle test > org.pytorch.serve.archive.ModelArchiveTest.testInvalidURL PASSED
Gradle suite > Gradle test > org.pytorch.serve.archive.ModelArchiveTest.testMalformURL PASSED
Download https://jcenter.bintray.com/org/jacoco/org.jacoco.ant/0.8.1/org.jacoco.ant-0.8.1.pom
Download https://jcenter.bintray.com/org/jacoco/org.jacoco.core/0.8.1/org.jacoco.core-0.8.1.pom
Download https://jcenter.bintray.com/org/jacoco/org.jacoco.report/0.8.1/org.jacoco.report-0.8.1.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm/6.0/asm-6.0.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-commons/6.0/asm-commons-6.0.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-parent/6.0/asm-parent-6.0.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-tree/6.0/asm-tree-6.0.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-util/6.0/asm-util-6.0.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-analysis/6.0/asm-analysis-6.0.pom
Download https://jcenter.bintray.com/org/jacoco/org.jacoco.ant/0.8.1/org.jacoco.ant-0.8.1.jar
Download https://jcenter.bintray.com/org/jacoco/org.jacoco.core/0.8.1/org.jacoco.core-0.8.1.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm-util/6.0/asm-util-6.0.jar
Download https://jcenter.bintray.com/org/jacoco/org.jacoco.report/0.8.1/org.jacoco.report-0.8.1.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm-tree/6.0/asm-tree-6.0.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm/6.0/asm-6.0.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm-analysis/6.0/asm-analysis-6.0.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm-commons/6.0/asm-commons-6.0.jar
> Task :modelarchive:jacocoTestCoverageVerification
> Task :modelarchive:jacocoTestReport
Download https://jcenter.bintray.com/net/sourceforge/pmd/pmd-java/5.6.1/pmd-java-5.6.1.pom
Download https://jcenter.bintray.com/net/sourceforge/pmd/pmd/5.6.1/pmd-5.6.1.pom
Download https://jcenter.bintray.com/net/sourceforge/pmd/pmd-core/5.6.1/pmd-core-5.6.1.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm/5.0.4/asm-5.0.4.pom
Download https://jcenter.bintray.com/net/java/dev/javacc/javacc/5.0/javacc-5.0.pom
Download https://jcenter.bintray.com/net/sourceforge/saxon/saxon/9.1.0.8/saxon-9.1.0.8.pom
Download https://jcenter.bintray.com/org/ow2/asm/asm-parent/5.0.4/asm-parent-5.0.4.pom
Download https://jcenter.bintray.com/org/apache/commons/commons-lang3/3.4/commons-lang3-3.4.pom
Download https://jcenter.bintray.com/commons-io/commons-io/2.4/commons-io-2.4.pom
Download https://jcenter.bintray.com/com/beust/jcommander/1.48/jcommander-1.48.pom
Download https://jcenter.bintray.com/com/google/code/gson/gson/2.5/gson-2.5.pom
Download https://jcenter.bintray.com/org/apache/commons/commons-parent/25/commons-parent-25.pom
Download https://jcenter.bintray.com/org/apache/apache/9/apache-9.pom
Download https://jcenter.bintray.com/net/sourceforge/saxon/saxon/9.1.0.8/saxon-9.1.0.8-dom.jar
Download https://jcenter.bintray.com/net/sourceforge/pmd/pmd-java/5.6.1/pmd-java-5.6.1.jar
Download https://jcenter.bintray.com/net/sourceforge/saxon/saxon/9.1.0.8/saxon-9.1.0.8.jar
Download https://jcenter.bintray.com/net/sourceforge/pmd/pmd-core/5.6.1/pmd-core-5.6.1.jar
Download https://jcenter.bintray.com/commons-io/commons-io/2.4/commons-io-2.4.jar
Download https://jcenter.bintray.com/com/beust/jcommander/1.48/jcommander-1.48.jar
Download https://jcenter.bintray.com/org/ow2/asm/asm/5.0.4/asm-5.0.4.jar
Download https://jcenter.bintray.com/org/apache/commons/commons-lang3/3.4/commons-lang3-3.4.jar
Download https://jcenter.bintray.com/net/java/dev/javacc/javacc/5.0/javacc-5.0.jar
Download https://jcenter.bintray.com/com/google/code/gson/gson/2.5/gson-2.5.jar
> Task :modelarchive:pmdMain
> Task :modelarchive:pmdTest SKIPPED
> Task :modelarchive:verifyJava
> Task :modelarchive:check
> Task :modelarchive:build
> Task :server:compileJava
> Task :server:processResources
> Task :server:classes
> Task :server:jar
> Task :server:assemble
> Task :server:checkstyleMain
> Task :server:compileTestJava
> Task :server:processTestResources
> Task :server:testClasses
> Task :server:checkstyleTest
> Task :server:findbugsMain
> Task :server:findbugsTest
> Task :server:test

Gradle suite STANDARD_OUT
2020-02-15 20:16:19,287 [INFO ] Test worker org.pytorch.serve.ModelServer - TS Home: /tmp/pip-req-build-88c_hysm
Current directory: /tmp/pip-req-build-88c_hysm/frontend/server
Temp directory: /tmp
Number of GPUs: 4
Number of CPUs: 32
Max heap size: 27305 M
Python executable: python
Config file: src/test/resources/config.properties
Inference address: https://127.0.0.1:8443
Management address: unix:/tmp/management.sock
Model Store: /tmp/pip-req-build-88c_hysm/frontend/modelarchive/src/test/resources/models
Initial Models: noop.mar
Log dir: /tmp/pip-req-build-88c_hysm/frontend/server/build/logs
Metrics dir: /tmp/pip-req-build-88c_hysm/frontend/server/build/logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 4
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 10485760
2020-02-15 20:16:19,297 [INFO ] Test worker org.pytorch.serve.ModelServer - Loading initial models: noop.mar
2020-02-15 20:16:19,396 [DEBUG] Test worker org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop
2020-02-15 20:16:19,397 [DEBUG] Test worker org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop
2020-02-15 20:16:19,397 [INFO ] Test worker org.pytorch.serve.wlm.ModelManager - Model noop loaded.
2020-02-15 20:16:19,397 [DEBUG] Test worker org.pytorch.serve.wlm.ModelManager - updateModel: noop, count: 4
2020-02-15 20:16:19,421 [INFO ] Test worker org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2020-02-15 20:16:19,506 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9003
2020-02-15 20:16:19,507 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000
2020-02-15 20:16:19,507 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5058
2020-02-15 20:16:19,507 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5056
2020-02-15 20:16:19,507 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9002
2020-02-15 20:16:19,507 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:19,507 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:19,507 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5055
2020-02-15 20:16:19,507 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:19,507 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:19,507 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:19,507 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:19,507 [DEBUG] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9002-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:19,507 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9000-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:19,507 [DEBUG] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9003-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:19,508 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9001
2020-02-15 20:16:19,508 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5057
2020-02-15 20:16:19,508 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:19,508 [DEBUG] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9001-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:19,508 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:19,516 [INFO ] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9002
2020-02-15 20:16:19,516 [INFO ] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9001
2020-02-15 20:16:19,516 [INFO ] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2020-02-15 20:16:19,516 [INFO ] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9003
2020-02-15 20:16:19,641 [INFO ] Test worker org.pytorch.serve.ModelServer - Inference API bind to: https://127.0.0.1:8443
2020-02-15 20:16:19,641 [INFO ] Test worker org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerDomainSocketChannel.
2020-02-15 20:16:19,643 [INFO ] Test worker org.pytorch.serve.ModelServer - Management API bind to: unix:/tmp/management.sock

Gradle suite > Gradle test STANDARD_OUT
2020-02-15 20:16:19,645 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9001.
2020-02-15 20:16:19,645 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9003.
2020-02-15 20:16:19,645 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000.
2020-02-15 20:16:19,645 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9002.
Gradle suite > Gradle test > org.pytorch.serve.util.ConfigManagerTest.test STANDARD_OUT
2020-02-15 20:16:19,689 [DEBUG] Test worker TS_METRICS - [TestMetric1.Milliseconds:null|#Level:Model|#hostname:null,requestID:12345,timestamp:1542157988, TestMetric2.Milliseconds:null|#Level:Model|#hostname:null,requestID:23478,timestamp:1542157988]
2020-02-15 20:16:19,701 [DEBUG] Test worker MODEL_METRICS - [TestMetric1.Milliseconds:null|#Level:Model|#hostname:null,requestID:12345,timestamp:1542157988, TestMetric2.Milliseconds:null|#Level:Model|#hostname:null,requestID:23478,timestamp:1542157988]
2020-02-15 20:16:19,701 [DEBUG] Test worker MODEL_LOG - test model_log
2020-02-15 20:16:19,705 [INFO ] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 20
2020-02-15 20:16:19,705 [INFO ] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 20
2020-02-15 20:16:19,705 [INFO ] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 20
2020-02-15 20:16:19,705 [INFO ] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 20
2020-02-15 20:16:19,706 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9000-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:19,706 [DEBUG] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9001-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:19,706 [DEBUG] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9003-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:19,706 [INFO ] W-9000-noop_1.11 TS_METRICS - W-9000-noop_1.11.ms:295|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797779
2020-02-15 20:16:19,706 [INFO ] W-9001-noop_1.11 TS_METRICS - W-9001-noop_1.11.ms:292|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797779
2020-02-15 20:16:19,706 [INFO ] W-9003-noop_1.11 TS_METRICS - W-9003-noop_1.11.ms:292|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797779
2020-02-15 20:16:19,706 [DEBUG] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9002-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:19,706 [INFO ] W-9002-noop_1.11 TS_METRICS - W-9002-noop_1.11.ms:292|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797779

Gradle suite > Gradle test > org.pytorch.serve.util.ConfigManagerTest.test PASSED
Gradle suite > Gradle test > org.pytorch.serve.util.ConfigManagerTest.testNoEnvVars PASSED
Gradle suite > Gradle test > org.pytorch.serve.CoverageTest.test PASSED

Gradle suite > Gradle test > org.pytorch.serve.ModelServerTest.test STANDARD_OUT
2020-02-15 20:16:20,217 [INFO ] pool-1-thread-5 ACCESS_LOG - /127.0.0.1:37478 "GET /ping HTTP/1.1" 200 6
2020-02-15 20:16:20,217 [INFO ] pool-1-thread-5 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,246 [INFO ] epollEventLoopGroup-3-1 ACCESS_LOG - /127.0.0.1:37478 "OPTIONS / HTTP/1.1" 200 17
2020-02-15 20:16:20,246 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,273 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "OPTIONS / HTTP/1.1" 200 7
2020-02-15 20:16:20,273 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,283 [INFO ] epollEventLoopGroup-3-1 ACCESS_LOG - /127.0.0.1:37478 "GET /api-description HTTP/1.1" 200 4
2020-02-15 20:16:20,284 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,297 [INFO ] epollEventLoopGroup-3-1 ACCESS_LOG - /127.0.0.1:37478 "OPTIONS /predictions/noop HTTP/1.1" 200 3
2020-02-15 20:16:20,297 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,300 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop version: 1.11
2020-02-15 20:16:20,300 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9003-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:20,301 [INFO ] epollEventLoopGroup-4-2 org.pytorch.serve.wlm.WorkerThread - 9003 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:20,302 [WARN ] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend worker thread exception.
java.lang.IllegalMonitorStateException
    at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
    at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
    at org.pytorch.serve.wlm.Model.pollBatch(Model.java:175)
    at org.pytorch.serve.wlm.BatchAggregator.getRequest(BatchAggregator.java:33)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:123)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:20,302 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9002-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:20,303 [DEBUG] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9003-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:20,304 [WARN ] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend worker thread exception.
java.lang.IllegalMonitorStateException
    at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
    at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
    at org.pytorch.serve.wlm.Model.pollBatch(Model.java:175)
    at org.pytorch.serve.wlm.BatchAggregator.getRequest(BatchAggregator.java:33)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:123)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:20,304 [INFO ] epollEventLoopGroup-4-1 org.pytorch.serve.wlm.WorkerThread - 9002 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:20,304 [DEBUG] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:20,304 [DEBUG] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9002-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:20,304 [DEBUG] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:20,305 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9001-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:20,305 [WARN ] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend worker thread exception.
java.lang.IllegalMonitorStateException
    at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
    at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
    at org.pytorch.serve.wlm.Model.pollBatch(Model.java:175)
    at org.pytorch.serve.wlm.BatchAggregator.getRequest(BatchAggregator.java:33)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:123)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:20,305 [INFO ] epollEventLoopGroup-4-3 org.pytorch.serve.wlm.WorkerThread - 9001 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:20,305 [DEBUG] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9001-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:20,305 [DEBUG] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:20,306 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9000-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:20,306 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:20,306 [INFO ] epollEventLoopGroup-4-4 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:20,306 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9000-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:20,307 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
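An aside on the repeated WARN entries above: `IllegalMonitorStateException` is thrown by `ReentrantLock.unlock()` whenever the calling thread does not currently hold the lock, which is consistent with the trace pointing into `Model.pollBatch` during scale-down. A minimal sketch of that failure mode and the conventional lock()/try/finally fix (`LockDemo` is a hypothetical illustration, not TorchServe code):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {

    // Reproduces the exception class seen in the log: unlocking a
    // ReentrantLock that the current thread never acquired throws
    // IllegalMonitorStateException from tryRelease().
    public static boolean unlockWithoutHolding() {
        ReentrantLock lock = new ReentrantLock();
        try {
            lock.unlock(); // current thread is not the owner
            return false;  // not reached
        } catch (IllegalMonitorStateException e) {
            return true;
        }
    }

    // The usual pattern: acquire and release on the same thread,
    // with unlock() in a finally block so it runs exactly once.
    public static boolean lockThenUnlock() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        try {
            return lock.isHeldByCurrentThread();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println("unlock without hold throws: " + unlockWithoutHolding());
        System.out.println("balanced lock/unlock ok: " + lockThenUnlock());
    }
}
```

The same exception appears if one thread tries to release a lock acquired by another, which is the typical hazard when a worker is interrupted mid-poll during a scale-down.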
2020-02-15 20:16:20,310 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop unregistered.
2020-02-15 20:16:20,310 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop HTTP/1.1" 200 10
2020-02-15 20:16:20,310 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,313 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop_v1.0
2020-02-15 20:16:20,314 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop_v1.0
2020-02-15 20:16:20,314 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop_v1.0 loaded.
2020-02-15 20:16:20,314 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noop_v1.0&runtime=python&synchronous=false HTTP/1.1" 200 2
2020-02-15 20:16:20,314 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,318 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop_v1.0, count: 1
2020-02-15 20:16:20,385 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9004
2020-02-15 20:16:20,385 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5104
2020-02-15 20:16:20,385 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:20,385 [DEBUG] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9004-noop_v1.0_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:20,385 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:20,385 [INFO ] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9004
2020-02-15 20:16:20,386 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9004.
2020-02-15 20:16:20,389 [INFO ] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3
2020-02-15 20:16:20,389 [INFO ] W-9004-noop_v1.0_1.11 ACCESS_LOG - 0.0.0.0 "PUT /models/noop_v1.0?synchronous=true&min_worker=1 HTTP/1.1" 200 71
2020-02-15 20:16:20,389 [INFO ] W-9004-noop_v1.0_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,389 [DEBUG] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9004-noop_v1.0_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:20,389 [INFO ] W-9004-noop_v1.0_1.11 TS_METRICS - W-9004-noop_v1.0_1.11.ms:71|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797780
2020-02-15 20:16:20,391 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop_v1.0, count: 2
2020-02-15 20:16:20,392 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "PUT /models/noop_v1.0?min_worker=2 HTTP/1.1" 202 1
2020-02-15 20:16:20,392 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,395 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models?limit=200&nextPageToken=X HTTP/1.1" 200 1
2020-02-15 20:16:20,395 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,401 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noop_v1.0 HTTP/1.1" 200 4
2020-02-15 20:16:20,401 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,409 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.archive.ModelArchive - model folder already exists: 29385dfc880480adb4ff8a26d9461d72539fb887
2020-02-15 20:16:20,409 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop
2020-02-15 20:16:20,409 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop
2020-02-15 20:16:20,409 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop loaded.
2020-02-15 20:16:20,410 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop, count: 1
2020-02-15 20:16:20,457 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9005
2020-02-15 20:16:20,457 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5109
2020-02-15 20:16:20,457 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:20,457 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:20,457 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9005-noop_v1.0_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:20,457 [INFO ] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9005
2020-02-15 20:16:20,458 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9005.
2020-02-15 20:16:20,459 [INFO ] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:20,459 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9005-noop_v1.0_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:20,459 [INFO ] W-9005-noop_v1.0_1.11 TS_METRICS - W-9005-noop_v1.0_1.11.ms:68|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797780
2020-02-15 20:16:20,476 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9006
2020-02-15 20:16:20,476 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5112
2020-02-15 20:16:20,476 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:20,476 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:20,476 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9006-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:20,476 [INFO ] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9006
2020-02-15 20:16:20,477 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9006.
2020-02-15 20:16:20,478 [INFO ] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:20,479 [INFO ] W-9006-noop_1.11 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noop&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 71
2020-02-15 20:16:20,479 [INFO ] W-9006-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,479 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9006-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:20,479 [INFO ] W-9006-noop_1.11 TS_METRICS - W-9006-noop_1.11.ms:69|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797780
2020-02-15 20:16:20,481 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.archive.ModelArchive - model folder already exists: 29385dfc880480adb4ff8a26d9461d72539fb887
2020-02-15 20:16:20,482 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noopversioned
2020-02-15 20:16:20,482 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noopversioned
2020-02-15 20:16:20,482 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noopversioned loaded.
2020-02-15 20:16:20,482 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noopversioned, count: 1
2020-02-15 20:16:20,548 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9007
2020-02-15 20:16:20,548 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5119
2020-02-15 20:16:20,548 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:20,548 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - W-9007-noopversioned_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:20,548 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:20,548 [INFO ] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9007
2020-02-15 20:16:20,549 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9007.
2020-02-15 20:16:20,550 [INFO ] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:20,551 [INFO ] W-9007-noopversioned_1.11 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noopversioned&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 71
2020-02-15 20:16:20,551 [INFO ] W-9007-noopversioned_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,551 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - W-9007-noopversioned_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:20,551 [INFO ] W-9007-noopversioned_1.11 TS_METRICS - W-9007-noopversioned_1.11.ms:69|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797780
2020-02-15 20:16:20,556 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.21 for model noopversioned
2020-02-15 20:16:20,556 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.21 for model noopversioned
2020-02-15 20:16:20,556 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noopversioned loaded.
2020-02-15 20:16:20,556 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noopversioned, count: 1
2020-02-15 20:16:20,621 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9008
2020-02-15 20:16:20,621 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5124
2020-02-15 20:16:20,621 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:20,621 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - W-9008-noopversioned_1.21 State change null -> WORKER_STARTED
2020-02-15 20:16:20,621 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:20,621 [INFO ] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9008
2020-02-15 20:16:20,622 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9008.
2020-02-15 20:16:20,625 [INFO ] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:16:20,625 [INFO ] W-9008-noopversioned_1.21 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop_v2.mar&model_name=noopversioned&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 70
2020-02-15 20:16:20,626 [INFO ] W-9008-noopversioned_1.21 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,626 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - W-9008-noopversioned_1.21 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:20,626 [INFO ] W-9008-noopversioned_1.21 TS_METRICS - W-9008-noopversioned_1.21.ms:70|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797780
2020-02-15 20:16:20,628 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noopversioned HTTP/1.1" 200 1
2020-02-15 20:16:20,628 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,631 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noopversioned/all HTTP/1.1" 200 1
2020-02-15 20:16:20,631 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,634 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noopversioned/1.11 HTTP/1.1" 200 1
2020-02-15 20:16:20,634 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,649 [INFO ] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:20,649 [INFO ] W-9008-noopversioned_1.21-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noopversioned,Level:Model|#hostname:ip-172-31-17-115,requestID:5ab63b2a-06ae-441e-bcce-c9a9565f0ecf,timestamp:1581797780
2020-02-15 20:16:20,649 [INFO ] W-9008-noopversioned_1.21-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noopversioned,Level:Model|#hostname:ip-172-31-17-115,requestID:5ab63b2a-06ae-441e-bcce-c9a9565f0ecf,timestamp:1581797780
2020-02-15 20:16:20,649 [INFO ] W-9008-noopversioned_1.21 ACCESS_LOG - /127.0.0.1:37478 "POST /predictions/noopversioned/1.21 HTTP/1.1" 200 12
2020-02-15 20:16:20,649 [INFO ] W-9008-noopversioned_1.21 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,649 [INFO ] W-9008-noopversioned_1.21-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noopversioned,Level:Model|#hostname:ip-172-31-17-115,requestID:5ab63b2a-06ae-441e-bcce-c9a9565f0ecf,timestamp:1581797780
2020-02-15 20:16:20,650 [INFO ] W-9008-noopversioned_1.21-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.08|#ModelName:noopversioned,Level:Model|#hostname:ip-172-31-17-115,requestID:5ab63b2a-06ae-441e-bcce-c9a9565f0ecf,timestamp:1581797780
2020-02-15 20:16:20,650 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 2
2020-02-15 20:16:20,652 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noopversioned
2020-02-15 20:16:20,652 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "PUT /models/noopversioned/1.11/set-default HTTP/1.1" 200 0
2020-02-15 20:16:20,652 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,654 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop version: 1.11
2020-02-15 20:16:20,654 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9006-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:20,654 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:20,654 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9006-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:20,654 [INFO ] epollEventLoopGroup-4-7 org.pytorch.serve.wlm.WorkerThread - 9006 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:20,654 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:20,656 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop unregistered.
2020-02-15 20:16:20,656 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop HTTP/1.1" 200 2
2020-02-15 20:16:20,656 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,661 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop
2020-02-15 20:16:20,661 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop
2020-02-15 20:16:20,661 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop loaded.
2020-02-15 20:16:20,661 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop, count: 1
2020-02-15 20:16:20,728 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9009
2020-02-15 20:16:20,728 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5129
2020-02-15 20:16:20,728 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:20,728 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:20,728 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9009-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:20,728 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9009
2020-02-15 20:16:20,729 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9009.
2020-02-15 20:16:20,732 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:16:20,732 [INFO ] W-9009-noop_1.11 ACCESS_LOG - 0.0.0.0 "POST /models HTTP/1.1" 200 73
2020-02-15 20:16:20,732 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,733 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9009-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:20,733 [INFO ] W-9009-noop_1.11 TS_METRICS - W-9009-noop_1.11.ms:72|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797780
2020-02-15 20:16:20,737 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:ef4714a0-beb8-44ad-bfd4-54f11a9c7367,timestamp:1581797780
2020-02-15 20:16:20,737 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:16:20,737 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:ef4714a0-beb8-44ad-bfd4-54f11a9c7367,timestamp:1581797780
2020-02-15 20:16:20,737 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37478 "POST /predictions/noop HTTP/1.1" 200 2
2020-02-15 20:16:20,737 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,737 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:ef4714a0-beb8-44ad-bfd4-54f11a9c7367,timestamp:1581797780
2020-02-15 20:16:20,737 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.08|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:ef4714a0-beb8-44ad-bfd4-54f11a9c7367,timestamp:1581797780
2020-02-15 20:16:20,737 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:20,740 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:1e5d7258-8278-475b-a070-da97aa06d94d,timestamp:1581797780
2020-02-15 20:16:20,740 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:1e5d7258-8278-475b-a070-da97aa06d94d,timestamp:1581797780
2020-02-15 20:16:20,741 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:16:20,741 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:1e5d7258-8278-475b-a070-da97aa06d94d,timestamp:1581797780
2020-02-15 20:16:20,741 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37478 "POST /predictions/noop HTTP/1.1" 200 1
2020-02-15 20:16:20,741 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:1e5d7258-8278-475b-a070-da97aa06d94d,timestamp:1581797780
2020-02-15 20:16:20,741 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,741 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:20,745 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:23bd2aa2-db9f-4b99-8cc0-32524562a025,timestamp:1581797780
2020-02-15 20:16:20,745 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:16:20,745 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:23bd2aa2-db9f-4b99-8cc0-32524562a025,timestamp:1581797780
2020-02-15 20:16:20,745 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37478 "POST /predictions/noop HTTP/1.1" 200 1
2020-02-15 20:16:20,745 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,745 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:23bd2aa2-db9f-4b99-8cc0-32524562a025,timestamp:1581797780
2020-02-15 20:16:20,745 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:23bd2aa2-db9f-4b99-8cc0-32524562a025,timestamp:1581797780
2020-02-15 20:16:20,745 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:20,749 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:b3ed6fd9-7d69-49f6-a33a-98e3e7e586f2,timestamp:1581797780
2020-02-15 20:16:20,749 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:b3ed6fd9-7d69-49f6-a33a-98e3e7e586f2,timestamp:1581797780
2020-02-15 20:16:20,749 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:20,749 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:b3ed6fd9-7d69-49f6-a33a-98e3e7e586f2,timestamp:1581797780
2020-02-15 20:16:20,750 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37478 "POST /invocations?model_name=noop HTTP/1.1" 200 2
2020-02-15 20:16:20,750 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,750 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:b3ed6fd9-7d69-49f6-a33a-98e3e7e586f2,timestamp:1581797780
2020-02-15 20:16:20,750 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:20,774 [INFO ] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:20,774 [INFO ] W-9004-noop_v1.0_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop_v1.0,Level:Model|#hostname:ip-172-31-17-115,requestID:f5ba78aa-80c0-4b05-92a5-0b56c2abe754,timestamp:1581797780
2020-02-15 20:16:20,774 [INFO ] W-9004-noop_v1.0_1.11 ACCESS_LOG - /127.0.0.1:37478 "POST /invocations HTTP/1.1" 200 11
2020-02-15 20:16:20,774 [INFO ] W-9004-noop_v1.0_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,774 [INFO ] W-9004-noop_v1.0_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop_v1.0,Level:Model|#hostname:ip-172-31-17-115,requestID:f5ba78aa-80c0-4b05-92a5-0b56c2abe754,timestamp:1581797780
2020-02-15 20:16:20,774 [DEBUG] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.Job - Waiting time: 1, Backend time: 1
2020-02-15 20:16:20,774 [INFO ] W-9004-noop_v1.0_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop_v1.0,Level:Model|#hostname:ip-172-31-17-115,requestID:f5ba78aa-80c0-4b05-92a5-0b56c2abe754,timestamp:1581797780
2020-02-15 20:16:20,774 [INFO ] W-9004-noop_v1.0_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.16|#ModelName:noop_v1.0,Level:Model|#hostname:ip-172-31-17-115,requestID:f5ba78aa-80c0-4b05-92a5-0b56c2abe754,timestamp:1581797780
2020-02-15 20:16:20,777 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:20,777 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37478 "POST /models/noop/invoke HTTP/1.1" 200 1
2020-02-15 20:16:20,777 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:8a1207e9-b0b7-461a-9f2e-237df1c4c96b,timestamp:1581797780
2020-02-15 20:16:20,777 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,777 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:20,778 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:8a1207e9-b0b7-461a-9f2e-237df1c4c96b,timestamp:1581797780
2020-02-15 20:16:20,778 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:8a1207e9-b0b7-461a-9f2e-237df1c4c96b,timestamp:1581797780
2020-02-15 20:16:20,778 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.11|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:8a1207e9-b0b7-461a-9f2e-237df1c4c96b,timestamp:1581797780
2020-02-15 20:16:20,784 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:fa598967-161a-4d9f-9851-7ab044fd26cf,timestamp:1581797780
2020-02-15 20:16:20,784 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:fa598967-161a-4d9f-9851-7ab044fd26cf,timestamp:1581797780
2020-02-15 20:16:20,784 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:20,784 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:fa598967-161a-4d9f-9851-7ab044fd26cf,timestamp:1581797780
2020-02-15 20:16:20,784 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37478 "POST /models/noop/invoke HTTP/1.1" 200 2
2020-02-15 20:16:20,784 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,784 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:fa598967-161a-4d9f-9851-7ab044fd26cf,timestamp:1581797780
2020-02-15 20:16:20,784 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:20,787 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:72d6a316-6f66-4e4c-b224-96984be5e239,timestamp:1581797780
2020-02-15 20:16:20,787 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:20,787 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:72d6a316-6f66-4e4c-b224-96984be5e239,timestamp:1581797780
2020-02-15 20:16:20,787 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:72d6a316-6f66-4e4c-b224-96984be5e239,timestamp:1581797780
2020-02-15 20:16:20,787 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37478 "GET /noop/predict?data=test HTTP/1.1" 200 1
2020-02-15 20:16:20,787 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:20,787 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:72d6a316-6f66-4e4c-b224-96984be5e239,timestamp:1581797780
2020-02-15 20:16:20,787 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:21,498 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:16:21,498 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:d914f119-bb5b-4d1e-8f83-7f1463569374,timestamp:1581797781
2020-02-15 20:16:21,498 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:d914f119-bb5b-4d1e-8f83-7f1463569374,timestamp:1581797781
2020-02-15 20:16:21,498 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:d914f119-bb5b-4d1e-8f83-7f1463569374,timestamp:1581797781
2020-02-15 20:16:21,498 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.1|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:d914f119-bb5b-4d1e-8f83-7f1463569374,timestamp:1581797781
2020-02-15 20:16:21,498 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37478 "POST /predictions/noop HTTP/1.1" 200 27
2020-02-15 20:16:21,498 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,498 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 20
2020-02-15 20:16:21,501 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model
noop-config 2020-02-15 20:16:21,501 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model noop-config 2020-02-15 20:16:21,501 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop-config loaded. 2020-02-15 20:16:21,501 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop-config, count: 1 2020-02-15 20:16:21,593 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9010 2020-02-15 20:16:21,593 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5136 2020-02-15 20:16:21,594 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started. 2020-02-15 20:16:21,594 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1 2020-02-15 20:16:21,594 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-noop-config_1.0 State change null -> WORKER_STARTED 2020-02-15 20:16:21,594 [INFO ] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9010 2020-02-15 20:16:21,595 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9010. 
2020-02-15 20:16:21,598 [INFO ] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3 2020-02-15 20:16:21,598 [INFO ] W-9010-noop-config_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop-v1.0-config-tests.mar&model_name=noop-config&initial_workers=1&synchronous=true HTTP/1.1" 200 98 2020-02-15 20:16:21,598 [INFO ] W-9010-noop-config_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null 2020-02-15 20:16:21,598 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-noop-config_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2020-02-15 20:16:21,598 [INFO ] W-9010-noop-config_1.0 TS_METRICS - W-9010-noop-config_1.0.ms:97|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797781 2020-02-15 20:16:21,600 [INFO ] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0 2020-02-15 20:16:21,600 [INFO ] W-9010-noop-config_1.0-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:a369248a-25f5-41b2-8370-b7b3598313d5,timestamp:1581797781 2020-02-15 20:16:21,600 [INFO ] W-9010-noop-config_1.0 ACCESS_LOG - /127.0.0.1:37478 "POST /predictions/noop-config HTTP/1.1" 200 1 2020-02-15 20:16:21,601 [INFO ] W-9010-noop-config_1.0-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:a369248a-25f5-41b2-8370-b7b3598313d5,timestamp:1581797781 2020-02-15 20:16:21,601 [INFO ] W-9010-noop-config_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null 2020-02-15 20:16:21,601 [INFO ] W-9010-noop-config_1.0-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.01|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:a369248a-25f5-41b2-8370-b7b3598313d5,timestamp:1581797781 2020-02-15 20:16:21,601 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend 
time: 1 2020-02-15 20:16:21,601 [INFO ] W-9010-noop-config_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.11|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:a369248a-25f5-41b2-8370-b7b3598313d5,timestamp:1581797781 2020-02-15 20:16:21,602 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop-config version: 1.0 2020-02-15 20:16:21,602 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9010-noop-config_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN 2020-02-15 20:16:21,602 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down. 2020-02-15 20:16:21,602 [INFO ] epollEventLoopGroup-4-11 org.pytorch.serve.wlm.WorkerThread - 9010 Worker disconnected. WORKER_SCALED_DOWN 2020-02-15 20:16:21,602 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-noop-config_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED 2020-02-15 20:16:21,602 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call. 2020-02-15 20:16:21,603 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop-config unregistered. 2020-02-15 20:16:21,603 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop-config HTTP/1.1" 200 1 2020-02-15 20:16:21,603 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null 2020-02-15 20:16:21,606 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model noop-config 2020-02-15 20:16:21,606 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model noop-config 2020-02-15 20:16:21,606 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop-config loaded. 
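The MODEL_METRICS entries above use a StatsD-like payload: a dotted metric name with its unit, a value, then `|#`-separated dimension and tag groups of `key:value` pairs. As a minimal sketch of how such a payload can be pulled apart (the `parse_model_metric` helper is my own, not part of TorchServe):

```python
def parse_model_metric(payload):
    """Parse a TorchServe MODEL_METRICS payload such as:
    PredictionTime.Milliseconds:0.11|#ModelName:noop,Level:Model|#hostname:...,requestID:...,timestamp:...
    """
    # Three groups separated by "|#": value, model dimensions, host/request tags.
    value_part, dims_part, tags_part = payload.split("|#")
    name, value = value_part.split(":")
    metric, unit = name.rsplit(".", 1)
    dims = dict(kv.split(":", 1) for kv in dims_part.split(","))
    tags = dict(kv.split(":", 1) for kv in tags_part.split(","))
    return {"metric": metric, "unit": unit, "value": float(value), **dims, **tags}

line = ("PredictionTime.Milliseconds:0.11|#ModelName:noop,Level:Model"
        "|#hostname:ip-172-31-17-115,"
        "requestID:8a1207e9-b0b7-461a-9f2e-237df1c4c96b,timestamp:1581797780")
print(parse_model_metric(line)["value"])  # 0.11
```

This only handles the exact shape seen in this log (exactly two `|#` groups, a numeric value); a robust consumer would tolerate missing groups.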
2020-02-15 20:16:21,606 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop-config, count: 1
2020-02-15 20:16:21,671 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9011
2020-02-15 20:16:21,671 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5142
2020-02-15 20:16:21,671 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:21,671 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:21,671 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9011-noop-config_1.0 State change null -> WORKER_STARTED
2020-02-15 20:16:21,671 [INFO ] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9011
2020-02-15 20:16:21,672 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9011.
2020-02-15 20:16:21,675 [INFO ] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3
2020-02-15 20:16:21,675 [INFO ] W-9011-noop-config_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop-v1.0-config-tests.mar&model_name=noop-config&initial_workers=1&synchronous=true HTTP/1.1" 200 70
2020-02-15 20:16:21,675 [INFO ] W-9011-noop-config_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,675 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9011-noop-config_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:21,675 [INFO ] W-9011-noop-config_1.0 TS_METRICS - W-9011-noop-config_1.0.ms:69|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797781
2020-02-15 20:16:21,678 [INFO ] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:16:21,678 [INFO ] W-9011-noop-config_1.0-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:b10a7162-6eac-433b-91aa-2c7227291a6e,timestamp:1581797781
2020-02-15 20:16:21,678 [INFO ] W-9011-noop-config_1.0 ACCESS_LOG - /127.0.0.1:37478 "POST /predictions/noop-config HTTP/1.1" 200 2
2020-02-15 20:16:21,678 [INFO ] W-9011-noop-config_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,678 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.Job - Waiting time: 1, Backend time: 1
2020-02-15 20:16:21,678 [INFO ] W-9011-noop-config_1.0-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:b10a7162-6eac-433b-91aa-2c7227291a6e,timestamp:1581797781
2020-02-15 20:16:21,678 [INFO ] W-9011-noop-config_1.0-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.01|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:b10a7162-6eac-433b-91aa-2c7227291a6e,timestamp:1581797781
2020-02-15 20:16:21,678 [INFO ] W-9011-noop-config_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.1|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:b10a7162-6eac-433b-91aa-2c7227291a6e,timestamp:1581797781
2020-02-15 20:16:21,679 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop-config version: 1.0
2020-02-15 20:16:21,679 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9011-noop-config_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:21,679 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:21,679 [INFO ] epollEventLoopGroup-4-12 org.pytorch.serve.wlm.WorkerThread - 9011 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:21,679 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9011-noop-config_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:21,679 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:21,681 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop-config unregistered.
2020-02-15 20:16:21,681 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop-config HTTP/1.1" 200 2
2020-02-15 20:16:21,681 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,683 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model respheader
2020-02-15 20:16:21,683 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model respheader
2020-02-15 20:16:21,683 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model respheader loaded.
2020-02-15 20:16:21,683 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: respheader, count: 1
2020-02-15 20:16:21,748 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9012
2020-02-15 20:16:21,748 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5147
2020-02-15 20:16:21,748 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:21,748 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:21,748 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-respheader_1.0 State change null -> WORKER_STARTED
2020-02-15 20:16:21,748 [INFO ] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9012
2020-02-15 20:16:21,749 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9012.
2020-02-15 20:16:21,752 [INFO ] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3
2020-02-15 20:16:21,752 [INFO ] W-9012-respheader_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=respheader-test.mar&model_name=respheader&initial_workers=1&synchronous=true HTTP/1.1" 200 70
2020-02-15 20:16:21,752 [INFO ] W-9012-respheader_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,752 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-respheader_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:21,752 [INFO ] W-9012-respheader_1.0 TS_METRICS - W-9012-respheader_1.0.ms:69|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797781
2020-02-15 20:16:21,754 [INFO ] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:16:21,754 [INFO ] W-9012-respheader_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.03|#ModelName:respheader,Level:Model|#hostname:ip-172-31-17-115,requestID:d582407e-c307-4ccb-b2aa-45949ec4864f,timestamp:1581797781
2020-02-15 20:16:21,754 [INFO ] W-9012-respheader_1.0 ACCESS_LOG - /127.0.0.1:37478 "POST /predictions/respheader HTTP/1.1" 200 1
2020-02-15 20:16:21,754 [INFO ] W-9012-respheader_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,754 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:21,755 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: respheader version: 1.0
2020-02-15 20:16:21,755 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9012-respheader_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:21,755 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:21,755 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-respheader_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:21,755 [INFO ] epollEventLoopGroup-4-13 org.pytorch.serve.wlm.WorkerThread - 9012 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:21,755 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:21,757 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model respheader unregistered.
2020-02-15 20:16:21,757 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/respheader HTTP/1.1" 200 2
2020-02-15 20:16:21,757 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,758 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model nomanifest
2020-02-15 20:16:21,758 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model nomanifest
2020-02-15 20:16:21,758 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model nomanifest loaded.
2020-02-15 20:16:21,758 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: nomanifest, count: 1
2020-02-15 20:16:21,822 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9013
2020-02-15 20:16:21,822 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5152
2020-02-15 20:16:21,822 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:21,822 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:21,822 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-nomanifest_1.0 State change null -> WORKER_STARTED
2020-02-15 20:16:21,822 [INFO ] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9013
2020-02-15 20:16:21,823 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9013.
2020-02-15 20:16:21,826 [INFO ] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3
2020-02-15 20:16:21,826 [INFO ] W-9013-nomanifest_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop-no-manifest.mar&model_name=nomanifest&initial_workers=1&synchronous=true HTTP/1.1" 200 68
2020-02-15 20:16:21,826 [INFO ] W-9013-nomanifest_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,826 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-nomanifest_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:21,826 [INFO ] W-9013-nomanifest_1.0 TS_METRICS - W-9013-nomanifest_1.0.ms:67|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797781
2020-02-15 20:16:21,827 [INFO ] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:21,827 [INFO ] W-9013-nomanifest_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.01|#ModelName:nomanifest,Level:Model|#hostname:ip-172-31-17-115,requestID:ea0be406-e400-4840-abc9-31a7fb526eb3,timestamp:1581797781
2020-02-15 20:16:21,827 [INFO ] W-9013-nomanifest_1.0 ACCESS_LOG - /127.0.0.1:37478 "POST /predictions/nomanifest HTTP/1.1" 200 0
2020-02-15 20:16:21,827 [INFO ] W-9013-nomanifest_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,828 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:21,829 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: nomanifest version: 1.0
2020-02-15 20:16:21,829 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9013-nomanifest_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:21,829 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:21,829 [INFO ] epollEventLoopGroup-4-14 org.pytorch.serve.wlm.WorkerThread - 9013 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:21,829 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-nomanifest_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:21,829 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:21,830 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model nomanifest unregistered.
2020-02-15 20:16:21,830 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/nomanifest HTTP/1.1" 200 1
2020-02-15 20:16:21,830 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,832 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.archive.ModelArchive - model folder already exists: 29385dfc880480adb4ff8a26d9461d72539fb887
2020-02-15 20:16:21,832 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop_default_model_workers
2020-02-15 20:16:21,832 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop_default_model_workers
2020-02-15 20:16:21,832 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop_default_model_workers loaded.
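The ACCESS_LOG entries above share one shape: a client address, a quoted request line, then the HTTP status and the request time in milliseconds. A small sketch of extracting those fields (the `parse_access_log` helper and its regex are my own, inferred from this log, not a TorchServe API):

```python
import re

# Matches entries like:
#   /127.0.0.1:37478 "POST /predictions/nomanifest HTTP/1.1" 200 0
# Groups: client, method, path, protocol, status code, elapsed milliseconds.
ACCESS_RE = re.compile(r'^(\S+) "(\S+) (\S+) ([^"]+)" (\d{3}) (\d+)$')

def parse_access_log(entry):
    m = ACCESS_RE.match(entry)
    if not m:
        return None
    client, method, path, proto, status, millis = m.groups()
    return {"client": client, "method": method, "path": path,
            "protocol": proto, "status": int(status), "ms": int(millis)}

rec = parse_access_log('/127.0.0.1:37478 "POST /predictions/nomanifest HTTP/1.1" 200 0')
print(rec["status"], rec["ms"])  # 200 0
```

Filtering such records by `status >= 500` is one quick way to pick the failing requests (the 507 entries below) out of a long run.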
2020-02-15 20:16:21,832 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop_default_model_workers, count: 1
2020-02-15 20:16:21,897 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9014
2020-02-15 20:16:21,897 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5157
2020-02-15 20:16:21,897 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:21,898 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - W-9014-noop_default_model_workers_1.11 State change null -> WORKER_STARTED
2020-02-15 20:16:21,898 [INFO ] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9014
2020-02-15 20:16:21,898 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:21,898 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9014.
2020-02-15 20:16:21,899 [INFO ] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:21,899 [INFO ] W-9014-noop_default_model_workers_1.11 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noop_default_model_workers&initial_workers=1&synchronous=true HTTP/1.1" 200 68
2020-02-15 20:16:21,899 [INFO ] W-9014-noop_default_model_workers_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,899 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - W-9014-noop_default_model_workers_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:21,900 [INFO ] W-9014-noop_default_model_workers_1.11 TS_METRICS - W-9014-noop_default_model_workers_1.11.ms:68|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797781
2020-02-15 20:16:21,901 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noop_default_model_workers HTTP/1.1" 200 1
2020-02-15 20:16:21,901 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,902 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop_default_model_workers version: 1.11
2020-02-15 20:16:21,902 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9014-noop_default_model_workers_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:21,902 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:21,903 [INFO ] epollEventLoopGroup-4-15 org.pytorch.serve.wlm.WorkerThread - 9014 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:21,903 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - W-9014-noop_default_model_workers_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:21,903 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:21,904 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop_default_model_workers unregistered.
2020-02-15 20:16:21,904 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop_default_model_workers HTTP/1.1" 200 2
2020-02-15 20:16:21,904 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,908 [DEBUG] epollEventLoopGroup-3-3 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model memory_error
2020-02-15 20:16:21,908 [DEBUG] epollEventLoopGroup-3-3 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model memory_error
2020-02-15 20:16:21,908 [INFO ] epollEventLoopGroup-3-3 org.pytorch.serve.wlm.ModelManager - Model memory_error loaded.
2020-02-15 20:16:21,908 [DEBUG] epollEventLoopGroup-3-3 org.pytorch.serve.wlm.ModelManager - updateModel: memory_error, count: 1
2020-02-15 20:16:21,973 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9015
2020-02-15 20:16:21,973 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5164
2020-02-15 20:16:21,973 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:21,973 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:21,973 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-memory_error_1.0 State change null -> WORKER_STARTED
2020-02-15 20:16:21,973 [INFO ] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9015
2020-02-15 20:16:21,974 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9015.
2020-02-15 20:16:21,976 [INFO ] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:16:21,977 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Inference time: 3
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in <module>
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 110, in handle_connection
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("{} - {}".format(code, result))
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: 507 - System out of memory
2020-02-15 20:16:21,977 [INFO ] epollEventLoopGroup-4-16 org.pytorch.serve.wlm.WorkerThread - 9015 Worker disconnected. WORKER_STARTED
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=loading-memory-error.mar&model_name=memory_error&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 507 70
2020-02-15 20:16:21,977 [INFO ] W-9015-memory_error_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:21,977 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: memory_error version: 1.0
2020-02-15 20:16:21,977 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-memory_error_1.0 State change WORKER_STARTED -> WORKER_SCALED_DOWN
2020-02-15 20:16:21,977 [WARN ] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkLoadManager - WorkerThread interrupted during waitFor, possible asynch resource cleanup.
2020-02-15 20:16:21,977 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model memory_error
2020-02-15 20:16:21,978 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model memory_error
2020-02-15 20:16:21,978 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-memory_error_1.0 State change WORKER_SCALED_DOWN -> WORKER_ERROR
2020-02-15 20:16:21,978 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-memory_error_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:21,978 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
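The "State change" lines trace a simple worker lifecycle: null -> WORKER_STARTED -> WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN -> WORKER_STOPPED, with the failing memory_error worker leaving WORKER_STARTED directly and also hitting WORKER_ERROR. A tiny validator for that lifecycle, with the transition table inferred only from the transitions observed in this log (it is not TorchServe's actual state machine):

```python
# Transitions observed in this log; inferred, not taken from TorchServe source.
ALLOWED = {
    None: {"WORKER_STARTED"},
    "WORKER_STARTED": {"WORKER_MODEL_LOADED", "WORKER_SCALED_DOWN"},
    "WORKER_MODEL_LOADED": {"WORKER_SCALED_DOWN"},
    "WORKER_SCALED_DOWN": {"WORKER_STOPPED", "WORKER_ERROR"},
}

def validate(transitions):
    """Return True if every (old, new) pair is an observed transition."""
    return all(new in ALLOWED.get(old, set()) for old, new in transitions)

# The W-9010 worker's full lifecycle from the log:
lifecycle = [(None, "WORKER_STARTED"),
             ("WORKER_STARTED", "WORKER_MODEL_LOADED"),
             ("WORKER_MODEL_LOADED", "WORKER_SCALED_DOWN"),
             ("WORKER_SCALED_DOWN", "WORKER_STOPPED")]
print(validate(lifecycle))  # True
```

Running a scraped sequence of state changes through a check like this is a quick way to spot workers that died early, such as W-9015 above, whose path skips WORKER_MODEL_LOADED.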
2020-02-15 20:16:21,982 [DEBUG] epollEventLoopGroup-3-4 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model pred-err 2020-02-15 20:16:21,982 [DEBUG] epollEventLoopGroup-3-4 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model pred-err 2020-02-15 20:16:21,982 [INFO ] epollEventLoopGroup-3-4 org.pytorch.serve.wlm.ModelManager - Model pred-err loaded. 2020-02-15 20:16:21,982 [DEBUG] epollEventLoopGroup-3-4 org.pytorch.serve.wlm.ModelManager - updateModel: pred-err, count: 1 2020-02-15 20:16:22,047 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9016 2020-02-15 20:16:22,047 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5171 2020-02-15 20:16:22,047 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started. 2020-02-15 20:16:22,047 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1 2020-02-15 20:16:22,047 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - W-9016-pred-err_1.0 State change null -> WORKER_STARTED 2020-02-15 20:16:22,047 [INFO ] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9016 2020-02-15 20:16:22,048 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9016. 
2020-02-15 20:16:22,050 [INFO ] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:16:22,050 [INFO ] W-9016-pred-err_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=prediction-memory-error.mar&model_name=pred-err&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 69
2020-02-15 20:16:22,050 [INFO ] W-9016-pred-err_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,050 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - W-9016-pred-err_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:22,050 [INFO ] W-9016-pred-err_1.0 TS_METRICS - W-9016-pred-err_1.0.ms:68|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797782
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - System out of memory
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/service.py", line 100, in predict
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ret = self._entry_point(input_batch, self.context)
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/e625c5ab6b0f6b74bf6a766e5cad578d6548cab8/service.py", line 9, in handle
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise MemoryError("Some Memory Error")
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - MemoryError: Some Memory Error
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0 ACCESS_LOG - /127.0.0.1:37480 "POST /predictions/pred-err HTTP/1.1" 507 1
2020-02-15 20:16:22,074 [INFO ] W-9016-pred-err_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,075 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Inference time: 2
2020-02-15 20:16:22,077 [DEBUG] epollEventLoopGroup-3-6 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: pred-err version: 1.0
2020-02-15 20:16:22,078 [DEBUG] epollEventLoopGroup-3-6 org.pytorch.serve.wlm.WorkerThread - W-9016-pred-err_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:22,078 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:22,078 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - W-9016-pred-err_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:22,078 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:22,078 [INFO ] epollEventLoopGroup-4-17 org.pytorch.serve.wlm.WorkerThread - 9016 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:22,079 [INFO ] epollEventLoopGroup-3-6 org.pytorch.serve.wlm.ModelManager - Model pred-err unregistered.
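The register and unregister calls recorded in the ACCESS_LOG entries can be reproduced against the management API. A minimal sketch that only builds the request URLs; the paths and query parameters are copied from the log, while the host and port (TorchServe's default management port 8081) are assumptions:

```python
from urllib.parse import urlencode

MANAGEMENT = "http://localhost:8081"  # assumed default management endpoint

def register_url(mar_file, model_name, initial_workers=1, synchronous=True):
    """POST target matching the registration lines in the access log."""
    query = urlencode({
        "url": mar_file,
        "model_name": model_name,
        "runtime": "python",
        "initial_workers": initial_workers,
        "synchronous": str(synchronous).lower(),
    })
    return f"{MANAGEMENT}/models?{query}"

def unregister_url(model_name):
    """DELETE target matching the unregistration lines in the access log."""
    return f"{MANAGEMENT}/models/{model_name}"
```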
2020-02-15 20:16:22,080 [INFO ] epollEventLoopGroup-3-6 ACCESS_LOG - 0.0.0.0 "DELETE /models/pred-err HTTP/1.1" 200 3
2020-02-15 20:16:22,080 [INFO ] epollEventLoopGroup-3-6 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,137 [INFO ] pool-2-thread-1 TS_METRICS - CPUUtilization.Percent:0.0|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797782
2020-02-15 20:16:22,137 [INFO ] pool-2-thread-1 TS_METRICS - DiskAvailable.Gigabytes:38.7763786315918|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797782
2020-02-15 20:16:22,137 [INFO ] pool-2-thread-1 TS_METRICS - DiskUsage.Gigabytes:77.47415161132812|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797782
2020-02-15 20:16:22,138 [INFO ] pool-2-thread-1 TS_METRICS - DiskUtilization.Percent:66.6|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797782
2020-02-15 20:16:22,138 [INFO ] pool-2-thread-1 TS_METRICS - MemoryAvailable.Megabytes:241299.19921875|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797782
2020-02-15 20:16:22,138 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUsed.Megabytes:2592.52734375|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797782
2020-02-15 20:16:22,138 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUtilization.Percent:1.8|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797782
2020-02-15 20:16:22,584 [DEBUG] epollEventLoopGroup-3-7 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model err_batch
2020-02-15 20:16:22,585 [DEBUG] epollEventLoopGroup-3-7 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model err_batch
2020-02-15 20:16:22,585 [INFO ] epollEventLoopGroup-3-7 org.pytorch.serve.wlm.ModelManager - Model err_batch loaded.
2020-02-15 20:16:22,585 [DEBUG] epollEventLoopGroup-3-7 org.pytorch.serve.wlm.ModelManager - updateModel: err_batch, count: 1
2020-02-15 20:16:22,649 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9017
2020-02-15 20:16:22,650 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5186
2020-02-15 20:16:22,650 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:22,650 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:22,650 [DEBUG] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - W-9017-err_batch_1.0 State change null -> WORKER_STARTED
2020-02-15 20:16:22,650 [INFO ] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9017
2020-02-15 20:16:22,651 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9017.
2020-02-15 20:16:22,654 [INFO ] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:16:22,654 [INFO ] W-9017-err_batch_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=error_batch.mar&model_name=err_batch&initial_workers=1&synchronous=true HTTP/1.1" 200 70
2020-02-15 20:16:22,654 [INFO ] W-9017-err_batch_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,655 [DEBUG] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - W-9017-err_batch_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:16:22,655 [INFO ] W-9017-err_batch_1.0 TS_METRICS - W-9017-err_batch_1.0.ms:70|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797782
2020-02-15 20:16:22,676 [INFO ] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:16:22,676 [INFO ] W-9017-err_batch_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.01|#ModelName:err_batch,Level:Model|#hostname:ip-172-31-17-115,requestID:0373ec36-7ae5-4528-92b2-3e6b49a88641,timestamp:1581797782
2020-02-15 20:16:22,676 [INFO ] W-9017-err_batch_1.0 ACCESS_LOG - /127.0.0.1:37482 "POST /predictions/err_batch HTTP/1.1" 507 1
2020-02-15 20:16:22,676 [INFO ] W-9017-err_batch_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,676 [DEBUG] W-9017-err_batch_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:16:22,677 [ERROR] epollEventLoopGroup-14-1 org.pytorch.serve.ModelServerTest$TestHandler - Unknown exception
io.netty.channel.unix.Errors$NativeIoException: syscall:read(..) failed: Connection reset by peer
    at io.netty.channel.unix.FileDescriptor.readAddress(..)(Unknown Source)
2020-02-15 20:16:22,696 [INFO ] epollEventLoopGroup-3-9 ACCESS_LOG - /127.0.0.1:37484 "GET / HTTP/1.1" 405 0
2020-02-15 20:16:22,697 [INFO ] epollEventLoopGroup-3-9 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,715 [INFO ] epollEventLoopGroup-3-10 ACCESS_LOG - /127.0.0.1:37486 "GET /InvalidUrl HTTP/1.1" 404 0
2020-02-15 20:16:22,715 [INFO ] epollEventLoopGroup-3-10 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,732 [INFO ] epollEventLoopGroup-3-11 ACCESS_LOG - /127.0.0.1:37488 "GET /predictions HTTP/1.1" 404 1
2020-02-15 20:16:22,732 [INFO ] epollEventLoopGroup-3-11 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,750 [INFO ] epollEventLoopGroup-3-12 ACCESS_LOG - /127.0.0.1:37490 "OPTIONS /predictions/InvalidModel HTTP/1.1" 404 0
2020-02-15 20:16:22,750 [INFO ] epollEventLoopGroup-3-12 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,767 [INFO ] epollEventLoopGroup-3-13 ACCESS_LOG - /127.0.0.1:37492 "GET /predictions/InvalidModel HTTP/1.1" 404 0
2020-02-15 20:16:22,767 [INFO ] epollEventLoopGroup-3-13 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,772 [INFO ] epollEventLoopGroup-3-14 ACCESS_LOG - 0.0.0.0 "GET /InvalidUrl HTTP/1.1" 404 1
2020-02-15 20:16:22,772 [INFO ] epollEventLoopGroup-3-14 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,775 [INFO ] epollEventLoopGroup-3-15 ACCESS_LOG - 0.0.0.0 "PUT /models HTTP/1.1" 405 0
2020-02-15 20:16:22,776 [INFO ] epollEventLoopGroup-3-15 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,780 [INFO ] epollEventLoopGroup-3-16 ACCESS_LOG - 0.0.0.0 "POST /models/noop HTTP/1.1" 405 0
2020-02-15 20:16:22,780 [INFO ] epollEventLoopGroup-3-16 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,803 [INFO ] epollEventLoopGroup-3-17 ACCESS_LOG - 0.0.0.0 "GET /models/InvalidModel HTTP/1.1" 404 0
2020-02-15 20:16:22,803 [INFO ] epollEventLoopGroup-3-17 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,807 [INFO ] epollEventLoopGroup-3-18 ACCESS_LOG - 0.0.0.0 "POST /models HTTP/1.1" 400 0
2020-02-15 20:16:22,807 [INFO ] epollEventLoopGroup-3-18 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,811 [INFO ] epollEventLoopGroup-3-19 ACCESS_LOG - 0.0.0.0 "POST /models?url=InvalidUrl&runtime=InvalidRuntime HTTP/1.1" 400 0
2020-02-15 20:16:22,811 [INFO ] epollEventLoopGroup-3-19 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,815 [INFO ] epollEventLoopGroup-3-20 ACCESS_LOG - 0.0.0.0 "POST /models?url=InvalidUrl HTTP/1.1" 404 1
2020-02-15 20:16:22,815 [INFO ] epollEventLoopGroup-3-20 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,820 [DEBUG] epollEventLoopGroup-3-21 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop_v1.0
2020-02-15 20:16:22,820 [INFO ] epollEventLoopGroup-3-21 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noop_v1.0&runtime=python&synchronous=false HTTP/1.1" 409 1
2020-02-15 20:16:22,820 [INFO ] epollEventLoopGroup-3-21 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,824 [INFO ] epollEventLoopGroup-3-22 ACCESS_LOG - 0.0.0.0 "POST /models?url=http%3A%2F%2Flocalhost%3Aaaaa HTTP/1.1" 404 0
2020-02-15 20:16:22,824 [INFO ] epollEventLoopGroup-3-22 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,836 [INFO ] epollEventLoopGroup-3-23 ACCESS_LOG - 0.0.0.0 "POST /models?url=http%3A%2F%2Flocalhost%3A18888%2Ffake.mar&synchronous=false HTTP/1.1" 400 9
2020-02-15 20:16:22,836 [INFO ] epollEventLoopGroup-3-23 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,867 [INFO ] epollEventLoopGroup-3-25 ACCESS_LOG - /127.0.0.1:37496 "GET /fake.mar HTTP/1.1" 404 0
2020-02-15 20:16:22,868 [INFO ] epollEventLoopGroup-3-25 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,869 [INFO ] epollEventLoopGroup-3-24 ACCESS_LOG - 0.0.0.0 "POST /models?url=https%3A%2F%2Flocalhost%3A8443%2Ffake.mar&synchronous=false HTTP/1.1" 400 30
2020-02-15 20:16:22,869 [INFO ] epollEventLoopGroup-3-24 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,874 [INFO ] epollEventLoopGroup-3-26 ACCESS_LOG - 0.0.0.0 "POST /models?url=..%2Ffake.mar&synchronous=false HTTP/1.1" 404 0
2020-02-15 20:16:22,874 [INFO ] epollEventLoopGroup-3-26 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,878 [INFO ] epollEventLoopGroup-3-27 ACCESS_LOG - 0.0.0.0 "PUT /models/fake HTTP/1.1" 404 1
2020-02-15 20:16:22,878 [INFO ] epollEventLoopGroup-3-27 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,882 [DEBUG] epollEventLoopGroup-3-28 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model init-error
2020-02-15 20:16:22,883 [DEBUG] epollEventLoopGroup-3-28 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model init-error
2020-02-15 20:16:22,883 [INFO ] epollEventLoopGroup-3-28 org.pytorch.serve.wlm.ModelManager - Model init-error loaded.
2020-02-15 20:16:22,883 [INFO ] epollEventLoopGroup-3-28 ACCESS_LOG - 0.0.0.0 "POST /models?url=init-error.mar&model_name=init-error&synchronous=false HTTP/1.1" 200 2
2020-02-15 20:16:22,883 [INFO ] epollEventLoopGroup-3-28 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,884 [DEBUG] epollEventLoopGroup-3-28 org.pytorch.serve.wlm.ModelManager - updateModel: init-error, count: 1
2020-02-15 20:16:22,950 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:16:22,951 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5234
2020-02-15 20:16:22,951 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:22,951 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:22,951 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change null -> WORKER_STARTED
2020-02-15 20:16:22,951 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:16:22,952 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:16:22,954 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:16:22,954 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:22,954 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:16:22,954 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:16:22,954 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:16:22,954 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:16:22,954 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:16:22,954 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:16:22,954 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:16:22,955 [INFO ] epollEventLoopGroup-4-19 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:16:22,955 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:16:22,955 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:16:22,955 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:16:22,955 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:16:22,955 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:22,955 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:16:22,955 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:16:22,956 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:16:22,956 [INFO ] W-9018-init-error_1.0 ACCESS_LOG - 0.0.0.0 "PUT /models/init-error?synchronous=true&min_worker=1 HTTP/1.1" 500 72
2020-02-15 20:16:22,956 [INFO ] W-9018-init-error_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,956 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:16:22,959 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 1 seconds.
2020-02-15 20:16:22,961 [WARN ] epollEventLoopGroup-3-29 org.pytorch.serve.wlm.ModelManager - Model not found: fake
2020-02-15 20:16:22,961 [INFO ] epollEventLoopGroup-3-29 ACCESS_LOG - 0.0.0.0 "DELETE /models/fake HTTP/1.1" 404 0
2020-02-15 20:16:22,961 [INFO ] epollEventLoopGroup-3-29 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,964 [DEBUG] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop_v1.0 version: 1.11
2020-02-15 20:16:22,964 [DEBUG] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.WorkerThread - W-9005-noop_v1.0_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:22,964 [WARN ] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.WorkLoadManager - WorkerThread timed out while cleaning, please resend request.
2020-02-15 20:16:22,964 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:22,964 [INFO ] epollEventLoopGroup-4-6 org.pytorch.serve.wlm.WorkerThread - 9005 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:22,964 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9005-noop_v1.0_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:22,964 [DEBUG] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop_v1.0
2020-02-15 20:16:22,964 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:22,964 [DEBUG] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop_v1.0
2020-02-15 20:16:22,964 [INFO ] epollEventLoopGroup-3-30 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop_v1.0 HTTP/1.1" 408 1
2020-02-15 20:16:22,964 [INFO ] epollEventLoopGroup-3-30 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:22,966 [DEBUG] epollEventLoopGroup-3-31 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop_v1.0 version: 1.11
2020-02-15 20:16:22,967 [INFO ] epollEventLoopGroup-3-31 org.pytorch.serve.wlm.ModelManager - Model noop_v1.0 unregistered.
2020-02-15 20:16:22,967 [INFO ] epollEventLoopGroup-3-31 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop_v1.0 HTTP/1.1" 200 1
2020-02-15 20:16:22,967 [INFO ] epollEventLoopGroup-3-31 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:24,024 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:16:24,025 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5246
2020-02-15 20:16:24,025 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
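Note the DELETE /models/noop_v1.0 pair in this stretch of the log: the first attempt returned 408 after worker cleanup timed out ("please resend request"), and the immediate resend returned 200. A client-side resend loop for that pattern might look like the sketch below; `send` is a stand-in for any HTTP client call and is purely illustrative:

```python
def delete_with_resend(send, path, max_attempts=3):
    """Repeat a management-API DELETE while the server answers 408,
    which the log above uses to mean 'cleanup timed out, resend'.
    send(method, path) -> int status code (caller-supplied)."""
    status = None
    for _ in range(max_attempts):
        status = send("DELETE", path)
        if status != 408:
            break
    return status
```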
2020-02-15 20:16:24,025 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:24,025 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:16:24,025 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:16:24,025 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:16:24,026 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:16:24,026 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:24,026 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:16:24,027 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:16:24,027 [INFO ] epollEventLoopGroup-4-20 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:16:24,027 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:16:24,027 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:16:24,027 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:24,027 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:16:24,027 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:16:24,027 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:16:24,027 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:16:24,028 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:16:24,028 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:16:24,028 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:16:24,028 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:16:24,028 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:16:24,028 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:16:24,028 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 1 seconds.
2020-02-15 20:16:24,028 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:16:25,093 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:16:25,093 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5251
2020-02-15 20:16:25,093 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:25,093 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:25,093 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:16:25,093 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:16:25,094 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:16:25,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:16:25,095 [INFO ] epollEventLoopGroup-4-21 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:16:25,096 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:25,096 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:16:25,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:16:25,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:16:25,097 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:16:25,097 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 2 seconds.
2020-02-15 20:16:27,163 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:16:27,163 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5256
2020-02-15 20:16:27,163 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:27,163 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:27,163 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:16:27,164 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:16:27,164 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in <module>
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:16:27,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:16:27,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:16:27,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:16:27,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:16:27,166 [INFO ] epollEventLoopGroup-4-22 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:16:27,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:16:27,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:16:27,166 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:27,166 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:16:27,166 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:16:27,167 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 3 seconds.
2020-02-15 20:16:30,234 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:16:30,234 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5261
2020-02-15 20:16:30,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:30,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:30,235 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:16:30,235 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:16:30,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in <module>
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:16:30,237 [INFO ] epollEventLoopGroup-4-23 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:16:30,237 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:30,237 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:16:30,238 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:16:30,238 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:16:30,238 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 5 seconds.
2020-02-15 20:16:35,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:16:35,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5266
2020-02-15 20:16:35,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:35,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:35,306 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:16:35,306 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:16:35,307 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in <module>
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:16:35,308 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:16:35,308 [INFO ] epollEventLoopGroup-4-24 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:16:35,309 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:35,309 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:16:35,309 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:16:35,309 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 8 seconds.
2020-02-15 20:16:43,376 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:16:43,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5271
2020-02-15 20:16:43,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:43,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:43,377 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:16:43,377 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:16:43,378 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in <module>
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:16:43,379 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:16:43,379 [INFO ] epollEventLoopGroup-4-25 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:16:43,379 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:43,380 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:16:43,380 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:16:43,380 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 13 seconds.
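The retry delays logged above grow rather than repeat: 2, 3, 5, 8, 13 seconds so far, with 21 later in the log. That is a Fibonacci-like progression, where each wait is the sum of the previous two. The following sketch reproduces the observed schedule; it is an illustration of the pattern in this log, not TorchServe's actual `WorkerThread` retry code.

```python
def backoff_delays(first=2, second=3, retries=6):
    """Yield Fibonacci-like retry delays: each wait is the sum of the
    previous two, matching the 2, 3, 5, 8, 13, 21 s pattern in the log."""
    a, b = first, second
    for _ in range(retries):
        yield a
        a, b = b, a + b

print(list(backoff_delays()))  # → [2, 3, 5, 8, 13, 21]
```

Growing delays like these keep a permanently broken worker (such as this one, which raises on every initialization) from hammering the host with restarts.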
2020-02-15 20:16:52,082 [ERROR] epollEventLoopGroup-12-1 org.pytorch.serve.ModelServerTest$TestHandler - Unknown exception
io.netty.handler.timeout.ReadTimeoutException
2020-02-15 20:16:52,968 [ERROR] epollEventLoopGroup-36-1 org.pytorch.serve.ModelServerTest$TestHandler - Unknown exception
io.netty.handler.timeout.ReadTimeoutException
2020-02-15 20:16:52,971 [WARN ] epollEventLoopGroup-3-32 org.pytorch.serve.wlm.ModelManager - Cannot remove default version 1.11 for model noopversioned
2020-02-15 20:16:52,971 [ERROR] epollEventLoopGroup-3-32 org.pytorch.serve.http.HttpRequestHandler -
org.pytorch.serve.http.InternalServerException: Cannot remove default version for model noopversioned
    at org.pytorch.serve.http.ManagementRequestHandler.handleUnregisterModel(ManagementRequestHandler.java:272)
    at org.pytorch.serve.http.ManagementRequestHandler.handleRequest(ManagementRequestHandler.java:84)
    at org.pytorch.serve.http.ApiDescriptionRequestHandler.handleRequest(ApiDescriptionRequestHandler.java:37)
    at org.pytorch.serve.http.HttpRequestHandler.channelRead0(HttpRequestHandler.java:42)
    at org.pytorch.serve.http.HttpRequestHandler.channelRead0(HttpRequestHandler.java:19)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808)
    at io.netty.channel.epoll.EpollDomainSocketChannel$EpollDomainUnsafe.epollInReady(EpollDomainSocketChannel.java:138)
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:52,972 [INFO ] epollEventLoopGroup-3-32 ACCESS_LOG - 0.0.0.0 "DELETE /models/noopversioned/1.11 HTTP/1.1" 500 1
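The 500 above comes from a DELETE against the default version (1.11) of `noopversioned` while version 1.21 is still registered; in the entries that follow, removing 1.21 first and then 1.11 both return 200. A minimal sketch of that invariant, using a hypothetical `VersionedModel` class rather than TorchServe's actual `ModelVersionedRefs`:

```python
class VersionedModel:
    """Toy registry enforcing: the default version can only be removed last."""

    def __init__(self, default_version, versions):
        self.default = default_version
        self.versions = set(versions)

    def remove(self, version):
        if version == self.default and len(self.versions) > 1:
            # Mirrors "Cannot remove default version for model ..." (HTTP 500)
            raise RuntimeError("Cannot remove default version")
        self.versions.discard(version)

m = VersionedModel("1.11", {"1.11", "1.21"})
try:
    m.remove("1.11")          # rejected: a non-default version still exists
except RuntimeError as e:
    print(e)                  # → Cannot remove default version
m.remove("1.21")              # fine: not the default version
m.remove("1.11")              # now allowed: it is the last version left
print(sorted(m.versions))     # → []
```

The rule keeps the model name resolvable (requests without an explicit version always have a default to land on) until the model is unregistered entirely.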
2020-02-15 20:16:52,972 [INFO ] epollEventLoopGroup-3-32 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:52,974 [DEBUG] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noopversioned version: 1.21
2020-02-15 20:16:52,975 [DEBUG] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.WorkerThread - W-9008-noopversioned_1.21 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:52,975 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:52,975 [INFO ] epollEventLoopGroup-4-9 org.pytorch.serve.wlm.WorkerThread - 9008 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:52,975 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - W-9008-noopversioned_1.21 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:52,975 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:52,977 [INFO ] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.ModelManager - Model noopversioned unregistered.
2020-02-15 20:16:52,977 [INFO ] epollEventLoopGroup-3-33 ACCESS_LOG - 0.0.0.0 "DELETE /models/noopversioned/1.21 HTTP/1.1" 200 3
2020-02-15 20:16:52,977 [INFO ] epollEventLoopGroup-3-33 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:16:52,978 [DEBUG] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noopversioned version: 1.11
2020-02-15 20:16:52,978 [DEBUG] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.WorkerThread - W-9007-noopversioned_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:16:52,978 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:16:52,978 [INFO ] epollEventLoopGroup-4-8 org.pytorch.serve.wlm.WorkerThread - 9007 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:16:52,978 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - W-9007-noopversioned_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:16:52,978 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:16:52,979 [INFO ] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.ModelManager - Model noopversioned unregistered.
2020-02-15 20:16:52,979 [INFO ] epollEventLoopGroup-3-33 ACCESS_LOG - 0.0.0.0 "DELETE /models/noopversioned/1.11 HTTP/1.1" 200 1
2020-02-15 20:16:52,979 [INFO ] epollEventLoopGroup-3-33 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null

Gradle suite > Gradle test > org.pytorch.serve.ModelServerTest.test PASSED

Gradle suite > Gradle test > org.pytorch.serve.ModelServerTest.testTS STANDARD_OUT
2020-02-15 20:16:53,071 [DEBUG] epollEventLoopGroup-3-35 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model mnist
2020-02-15 20:16:53,071 [DEBUG] epollEventLoopGroup-3-35 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model mnist
2020-02-15 20:16:53,071 [INFO ] epollEventLoopGroup-3-35 org.pytorch.serve.wlm.ModelManager - Model mnist loaded.
2020-02-15 20:16:53,071 [DEBUG] epollEventLoopGroup-3-35 org.pytorch.serve.wlm.ModelManager - updateModel: mnist, count: 1
2020-02-15 20:16:53,138 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9019
2020-02-15 20:16:53,138 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5285
2020-02-15 20:16:53,138 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:53,138 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:53,138 [DEBUG] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - W-9019-mnist_1.0 State change null -> WORKER_STARTED
2020-02-15 20:16:53,138 [INFO ] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9019
2020-02-15 20:16:53,139 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9019.
2020-02-15 20:16:56,447 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:16:56,447 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5290
2020-02-15 20:16:56,447 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:16:56,447 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:16:56,447 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:16:56,447 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:16:56,448 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in <module>
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:16:56,450 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:16:56,450 [INFO ] epollEventLoopGroup-4-27 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:16:56,450 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:16:56,451 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:16:56,451 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:16:56,451 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 21 seconds.
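The traceback that repeats through this section comes from a deliberately broken test artifact, `invalid_service.py`: its entry point raises as soon as the model loader invokes it via `entry_point(None, service.context)`, so every freshly started worker dies during model load. A minimal entry point of the same shape; the `handle(data, context)` signature matches what the traceback shows being called, and the `data is None` initialization convention is an assumption for illustration:

```python
# Sketch of an entry point that fails on its initialization call, as
# invalid_service.py does in the log above. The model loader calls the
# entry point once with data=None to initialize (assumed), and it raises.
def handle(data, context):
    if data is None:  # initialization call from the model loader (assumed)
        raise RuntimeError("Initialize failure.")
    return ["ok"] * len(data)

try:
    handle(None, None)  # what model_loader.load() effectively triggers
except RuntimeError as e:
    print(e)  # → Initialize failure.
```

Because the failure happens on load rather than on a request, the frontend can only report "Load model failed: init-error, error: Worker died." and keep retrying.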
2020-02-15 20:17:04,801 [INFO ] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 11662 2020-02-15 20:17:04,801 [INFO ] W-9019-mnist_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=mnist.mar&model_name=mnist&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 11818 2020-02-15 20:17:04,801 [INFO ] W-9019-mnist_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null 2020-02-15 20:17:04,801 [DEBUG] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - W-9019-mnist_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2020-02-15 20:17:04,802 [INFO ] W-9019-mnist_1.0 TS_METRICS - W-9019-mnist_1.0.ms:11730|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797824 2020-02-15 20:17:04,839 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Invoking custom service failed. 2020-02-15 20:17:04,839 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last): 2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/service.py", line 100, in predict 2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ret = self._entry_point(input_batch, self.context) 2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/235a35d7b420d6b7954e88cce0772caf1f76942f/mnist_handler.py", line 92, in handle 2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - data = _service.inference(data) 2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/235a35d7b420d6b7954e88cce0772caf1f76942f/mnist_handler.py", line 71, in inference 2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - outputs = self.model.forward(inputs) 2020-02-15 20:17:04,840 [INFO ] 
W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -   File "/tmp/models/235a35d7b420d6b7954e88cce0772caf1f76942f/mnist.py", line 17, in forward
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 37
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -     x = self.conv1(x)
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -   File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -     result = self.forward(*input, **kwargs)
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -   File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 345, in forward
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -     return self.conv2d_forward(input, self.weight)
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -   File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 341, in conv2d_forward
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -     return F.conv2d(input, weight, self.bias, self.stride,
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0 ACCESS_LOG - /127.0.0.1:37498 "POST /predictions/mnist HTTP/1.1" 503 37
2020-02-15 20:17:04,840 [INFO ] W-9019-mnist_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:04,840 [DEBUG] W-9019-mnist_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Inference time: 37
2020-02-15 20:17:04,841 [ERROR] epollEventLoopGroup-39-1 org.pytorch.serve.ModelServerTest$TestHandler - Unknown exception
io.netty.channel.unix.Errors$NativeIoException: syscall:read(..) failed: Connection reset by peer
        at io.netty.channel.unix.FileDescriptor.readAddress(..)(Unknown Source)

Gradle suite > Gradle test > org.pytorch.serve.ModelServerTest.testTS FAILED
    java.lang.AssertionError at ModelServerTest.java:508

Gradle suite STANDARD_OUT
    2020-02-15 20:17:04,845 [INFO ] epollEventLoopGroup-2-1 org.pytorch.serve.ModelServer - Inference model server stopped.
    2020-02-15 20:17:04,845 [INFO ] epollEventLoopGroup-2-2 org.pytorch.serve.ModelServer - Management model server stopped.

5 tests completed, 1 failed

> Task :server:test FAILED
> Task :server:jacocoTestReport

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':server:test'.
> There were failing tests. See the report at: file:///tmp/pip-req-build-88c_hysm/frontend/server/build/reports/tests/test/index.html

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 5.0.
Use '--warning-mode all' to show the individual deprecation warnings.
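The RuntimeError in the worker log above is PyTorch's standard device-mismatch failure: the request tensor was moved to the GPU while the model's weights were left on the CPU. A minimal sketch reproduces it and shows the usual fix; this is illustrative only, not the archived mnist.py (whose source is truncated here):

```python
import torch
import torch.nn as nn

# Sketch of the "Input type (torch.cuda.FloatTensor) and weight type
# (torch.FloatTensor) should be the same" failure: calling model(x) with the
# conv weights on the CPU and x on the GPU raises it. Moving the model to the
# same device as the input is the fix.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Conv2d(1, 10, kernel_size=5)   # weights created on the CPU
x = torch.randn(1, 1, 28, 28).to(device)  # input on the GPU when one exists

model.to(device)                          # the fix: move the weights too
out = model(x)
print(out.shape)                          # torch.Size([1, 10, 24, 24])
```

In a serving handler the same rule applies: move the loaded model to the worker's device once at load time, and move each request tensor to that device before inference.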
See https://docs.gradle.org/4.9/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 1m 53s
30 actionable tasks: 27 executed, 3 up-to-date

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-req-build-88c_hysm/setup.py", line 137, in <module>
    setup(
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/setuptools/__init__.py", line 144, in setup
    return distutils.core.setup(**attrs)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 223, in run
    self.run_command('build')
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/tmp/pip-req-build-88c_hysm/setup.py", line 98, in run
    self.run_command('build_frontend')
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/tmp/pip-req-build-88c_hysm/setup.py", line 85, in run
    subprocess.check_call('frontend/gradlew -p frontend clean build', shell=True)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'frontend/gradlew -p frontend clean build' returned non-zero exit status 1.
----------------------------------------
ERROR: Failed building wheel for torchserve
Running setup.py clean for torchserve
Building wheel for future (setup.py) ... done
Created wheel for future: filename=future-0.18.2-py3-none-any.whl size=491058 sha256=fefed60dd80d36a34cdd35357ed6ffa10fddae73fba9d7159fbe3560987ed974
Stored in directory: /home/ubuntu/.cache/pip/wheels/8e/70/28/3d6ccd6e315f65f245da085482a2e1c7d14b90b30f239e2cf4
Successfully built future
Failed to build torchserve
Installing collected packages: future, torchserve, sentencepiece
Running setup.py install for torchserve ... error
ERROR: Command errored out with exit status 1:
 command: /home/ubuntu/anaconda3/envs/serve/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-88c_hysm/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-88c_hysm/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-wtp_gdlz/install-record.txt --single-version-externally-managed --compile --install-headers /home/ubuntu/anaconda3/envs/serve/include/python3.8/torchserve
     cwd: /tmp/pip-req-build-88c_hysm/
Complete output (1185 lines):
running install
running build
running build_py
running build_frontend
> Task :cts:clean
> Task :modelarchive:clean
> Task :server:killServer
No server running!
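The CalledProcessError at the bottom of the traceback is just setup.py's `build_frontend` step propagating the Gradle test failure: `subprocess.check_call` raises whenever the child process exits non-zero, which aborts the whole wheel build. A minimal sketch of that behavior, using a stand-in command rather than the real `frontend/gradlew` invocation:

```python
import subprocess

# check_call raises CalledProcessError when the child exits non-zero; this is
# how the failing Gradle run above kills the pip install. 'exit 1' stands in
# for 'frontend/gradlew -p frontend clean build'.
try:
    subprocess.check_call("exit 1", shell=True)
    rc = 0
except subprocess.CalledProcessError as e:
    rc = e.returncode
    print(f"build command failed with exit status {rc}")
```

So the pip error is a symptom: the real failure to chase is the single failing test (`ModelServerTest.testTS`) in the Gradle report linked above.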
> Task :server:clean
> Task :cts:compileJava NO-SOURCE
> Task :cts:processResources NO-SOURCE
> Task :cts:classes UP-TO-DATE
> Task :cts:jar
> Task :cts:assemble
> Task :cts:checkstyleMain NO-SOURCE
> Task :cts:compileTestJava NO-SOURCE
> Task :cts:processTestResources NO-SOURCE
> Task :cts:testClasses UP-TO-DATE
> Task :cts:checkstyleTest NO-SOURCE
> Task :cts:findbugsMain NO-SOURCE
> Task :cts:findbugsTest NO-SOURCE
> Task :cts:test NO-SOURCE
> Task :cts:jacocoTestCoverageVerification SKIPPED
> Task :cts:jacocoTestReport SKIPPED
> Task :cts:pmdMain NO-SOURCE
> Task :cts:pmdTest SKIPPED
> Task :cts:verifyJava
> Task :cts:check
> Task :cts:build
> Task :modelarchive:compileJava
> Task :modelarchive:processResources NO-SOURCE
> Task :modelarchive:classes
> Task :modelarchive:jar
> Task :modelarchive:assemble
> Task :modelarchive:checkstyleMain
> Task :modelarchive:compileTestJava
> Task :modelarchive:processTestResources
> Task :modelarchive:testClasses
> Task :modelarchive:checkstyleTest
> Task :modelarchive:findbugsMain
> Task :modelarchive:findbugsTest
> Task :modelarchive:test

Gradle suite > Gradle test > org.pytorch.serve.archive.CoverageTest.test PASSED
Gradle suite > Gradle test > org.pytorch.serve.archive.ModelArchiveTest.test PASSED
Gradle suite > Gradle test > org.pytorch.serve.archive.ModelArchiveTest.testInvalidURL PASSED
Gradle suite > Gradle test > org.pytorch.serve.archive.ModelArchiveTest.testMalformURL PASSED

> Task :modelarchive:jacocoTestCoverageVerification
> Task :modelarchive:jacocoTestReport
> Task :modelarchive:pmdMain
> Task :modelarchive:pmdTest SKIPPED
> Task :modelarchive:verifyJava
> Task :modelarchive:check
> Task :modelarchive:build
> Task :server:compileJava
> Task :server:processResources
> Task :server:classes
> Task :server:jar
> Task :server:assemble
> Task :server:checkstyleMain
> Task :server:compileTestJava
> Task :server:processTestResources
> Task :server:testClasses
> Task :server:checkstyleTest
> Task :server:findbugsMain
> Task :server:findbugsTest
> Task :server:test

Gradle suite STANDARD_OUT
    2020-02-15 20:17:36,205 [INFO ] Test worker org.pytorch.serve.ModelServer -
    TS Home: /tmp/pip-req-build-88c_hysm
    Current directory: /tmp/pip-req-build-88c_hysm/frontend/server
    Temp directory: /tmp
    Number of GPUs: 4
    Number of CPUs: 32
    Max heap size: 27305 M
    Python executable: python
    Config file: src/test/resources/config.properties
    Inference address: https://127.0.0.1:8443
    Management address: unix:/tmp/management.sock
    Model Store: /tmp/pip-req-build-88c_hysm/frontend/modelarchive/src/test/resources/models
    Initial Models: noop.mar
    Log dir: /tmp/pip-req-build-88c_hysm/frontend/server/build/logs
    Metrics dir: /tmp/pip-req-build-88c_hysm/frontend/server/build/logs
    Netty threads: 0
    Netty client threads: 0
    Default workers per model: 4
    Blacklist Regex: N/A
    Maximum Response Size: 6553500
    Maximum Request Size: 10485760
2020-02-15 20:17:36,215 [INFO ] Test worker org.pytorch.serve.ModelServer - Loading initial models: noop.mar
2020-02-15 20:17:36,314 [DEBUG] Test worker org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop
2020-02-15 20:17:36,315 [DEBUG] Test worker org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop
2020-02-15 20:17:36,315 [INFO ] Test worker org.pytorch.serve.wlm.ModelManager - Model noop loaded.
2020-02-15 20:17:36,315 [DEBUG] Test worker org.pytorch.serve.wlm.ModelManager - updateModel: noop, count: 4
2020-02-15 20:17:36,337 [INFO ] Test worker org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2020-02-15 20:17:36,413 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9002
2020-02-15 20:17:36,413 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5866
2020-02-15 20:17:36,413 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:36,414 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:36,414 [DEBUG] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9002-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:36,415 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000
2020-02-15 20:17:36,415 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5868
2020-02-15 20:17:36,415 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:36,415 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9000-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:36,416 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:36,417 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9003
2020-02-15 20:17:36,417 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5869
2020-02-15 20:17:36,417 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:36,417 [DEBUG] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9003-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:36,417 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:36,420 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9001
2020-02-15 20:17:36,420 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5867
2020-02-15 20:17:36,420 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:36,420 [DEBUG] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9001-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:36,420 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:36,423 [INFO ] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9002
2020-02-15 20:17:36,423 [INFO ] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9003
2020-02-15 20:17:36,423 [INFO ] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2020-02-15 20:17:36,423 [INFO ] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9001
2020-02-15 20:17:36,552 [INFO ] Test worker org.pytorch.serve.ModelServer - Inference API bind to: https://127.0.0.1:8443
2020-02-15 20:17:36,553 [INFO ] Test worker org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerDomainSocketChannel.
2020-02-15 20:17:36,554 [INFO ] Test worker org.pytorch.serve.ModelServer - Management API bind to: unix:/tmp/management.sock

Gradle suite > Gradle test STANDARD_OUT
    2020-02-15 20:17:36,556 [INFO ] W-9003-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9003.
    2020-02-15 20:17:36,556 [INFO ] W-9002-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9002.
    2020-02-15 20:17:36,556 [INFO ] W-9001-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9001.
    2020-02-15 20:17:36,556 [INFO ] W-9000-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000.
Gradle suite > Gradle test > org.pytorch.serve.util.ConfigManagerTest.test STANDARD_OUT
    2020-02-15 20:17:36,600 [DEBUG] Test worker TS_METRICS - [TestMetric1.Milliseconds:null|#Level:Model|#hostname:null,requestID:12345,timestamp:1542157988, TestMetric2.Milliseconds:null|#Level:Model|#hostname:null,requestID:23478,timestamp:1542157988]
    2020-02-15 20:17:36,600 [DEBUG] Test worker MODEL_METRICS - [TestMetric1.Milliseconds:null|#Level:Model|#hostname:null,requestID:12345,timestamp:1542157988, TestMetric2.Milliseconds:null|#Level:Model|#hostname:null,requestID:23478,timestamp:1542157988]
    2020-02-15 20:17:36,601 [DEBUG] Test worker MODEL_LOG - test model_log
2020-02-15 20:17:36,619 [INFO ] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 21
2020-02-15 20:17:36,619 [INFO ] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 21
2020-02-15 20:17:36,619 [INFO ] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 21
2020-02-15 20:17:36,619 [INFO ] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 21
2020-02-15 20:17:36,619 [DEBUG] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9003-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:36,619 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9000-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:36,619 [DEBUG] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9001-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:36,619 [DEBUG] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9002-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:36,619 [INFO ] W-9003-noop_1.11 TS_METRICS - W-9003-noop_1.11.ms:288|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797856
2020-02-15 20:17:36,619 [INFO ] W-9000-noop_1.11 TS_METRICS - W-9000-noop_1.11.ms:291|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797856
2020-02-15 20:17:36,619 [INFO ] W-9002-noop_1.11 TS_METRICS - W-9002-noop_1.11.ms:288|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797856
2020-02-15 20:17:36,619 [INFO ] W-9001-noop_1.11 TS_METRICS - W-9001-noop_1.11.ms:288|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797856

Gradle suite > Gradle test > org.pytorch.serve.util.ConfigManagerTest.test PASSED
Gradle suite > Gradle test > org.pytorch.serve.util.ConfigManagerTest.testNoEnvVars PASSED
Gradle suite > Gradle test > org.pytorch.serve.CoverageTest.test PASSED

Gradle suite > Gradle test > org.pytorch.serve.ModelServerTest.test STANDARD_OUT
2020-02-15 20:17:37,155 [INFO ] pool-1-thread-5 ACCESS_LOG - /127.0.0.1:37516 "GET /ping HTTP/1.1" 200 6
2020-02-15 20:17:37,155 [INFO ] pool-1-thread-5 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,183 [INFO ] epollEventLoopGroup-3-1 ACCESS_LOG - /127.0.0.1:37516 "OPTIONS / HTTP/1.1" 200 17
2020-02-15 20:17:37,183 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,208 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "OPTIONS / HTTP/1.1" 200 6
2020-02-15 20:17:37,208 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,220 [INFO ] epollEventLoopGroup-3-1 ACCESS_LOG - /127.0.0.1:37516 "GET /api-description HTTP/1.1" 200 5
2020-02-15 20:17:37,220 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,232 [INFO ] epollEventLoopGroup-3-1 ACCESS_LOG - /127.0.0.1:37516 "OPTIONS /predictions/noop HTTP/1.1" 200 4
2020-02-15 20:17:37,232 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,235 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop version: 1.11
2020-02-15 20:17:37,235 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9003-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:37,236 [DEBUG] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:17:37,236 [DEBUG] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9003-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:37,236 [INFO ] epollEventLoopGroup-4-4 org.pytorch.serve.wlm.WorkerThread - 9003 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:37,236 [DEBUG] W-9003-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:37,237 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9002-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:37,238 [INFO ] epollEventLoopGroup-4-2 org.pytorch.serve.wlm.WorkerThread - 9002 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:37,238 [WARN ] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend worker thread exception.
java.lang.IllegalMonitorStateException
        at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
        at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
        at org.pytorch.serve.wlm.Model.pollBatch(Model.java:175)
        at org.pytorch.serve.wlm.BatchAggregator.getRequest(BatchAggregator.java:33)
        at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:123)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:17:37,239 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9001-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:37,240 [DEBUG] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9002-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:37,240 [DEBUG] W-9002-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:37,240 [WARN ] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend worker thread exception.
java.lang.IllegalMonitorStateException
        at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
        at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
        at org.pytorch.serve.wlm.Model.pollBatch(Model.java:175)
        at org.pytorch.serve.wlm.BatchAggregator.getRequest(BatchAggregator.java:33)
        at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:123)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:17:37,240 [INFO ] epollEventLoopGroup-4-3 org.pytorch.serve.wlm.WorkerThread - 9001 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:37,241 [DEBUG] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9001-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:37,241 [DEBUG] W-9001-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:37,241 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9000-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:37,242 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:17:37,242 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9000-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:37,242 [INFO ] epollEventLoopGroup-4-1 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:37,242 [DEBUG] W-9000-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
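The IllegalMonitorStateException traces above mean exactly one thing in Java: ReentrantLock.unlock() was invoked (here, inside Model.pollBatch) by a thread that does not currently hold the lock, which happens in this run while worker threads are torn down during scale-down. Python's threading.Lock fails the same way, so the failure mode is easy to demonstrate (a sketch, not TorchServe code):

```python
import threading

# Java's ReentrantLock.unlock() throws IllegalMonitorStateException when the
# caller does not hold the lock; Python's threading.Lock raises RuntimeError
# in the analogous situation.
lock = threading.Lock()
try:
    lock.release()  # released without ever being acquired
    failed = False
except RuntimeError as e:
    failed = True
    print(f"release without ownership: {e}")
```

The usual remedy in either language is to pair acquire/release in a try/finally (or `with lock:` in Python) so a release can never run on a lock the thread does not own.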
2020-02-15 20:17:37,245 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop unregistered.
2020-02-15 20:17:37,245 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop HTTP/1.1" 200 10
2020-02-15 20:17:37,245 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,248 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop_v1.0
2020-02-15 20:17:37,248 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop_v1.0
2020-02-15 20:17:37,248 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop_v1.0 loaded.
2020-02-15 20:17:37,249 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noop_v1.0&runtime=python&synchronous=false HTTP/1.1" 200 2
2020-02-15 20:17:37,249 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,251 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop_v1.0, count: 1
2020-02-15 20:17:37,318 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9004
2020-02-15 20:17:37,319 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5915
2020-02-15 20:17:37,319 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:37,319 [DEBUG] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9004-noop_v1.0_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:37,319 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:37,319 [INFO ] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9004
2020-02-15 20:17:37,320 [INFO ] W-9004-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9004.
2020-02-15 20:17:37,324 [INFO ] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:17:37,324 [INFO ] W-9004-noop_v1.0_1.11 ACCESS_LOG - 0.0.0.0 "PUT /models/noop_v1.0?synchronous=true&min_worker=1 HTTP/1.1" 200 74
2020-02-15 20:17:37,324 [INFO ] W-9004-noop_v1.0_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,324 [DEBUG] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9004-noop_v1.0_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:37,324 [INFO ] W-9004-noop_v1.0_1.11 TS_METRICS - W-9004-noop_v1.0_1.11.ms:73|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797857
2020-02-15 20:17:37,326 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop_v1.0, count: 2
2020-02-15 20:17:37,326 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "PUT /models/noop_v1.0?min_worker=2 HTTP/1.1" 202 0
2020-02-15 20:17:37,326 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,329 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models?limit=200&nextPageToken=X HTTP/1.1" 200 1
2020-02-15 20:17:37,329 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,334 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noop_v1.0 HTTP/1.1" 200 3
2020-02-15 20:17:37,334 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,343 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.archive.ModelArchive - model folder already exists: 29385dfc880480adb4ff8a26d9461d72539fb887
2020-02-15 20:17:37,343 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop
2020-02-15 20:17:37,343 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop
2020-02-15 20:17:37,343 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop loaded.
2020-02-15 20:17:37,344 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop, count: 1
2020-02-15 20:17:37,392 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9005
2020-02-15 20:17:37,392 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5920
2020-02-15 20:17:37,392 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:37,392 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9005-noop_v1.0_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:37,392 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:37,392 [INFO ] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9005
2020-02-15 20:17:37,393 [INFO ] W-9005-noop_v1.0_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9005.
2020-02-15 20:17:37,394 [INFO ] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:37,394 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9005-noop_v1.0_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:37,394 [INFO ] W-9005-noop_v1.0_1.11 TS_METRICS - W-9005-noop_v1.0_1.11.ms:68|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797857
2020-02-15 20:17:37,410 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9006
2020-02-15 20:17:37,410 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5923
2020-02-15 20:17:37,410 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:37,410 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9006-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:37,410 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:37,410 [INFO ] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9006
2020-02-15 20:17:37,411 [INFO ] W-9006-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9006.
2020-02-15 20:17:37,412 [INFO ] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:17:37,412 [INFO ] W-9006-noop_1.11 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noop&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 70
2020-02-15 20:17:37,412 [INFO ] W-9006-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,412 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9006-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:37,412 [INFO ] W-9006-noop_1.11 TS_METRICS - W-9006-noop_1.11.ms:68|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797857
2020-02-15 20:17:37,415 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.archive.ModelArchive - model folder already exists: 29385dfc880480adb4ff8a26d9461d72539fb887
2020-02-15 20:17:37,416 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noopversioned
2020-02-15 20:17:37,416 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noopversioned
2020-02-15 20:17:37,416 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noopversioned loaded.
2020-02-15 20:17:37,416 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noopversioned, count: 1
2020-02-15 20:17:37,482 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9007
2020-02-15 20:17:37,482 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5930
2020-02-15 20:17:37,482 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:37,482 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - W-9007-noopversioned_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:37,482 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:37,482 [INFO ] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9007
2020-02-15 20:17:37,483 [INFO ] W-9007-noopversioned_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9007.
2020-02-15 20:17:37,484 [INFO ] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:37,485 [INFO ] W-9007-noopversioned_1.11 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noopversioned&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 70
2020-02-15 20:17:37,485 [INFO ] W-9007-noopversioned_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,485 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - W-9007-noopversioned_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:37,485 [INFO ] W-9007-noopversioned_1.11 TS_METRICS - W-9007-noopversioned_1.11.ms:69|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797857
2020-02-15 20:17:37,487 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.21 for model noopversioned
2020-02-15 20:17:37,488 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.21 for model noopversioned
2020-02-15 20:17:37,488 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noopversioned loaded.
2020-02-15 20:17:37,488 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noopversioned, count: 1
2020-02-15 20:17:37,554 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9008
2020-02-15 20:17:37,554 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5935
2020-02-15 20:17:37,554 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:37,554 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - W-9008-noopversioned_1.21 State change null -> WORKER_STARTED
2020-02-15 20:17:37,554 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:37,554 [INFO ] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9008
2020-02-15 20:17:37,555 [INFO ] W-9008-noopversioned_1.21-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9008.
2020-02-15 20:17:37,558 [INFO ] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:17:37,559 [INFO ] W-9008-noopversioned_1.21 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop_v2.mar&model_name=noopversioned&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 73
2020-02-15 20:17:37,559 [INFO ] W-9008-noopversioned_1.21 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,559 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - W-9008-noopversioned_1.21 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:37,559 [INFO ] W-9008-noopversioned_1.21 TS_METRICS - W-9008-noopversioned_1.21.ms:71|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797857
2020-02-15 20:17:37,561 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noopversioned HTTP/1.1" 200 1
2020-02-15 20:17:37,561 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,567 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noopversioned/all HTTP/1.1" 200 0
2020-02-15 20:17:37,567 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,571 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noopversioned/1.11 HTTP/1.1" 200 1
2020-02-15 20:17:37,571 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,585 [INFO ] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:37,585 [INFO ] W-9008-noopversioned_1.21-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noopversioned,Level:Model|#hostname:ip-172-31-17-115,requestID:e0ec6f50-ea4b-461a-8de7-dce021ef5e0c,timestamp:1581797857
2020-02-15 20:17:37,586 [INFO ] W-9008-noopversioned_1.21-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noopversioned,Level:Model|#hostname:ip-172-31-17-115,requestID:e0ec6f50-ea4b-461a-8de7-dce021ef5e0c,timestamp:1581797857
2020-02-15 20:17:37,586 [INFO ] W-9008-noopversioned_1.21 ACCESS_LOG - /127.0.0.1:37516 "POST /predictions/noopversioned/1.21 HTTP/1.1" 200 13
2020-02-15 20:17:37,586 [INFO ] W-9008-noopversioned_1.21 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,586 [INFO ] W-9008-noopversioned_1.21-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noopversioned,Level:Model|#hostname:ip-172-31-17-115,requestID:e0ec6f50-ea4b-461a-8de7-dce021ef5e0c,timestamp:1581797857
2020-02-15 20:17:37,586 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.Job - Waiting time: 1, Backend time: 1
2020-02-15 20:17:37,586 [INFO ] W-9008-noopversioned_1.21-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.08|#ModelName:noopversioned,Level:Model|#hostname:ip-172-31-17-115,requestID:e0ec6f50-ea4b-461a-8de7-dce021ef5e0c,timestamp:1581797857
2020-02-15 20:17:37,588 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noopversioned
2020-02-15 20:17:37,589 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "PUT /models/noopversioned/1.11/set-default HTTP/1.1" 200 1
2020-02-15 20:17:37,589 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,590 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop version: 1.11
2020-02-15 20:17:37,590 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9006-noop_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:37,590 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
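The access-log entries above trace TorchServe's model-version workflow: register noop.mar as noopversioned, add noop_v2.mar as version 1.21, list the versions, predict against /predictions/noopversioned/1.21, then PUT .../1.11/set-default to switch the default back. A minimal sketch of how those management-API URLs are assembled, assuming a management endpoint at http://localhost:8081 (the host and port are not shown in the log, and the helper names are hypothetical):

```python
# Hypothetical helpers that rebuild the URLs seen in the ACCESS_LOG lines above.
# BASE is an assumption; the log does not show the listening address.
BASE = "http://localhost:8081"

def register_url(mar, name, workers=1):
    """URL for registering a .mar file under a model name (POST)."""
    return (f"{BASE}/models?url={mar}&model_name={name}"
            f"&initial_workers={workers}&synchronous=true")

def set_default_url(name, version):
    """URL that makes `version` the default for `name` (PUT)."""
    return f"{BASE}/models/{name}/{version}/set-default"

def prediction_url(name, version=None):
    """Inference URL; the version segment is optional."""
    path = f"/predictions/{name}" + (f"/{version}" if version else "")
    return BASE + path
```

Registering the same model name with a different .mar adds a new version rather than replacing the old one, which is why both 1.11 and 1.21 workers appear in the log.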
2020-02-15 20:17:37,590 [INFO ] epollEventLoopGroup-4-7 org.pytorch.serve.wlm.WorkerThread - 9006 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:37,590 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9006-noop_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:37,590 [DEBUG] W-9006-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:37,592 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop unregistered.
2020-02-15 20:17:37,592 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop HTTP/1.1" 200 2
2020-02-15 20:17:37,592 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,595 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop
2020-02-15 20:17:37,595 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop
2020-02-15 20:17:37,595 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop loaded.
2020-02-15 20:17:37,595 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop, count: 1
2020-02-15 20:17:37,661 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9009
2020-02-15 20:17:37,661 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5940
2020-02-15 20:17:37,661 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:37,661 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9009-noop_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:37,661 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:37,661 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9009
2020-02-15 20:17:37,662 [INFO ] W-9009-noop_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9009.
2020-02-15 20:17:37,666 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3
2020-02-15 20:17:37,666 [INFO ] W-9009-noop_1.11 ACCESS_LOG - 0.0.0.0 "POST /models HTTP/1.1" 200 73
2020-02-15 20:17:37,666 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,666 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - W-9009-noop_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:37,666 [INFO ] W-9009-noop_1.11 TS_METRICS - W-9009-noop_1.11.ms:71|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797857
2020-02-15 20:17:37,669 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:59af5964-6f97-422b-9110-44b0dedd65fd,timestamp:1581797857
2020-02-15 20:17:37,669 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:37,670 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:59af5964-6f97-422b-9110-44b0dedd65fd,timestamp:1581797857
2020-02-15 20:17:37,670 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37516 "POST /predictions/noop HTTP/1.1" 200 2
2020-02-15 20:17:37,670 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,670 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:59af5964-6f97-422b-9110-44b0dedd65fd,timestamp:1581797857
2020-02-15 20:17:37,670 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.08|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:59af5964-6f97-422b-9110-44b0dedd65fd,timestamp:1581797857
2020-02-15 20:17:37,670 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:37,672 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:53c7ffd7-5700-48d2-9fe4-1a5fe188b775,timestamp:1581797857
2020-02-15 20:17:37,673 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:53c7ffd7-5700-48d2-9fe4-1a5fe188b775,timestamp:1581797857
2020-02-15 20:17:37,673 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:17:37,673 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:53c7ffd7-5700-48d2-9fe4-1a5fe188b775,timestamp:1581797857
2020-02-15 20:17:37,673 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:53c7ffd7-5700-48d2-9fe4-1a5fe188b775,timestamp:1581797857
2020-02-15 20:17:37,673 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37516 "POST /predictions/noop HTTP/1.1" 200 1
2020-02-15 20:17:37,673 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,673 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:37,676 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:39d74a5a-aff9-478a-b19e-fb235fa208a2,timestamp:1581797857
2020-02-15 20:17:37,676 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:37,677 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:39d74a5a-aff9-478a-b19e-fb235fa208a2,timestamp:1581797857
2020-02-15 20:17:37,677 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37516 "POST /predictions/noop HTTP/1.1" 200 1
2020-02-15 20:17:37,677 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,677 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:39d74a5a-aff9-478a-b19e-fb235fa208a2,timestamp:1581797857
2020-02-15 20:17:37,677 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:37,677 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:39d74a5a-aff9-478a-b19e-fb235fa208a2,timestamp:1581797857
2020-02-15 20:17:37,680 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:74fa152c-3082-4425-9b9c-fe21977cb48b,timestamp:1581797857
2020-02-15 20:17:37,680 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:37,680 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:74fa152c-3082-4425-9b9c-fe21977cb48b,timestamp:1581797857
2020-02-15 20:17:37,681 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:74fa152c-3082-4425-9b9c-fe21977cb48b,timestamp:1581797857
2020-02-15 20:17:37,681 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37516 "POST /invocations?model_name=noop HTTP/1.1" 200 2
2020-02-15 20:17:37,681 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:74fa152c-3082-4425-9b9c-fe21977cb48b,timestamp:1581797857
2020-02-15 20:17:37,681 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,681 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:37,704 [INFO ] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:17:37,704 [INFO ] W-9004-noop_v1.0_1.11 ACCESS_LOG - /127.0.0.1:37516 "POST /invocations HTTP/1.1" 200 10
2020-02-15 20:17:37,704 [INFO ] W-9004-noop_v1.0_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,704 [INFO ] W-9004-noop_v1.0_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop_v1.0,Level:Model|#hostname:ip-172-31-17-115,requestID:f7ff268d-6f0b-45f4-bb7e-f293a1d492c0,timestamp:1581797857
2020-02-15 20:17:37,704 [DEBUG] W-9004-noop_v1.0_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:37,704 [INFO ] W-9004-noop_v1.0_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop_v1.0,Level:Model|#hostname:ip-172-31-17-115,requestID:f7ff268d-6f0b-45f4-bb7e-f293a1d492c0,timestamp:1581797857
2020-02-15 20:17:37,704 [INFO ] W-9004-noop_v1.0_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop_v1.0,Level:Model|#hostname:ip-172-31-17-115,requestID:f7ff268d-6f0b-45f4-bb7e-f293a1d492c0,timestamp:1581797857
2020-02-15 20:17:37,704 [INFO ] W-9004-noop_v1.0_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.13|#ModelName:noop_v1.0,Level:Model|#hostname:ip-172-31-17-115,requestID:f7ff268d-6f0b-45f4-bb7e-f293a1d492c0,timestamp:1581797857
2020-02-15 20:17:37,707 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:17:37,707 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:c2954723-7a24-4fa9-99bc-ee5a9d1de239,timestamp:1581797857
2020-02-15 20:17:37,707 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37516 "POST /models/noop/invoke HTTP/1.1" 200 1
2020-02-15 20:17:37,707 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:c2954723-7a24-4fa9-99bc-ee5a9d1de239,timestamp:1581797857
2020-02-15 20:17:37,707 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,707 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:c2954723-7a24-4fa9-99bc-ee5a9d1de239,timestamp:1581797857
2020-02-15 20:17:37,707 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:37,707 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.07|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:c2954723-7a24-4fa9-99bc-ee5a9d1de239,timestamp:1581797857
2020-02-15 20:17:37,713 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:d71c5a9e-cee2-49f8-ae8d-4ede4b45089c,timestamp:1581797857
2020-02-15 20:17:37,713 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:37,713 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:d71c5a9e-cee2-49f8-ae8d-4ede4b45089c,timestamp:1581797857
2020-02-15 20:17:37,713 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37516 "POST /models/noop/invoke HTTP/1.1" 200 2
2020-02-15 20:17:37,713 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,713 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:d71c5a9e-cee2-49f8-ae8d-4ede4b45089c,timestamp:1581797857
2020-02-15 20:17:37,713 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:37,714 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:d71c5a9e-cee2-49f8-ae8d-4ede4b45089c,timestamp:1581797857
2020-02-15 20:17:37,716 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:37,716 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:6f94f90e-a7a4-4b8a-9b83-f0be80cf3d86,timestamp:1581797857
2020-02-15 20:17:37,716 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:6f94f90e-a7a4-4b8a-9b83-f0be80cf3d86,timestamp:1581797857
2020-02-15 20:17:37,716 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37516 "GET /noop/predict?data=test HTTP/1.1" 200 1
2020-02-15 20:17:37,716 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:37,716 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:6f94f90e-a7a4-4b8a-9b83-f0be80cf3d86,timestamp:1581797857
2020-02-15 20:17:37,716 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 1, Backend time: 0
2020-02-15 20:17:37,716 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.06|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:6f94f90e-a7a4-4b8a-9b83-f0be80cf3d86,timestamp:1581797857
2020-02-15 20:17:38,434 [INFO ] W-9009-noop_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:38,434 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:50e7bb39-20e2-4450-b223-6392432c9a14,timestamp:1581797858
2020-02-15 20:17:38,434 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:50e7bb39-20e2-4450-b223-6392432c9a14,timestamp:1581797858
2020-02-15 20:17:38,435 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.0|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:50e7bb39-20e2-4450-b223-6392432c9a14,timestamp:1581797858
2020-02-15 20:17:38,435 [INFO ] W-9009-noop_1.11-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.09|#ModelName:noop,Level:Model|#hostname:ip-172-31-17-115,requestID:50e7bb39-20e2-4450-b223-6392432c9a14,timestamp:1581797858
2020-02-15 20:17:38,435 [INFO ] W-9009-noop_1.11 ACCESS_LOG - /127.0.0.1:37516 "POST /predictions/noop HTTP/1.1" 200 27
2020-02-15 20:17:38,435 [INFO ] W-9009-noop_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,435 [DEBUG] W-9009-noop_1.11 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 19
2020-02-15 20:17:38,438 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model noop-config
2020-02-15 20:17:38,438 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model noop-config
2020-02-15 20:17:38,438 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop-config loaded.
2020-02-15 20:17:38,438 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop-config, count: 1
2020-02-15 20:17:38,533 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9010
2020-02-15 20:17:38,534 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5946
2020-02-15 20:17:38,534 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:38,534 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:38,534 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-noop-config_1.0 State change null -> WORKER_STARTED
2020-02-15 20:17:38,534 [INFO ] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9010
2020-02-15 20:17:38,535 [INFO ] W-9010-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9010.
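The MODEL_METRICS entries above all share a StatsD-like layout: a `Metric.Unit:value` head followed by `|#`-separated groups of comma-separated `key:value` tags. A rough parser for that payload, written as an illustration only (it is not part of TorchServe, and assumes the format shown in these log lines holds):

```python
def parse_model_metric(payload):
    """Split one MODEL_METRICS payload into metric name, unit, value, and tags.

    Example payload:
    PredictionTime.Milliseconds:0.08|#ModelName:noop,Level:Model|#hostname:...,requestID:...,timestamp:...
    """
    head, *groups = payload.split("|#")
    # Head is "Name.Unit:value"; split the value off from the right so the
    # metric name may itself contain dots.
    name_unit, value = head.rsplit(":", 1)
    metric, unit = name_unit.split(".", 1)
    tags = {}
    for group in groups:
        for pair in group.split(","):
            key, _, val = pair.partition(":")
            tags[key] = val
    return {"metric": metric, "unit": unit, "value": float(value), "tags": tags}
```

Per-request timings (Preprocess, Inference, Postprocess, Prediction) are tagged with the model name and a requestID, so they can be grouped back into the individual inference calls seen in the access log.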
2020-02-15 20:17:38,538 [INFO ] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:17:38,538 [INFO ] W-9010-noop-config_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop-v1.0-config-tests.mar&model_name=noop-config&initial_workers=1&synchronous=true HTTP/1.1" 200 101
2020-02-15 20:17:38,538 [INFO ] W-9010-noop-config_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,538 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-noop-config_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:38,538 [INFO ] W-9010-noop-config_1.0 TS_METRICS - W-9010-noop-config_1.0.ms:100|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797858
2020-02-15 20:17:38,541 [INFO ] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:17:38,541 [INFO ] W-9010-noop-config_1.0-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:5d691f59-f49a-43fb-ad86-802a2ac19e1a,timestamp:1581797858
2020-02-15 20:17:38,541 [INFO ] W-9010-noop-config_1.0 ACCESS_LOG - /127.0.0.1:37516 "POST /predictions/noop-config HTTP/1.1" 200 1
2020-02-15 20:17:38,541 [INFO ] W-9010-noop-config_1.0-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:5d691f59-f49a-43fb-ad86-802a2ac19e1a,timestamp:1581797858
2020-02-15 20:17:38,541 [INFO ] W-9010-noop-config_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,541 [INFO ] W-9010-noop-config_1.0-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.01|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:5d691f59-f49a-43fb-ad86-802a2ac19e1a,timestamp:1581797858
2020-02-15 20:17:38,541 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:38,541 [INFO ] W-9010-noop-config_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.08|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:5d691f59-f49a-43fb-ad86-802a2ac19e1a,timestamp:1581797858
2020-02-15 20:17:38,542 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop-config version: 1.0
2020-02-15 20:17:38,542 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9010-noop-config_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:38,542 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:17:38,542 [INFO ] epollEventLoopGroup-4-11 org.pytorch.serve.wlm.WorkerThread - 9010 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:38,542 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-noop-config_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:38,542 [DEBUG] W-9010-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:38,543 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop-config unregistered.
2020-02-15 20:17:38,543 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop-config HTTP/1.1" 200 1
2020-02-15 20:17:38,543 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,546 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model noop-config
2020-02-15 20:17:38,546 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model noop-config
2020-02-15 20:17:38,546 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop-config loaded.
2020-02-15 20:17:38,546 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop-config, count: 1
2020-02-15 20:17:38,610 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9011
2020-02-15 20:17:38,611 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5952
2020-02-15 20:17:38,611 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:38,611 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:38,611 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9011-noop-config_1.0 State change null -> WORKER_STARTED
2020-02-15 20:17:38,611 [INFO ] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9011
2020-02-15 20:17:38,611 [INFO ] W-9011-noop-config_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9011.
2020-02-15 20:17:38,614 [INFO ] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:17:38,615 [INFO ] W-9011-noop-config_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop-v1.0-config-tests.mar&model_name=noop-config&initial_workers=1&synchronous=true HTTP/1.1" 200 70
2020-02-15 20:17:38,615 [INFO ] W-9011-noop-config_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,615 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9011-noop-config_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:38,615 [INFO ] W-9011-noop-config_1.0 TS_METRICS - W-9011-noop-config_1.0.ms:69|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797858
2020-02-15 20:17:38,617 [INFO ] W-9011-noop-config_1.0-stdout MODEL_METRICS - PreprocessTime.Milliseconds:0.0|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:c5523c2d-f70f-43ae-a84e-77e534a4bf30,timestamp:1581797858
2020-02-15 20:17:38,617 [INFO ] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:17:38,617 [INFO ] W-9011-noop-config_1.0-stdout MODEL_METRICS - InferenceTime.Milliseconds:0.0|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:c5523c2d-f70f-43ae-a84e-77e534a4bf30,timestamp:1581797858
2020-02-15 20:17:38,617 [INFO ] W-9011-noop-config_1.0 ACCESS_LOG - /127.0.0.1:37516 "POST /predictions/noop-config HTTP/1.1" 200 1
2020-02-15 20:17:38,617 [INFO ] W-9011-noop-config_1.0-stdout MODEL_METRICS - PostprocessTime.Milliseconds:0.01|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:c5523c2d-f70f-43ae-a84e-77e534a4bf30,timestamp:1581797858
2020-02-15 20:17:38,617 [INFO ] W-9011-noop-config_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,617 [INFO ] W-9011-noop-config_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.08|#ModelName:noop-config,Level:Model|#hostname:ip-172-31-17-115,requestID:c5523c2d-f70f-43ae-a84e-77e534a4bf30,timestamp:1581797858
2020-02-15 20:17:38,617 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:38,618 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop-config version: 1.0
2020-02-15 20:17:38,618 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9011-noop-config_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:38,618 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:17:38,618 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - W-9011-noop-config_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:38,618 [INFO ] epollEventLoopGroup-4-12 org.pytorch.serve.wlm.WorkerThread - 9011 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:38,618 [DEBUG] W-9011-noop-config_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:38,619 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop-config unregistered.
2020-02-15 20:17:38,620 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop-config HTTP/1.1" 200 2
2020-02-15 20:17:38,620 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,621 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model respheader
2020-02-15 20:17:38,621 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model respheader
2020-02-15 20:17:38,621 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model respheader loaded.
2020-02-15 20:17:38,621 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: respheader, count: 1
2020-02-15 20:17:38,686 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9012
2020-02-15 20:17:38,686 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5957
2020-02-15 20:17:38,686 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:38,686 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:38,686 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-respheader_1.0 State change null -> WORKER_STARTED
2020-02-15 20:17:38,686 [INFO ] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9012
2020-02-15 20:17:38,687 [INFO ] W-9012-respheader_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9012.
2020-02-15 20:17:38,690 [INFO ] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3
2020-02-15 20:17:38,690 [INFO ] W-9012-respheader_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=respheader-test.mar&model_name=respheader&initial_workers=1&synchronous=true HTTP/1.1" 200 69
2020-02-15 20:17:38,690 [INFO ] W-9012-respheader_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,690 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-respheader_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:38,690 [INFO ] W-9012-respheader_1.0 TS_METRICS - W-9012-respheader_1.0.ms:69|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797858
2020-02-15 20:17:38,692 [INFO ] W-9012-respheader_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.03|#ModelName:respheader,Level:Model|#hostname:ip-172-31-17-115,requestID:ce72bb89-f46b-48a6-a2f3-e4717437b653,timestamp:1581797858
2020-02-15 20:17:38,692 [INFO ] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:17:38,692 [INFO ] W-9012-respheader_1.0 ACCESS_LOG - /127.0.0.1:37516 "POST /predictions/respheader HTTP/1.1" 200 1
2020-02-15 20:17:38,692 [INFO ] W-9012-respheader_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,692 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:38,693 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: respheader version: 1.0
2020-02-15 20:17:38,693 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9012-respheader_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:38,693 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:17:38,693 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-respheader_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:38,693 [INFO ] epollEventLoopGroup-4-13 org.pytorch.serve.wlm.WorkerThread - 9012 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:38,693 [DEBUG] W-9012-respheader_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:38,694 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model respheader unregistered.
2020-02-15 20:17:38,694 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/respheader HTTP/1.1" 200 1
2020-02-15 20:17:38,694 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,696 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model nomanifest
2020-02-15 20:17:38,696 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model nomanifest
2020-02-15 20:17:38,696 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model nomanifest loaded.
2020-02-15 20:17:38,696 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: nomanifest, count: 1
2020-02-15 20:17:38,760 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9013
2020-02-15 20:17:38,760 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5962
2020-02-15 20:17:38,760 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:38,760 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-nomanifest_1.0 State change null -> WORKER_STARTED
2020-02-15 20:17:38,760 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:38,760 [INFO ] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9013
2020-02-15 20:17:38,761 [INFO ] W-9013-nomanifest_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9013.
2020-02-15 20:17:38,763 [INFO ] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:17:38,763 [INFO ] W-9013-nomanifest_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop-no-manifest.mar&model_name=nomanifest&initial_workers=1&synchronous=true HTTP/1.1" 200 68
2020-02-15 20:17:38,763 [INFO ] W-9013-nomanifest_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,764 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-nomanifest_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:38,764 [INFO ] W-9013-nomanifest_1.0 TS_METRICS - W-9013-nomanifest_1.0.ms:68|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797858
2020-02-15 20:17:38,765 [INFO ] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:38,765 [INFO ] W-9013-nomanifest_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.01|#ModelName:nomanifest,Level:Model|#hostname:ip-172-31-17-115,requestID:8e710dcd-6012-438c-9dc1-37095eb42009,timestamp:1581797858
2020-02-15 20:17:38,765 [INFO ] W-9013-nomanifest_1.0 ACCESS_LOG - /127.0.0.1:37516 "POST /predictions/nomanifest HTTP/1.1" 200 1
2020-02-15 20:17:38,765 [INFO ] W-9013-nomanifest_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,765 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 0
2020-02-15 20:17:38,766 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: nomanifest version: 1.0
2020-02-15 20:17:38,766 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9013-nomanifest_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:38,766 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:17:38,766 [INFO ] epollEventLoopGroup-4-14 org.pytorch.serve.wlm.WorkerThread - 9013 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:38,766 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-nomanifest_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:38,766 [DEBUG] W-9013-nomanifest_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:38,767 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model nomanifest unregistered.
2020-02-15 20:17:38,768 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/nomanifest HTTP/1.1" 200 2
2020-02-15 20:17:38,768 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,769 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.archive.ModelArchive - model folder already exists: 29385dfc880480adb4ff8a26d9461d72539fb887
2020-02-15 20:17:38,769 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop_default_model_workers
2020-02-15 20:17:38,769 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop_default_model_workers
2020-02-15 20:17:38,769 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop_default_model_workers loaded.
2020-02-15 20:17:38,769 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - updateModel: noop_default_model_workers, count: 1
2020-02-15 20:17:38,834 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9014
2020-02-15 20:17:38,834 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5967
2020-02-15 20:17:38,834 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:38,834 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - W-9014-noop_default_model_workers_1.11 State change null -> WORKER_STARTED
2020-02-15 20:17:38,834 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:38,834 [INFO ] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9014
2020-02-15 20:17:38,835 [INFO ] W-9014-noop_default_model_workers_1.11-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9014.
2020-02-15 20:17:38,836 [INFO ] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - Backend response time: 0
2020-02-15 20:17:38,836 [INFO ] W-9014-noop_default_model_workers_1.11 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noop_default_model_workers&initial_workers=1&synchronous=true HTTP/1.1" 200 68
2020-02-15 20:17:38,836 [INFO ] W-9014-noop_default_model_workers_1.11 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,836 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - W-9014-noop_default_model_workers_1.11 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:38,836 [INFO ] W-9014-noop_default_model_workers_1.11 TS_METRICS - W-9014-noop_default_model_workers_1.11.ms:66|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797858
2020-02-15 20:17:38,838 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "GET /models/noop_default_model_workers HTTP/1.1" 200 1
2020-02-15 20:17:38,838 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,839 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop_default_model_workers version: 1.11
2020-02-15 20:17:38,839 [DEBUG] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.WorkerThread - W-9014-noop_default_model_workers_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:38,839 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:17:38,839 [INFO ] epollEventLoopGroup-4-15 org.pytorch.serve.wlm.WorkerThread - 9014 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:38,839 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - W-9014-noop_default_model_workers_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:38,839 [DEBUG] W-9014-noop_default_model_workers_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:38,841 [INFO ] epollEventLoopGroup-3-2 org.pytorch.serve.wlm.ModelManager - Model noop_default_model_workers unregistered.
2020-02-15 20:17:38,841 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop_default_model_workers HTTP/1.1" 200 2
2020-02-15 20:17:38,841 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,844 [DEBUG] epollEventLoopGroup-3-3 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model memory_error
2020-02-15 20:17:38,844 [DEBUG] epollEventLoopGroup-3-3 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model memory_error
2020-02-15 20:17:38,844 [INFO ] epollEventLoopGroup-3-3 org.pytorch.serve.wlm.ModelManager - Model memory_error loaded.
2020-02-15 20:17:38,845 [DEBUG] epollEventLoopGroup-3-3 org.pytorch.serve.wlm.ModelManager - updateModel: memory_error, count: 1
2020-02-15 20:17:38,911 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9015
2020-02-15 20:17:38,911 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5974
2020-02-15 20:17:38,911 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
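The `State change` entries above trace each worker through a small lifecycle. Below is a minimal sketch of the transition table as observed in this log; it is inferred from these entries only, not taken from TorchServe's `WorkerState` source, which may permit other transitions:

```python
# Worker state transitions observed in the "State change A -> B" log entries
# of this test run (None stands for the initial "null" state).
OBSERVED_TRANSITIONS = {
    None: {"WORKER_STARTED"},
    "WORKER_STARTED": {"WORKER_MODEL_LOADED", "WORKER_SCALED_DOWN", "WORKER_STOPPED"},
    "WORKER_MODEL_LOADED": {"WORKER_SCALED_DOWN"},
    "WORKER_SCALED_DOWN": {"WORKER_STOPPED", "WORKER_ERROR"},
}


def is_observed_transition(old, new):
    """Return True if the old -> new state change appears somewhere in this log."""
    return new in OBSERVED_TRANSITIONS.get(old, set())
```

Each successful registration in this run walks a worker through null -> WORKER_STARTED -> WORKER_MODEL_LOADED, and each unregistration through WORKER_SCALED_DOWN -> WORKER_STOPPED.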
2020-02-15 20:17:38,911 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:38,911 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-memory_error_1.0 State change null -> WORKER_STARTED
2020-02-15 20:17:38,911 [INFO ] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9015
2020-02-15 20:17:38,912 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9015.
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:17:38,914 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Inference time: 2
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 110, in handle_connection
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("{} - {}".format(code, result))
2020-02-15 20:17:38,914 [INFO ] W-9015-memory_error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: 507 - System out of memory
2020-02-15 20:17:38,914 [INFO ] epollEventLoopGroup-4-16 org.pytorch.serve.wlm.WorkerThread - 9015 Worker disconnected. WORKER_STARTED
2020-02-15 20:17:38,915 [INFO ] W-9015-memory_error_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=loading-memory-error.mar&model_name=memory_error&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 507 71
2020-02-15 20:17:38,915 [INFO ] W-9015-memory_error_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,915 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: memory_error version: 1.0
2020-02-15 20:17:38,915 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-memory_error_1.0 State change WORKER_STARTED -> WORKER_SCALED_DOWN
2020-02-15 20:17:38,915 [WARN ] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkLoadManager - WorkerThread interrupted during waitFor, possible asynch resource cleanup.
2020-02-15 20:17:38,915 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model memory_error
2020-02-15 20:17:38,915 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model memory_error
2020-02-15 20:17:38,915 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-memory_error_1.0 State change WORKER_SCALED_DOWN -> WORKER_ERROR
2020-02-15 20:17:38,915 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-memory_error_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:38,915 [DEBUG] W-9015-memory_error_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:38,919 [DEBUG] epollEventLoopGroup-3-4 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model pred-err
2020-02-15 20:17:38,919 [DEBUG] epollEventLoopGroup-3-4 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model pred-err
2020-02-15 20:17:38,919 [INFO ] epollEventLoopGroup-3-4 org.pytorch.serve.wlm.ModelManager - Model pred-err loaded.
2020-02-15 20:17:38,919 [DEBUG] epollEventLoopGroup-3-4 org.pytorch.serve.wlm.ModelManager - updateModel: pred-err, count: 1
2020-02-15 20:17:38,985 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9016
2020-02-15 20:17:38,985 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5981
2020-02-15 20:17:38,985 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:38,985 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:38,985 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - W-9016-pred-err_1.0 State change null -> WORKER_STARTED
2020-02-15 20:17:38,985 [INFO ] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9016
2020-02-15 20:17:38,986 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9016.
2020-02-15 20:17:38,988 [INFO ] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:17:38,988 [INFO ] W-9016-pred-err_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=prediction-memory-error.mar&model_name=pred-err&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 70
2020-02-15 20:17:38,988 [INFO ] W-9016-pred-err_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:38,988 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - W-9016-pred-err_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:38,988 [INFO ] W-9016-pred-err_1.0 TS_METRICS - W-9016-pred-err_1.0.ms:69|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797858
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - System out of memory
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/service.py", line 100, in predict
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ret = self._entry_point(input_batch, self.context)
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/e625c5ab6b0f6b74bf6a766e5cad578d6548cab8/service.py", line 9, in handle
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise MemoryError("Some Memory Error")
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - MemoryError: Some Memory Error
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0 ACCESS_LOG - /127.0.0.1:37518 "POST /predictions/pred-err HTTP/1.1" 507 1
2020-02-15 20:17:39,010 [INFO ] W-9016-pred-err_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,010 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Inference time: 1
2020-02-15 20:17:39,013 [DEBUG] epollEventLoopGroup-3-6 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: pred-err version: 1.0
2020-02-15 20:17:39,013 [DEBUG] epollEventLoopGroup-3-6 org.pytorch.serve.wlm.WorkerThread - W-9016-pred-err_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:39,013 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:17:39,013 [INFO ] epollEventLoopGroup-4-17 org.pytorch.serve.wlm.WorkerThread - 9016 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:39,013 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - W-9016-pred-err_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:39,013 [DEBUG] W-9016-pred-err_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:39,015 [INFO ] epollEventLoopGroup-3-6 org.pytorch.serve.wlm.ModelManager - Model pred-err unregistered.
2020-02-15 20:17:39,015 [INFO ] epollEventLoopGroup-3-6 ACCESS_LOG - 0.0.0.0 "DELETE /models/pred-err HTTP/1.1" 200 2
2020-02-15 20:17:39,015 [INFO ] epollEventLoopGroup-3-6 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,070 [INFO ] pool-2-thread-1 TS_METRICS - CPUUtilization.Percent:0.0|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797859
2020-02-15 20:17:39,071 [INFO ] pool-2-thread-1 TS_METRICS - DiskAvailable.Gigabytes:38.769737243652344|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797859
2020-02-15 20:17:39,071 [INFO ] pool-2-thread-1 TS_METRICS - DiskUsage.Gigabytes:77.48079299926758|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797859
2020-02-15 20:17:39,071 [INFO ] pool-2-thread-1 TS_METRICS - DiskUtilization.Percent:66.6|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797859
2020-02-15 20:17:39,071 [INFO ] pool-2-thread-1 TS_METRICS - MemoryAvailable.Megabytes:241537.10546875|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797859
2020-02-15 20:17:39,071 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUsed.Megabytes:2352.40625|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797859
2020-02-15 20:17:39,071 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUtilization.Percent:1.8|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797859
2020-02-15 20:17:39,520 [DEBUG] epollEventLoopGroup-3-7 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model err_batch
2020-02-15 20:17:39,520 [DEBUG] epollEventLoopGroup-3-7 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model err_batch
2020-02-15 20:17:39,520 [INFO ] epollEventLoopGroup-3-7 org.pytorch.serve.wlm.ModelManager - Model err_batch loaded.
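The `TS_METRICS` entries above all share one payload shape, `Name.Unit:value|#dimensions,timestamp:…`. Below is a small parser sketch for that shape; the field layout is inferred only from the lines in this log, not from TorchServe's metrics specification:

```python
def parse_ts_metric(entry):
    """Parse a TS_METRICS payload such as
    'MemoryUtilization.Percent:1.8|#Level:Host|#hostname:ip-...,timestamp:1581797859'.

    Returns (name, unit, value, dimensions). Assumes a simple Name.Unit metric;
    worker-timing metrics like 'W-9012-respheader_1.0.ms:69' contain extra dots
    and would need special handling.
    """
    metric, _, rest = entry.partition("|#")          # split value part from dimensions
    name_unit, _, value = metric.rpartition(":")     # 'Name.Unit' and numeric value
    name, _, unit = name_unit.partition(".")
    dims = {}
    # remaining '|#' separators and commas both delimit key:value dimension pairs
    for part in rest.replace("|#", ",").split(","):
        key, _, val = part.partition(":")
        dims[key] = val
    return name, unit, float(value), dims
```

This is enough to turn the host metrics dump above (CPUUtilization, DiskAvailable, MemoryUsed, and so on) into structured records for plotting or alerting.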
2020-02-15 20:17:39,520 [DEBUG] epollEventLoopGroup-3-7 org.pytorch.serve.wlm.ModelManager - updateModel: err_batch, count: 1
2020-02-15 20:17:39,585 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9017
2020-02-15 20:17:39,586 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]5996
2020-02-15 20:17:39,586 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:39,586 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:39,586 [DEBUG] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - W-9017-err_batch_1.0 State change null -> WORKER_STARTED
2020-02-15 20:17:39,586 [INFO ] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9017
2020-02-15 20:17:39,587 [INFO ] W-9017-err_batch_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9017.
2020-02-15 20:17:39,590 [INFO ] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2020-02-15 20:17:39,590 [INFO ] W-9017-err_batch_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=error_batch.mar&model_name=err_batch&initial_workers=1&synchronous=true HTTP/1.1" 200 71
2020-02-15 20:17:39,590 [INFO ] W-9017-err_batch_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,591 [DEBUG] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - W-9017-err_batch_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:17:39,591 [INFO ] W-9017-err_batch_1.0 TS_METRICS - W-9017-err_batch_1.0.ms:71|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797859
2020-02-15 20:17:39,611 [INFO ] W-9017-err_batch_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2020-02-15 20:17:39,611 [INFO ] W-9017-err_batch_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.01|#ModelName:err_batch,Level:Model|#hostname:ip-172-31-17-115,requestID:b94d2b79-67b7-430c-aeca-d1cc7a7d7202,timestamp:1581797859
2020-02-15 20:17:39,611 [INFO ] W-9017-err_batch_1.0 ACCESS_LOG - /127.0.0.1:37520 "POST /predictions/err_batch HTTP/1.1" 507 1
2020-02-15 20:17:39,611 [INFO ] W-9017-err_batch_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,611 [DEBUG] W-9017-err_batch_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Backend time: 1
2020-02-15 20:17:39,612 [ERROR] epollEventLoopGroup-14-1 org.pytorch.serve.ModelServerTest$TestHandler - Unknown exception
io.netty.channel.unix.Errors$NativeIoException: syscall:read(..) failed: Connection reset by peer
    at io.netty.channel.unix.FileDescriptor.readAddress(..)(Unknown Source)
2020-02-15 20:17:39,631 [INFO ] epollEventLoopGroup-3-9 ACCESS_LOG - /127.0.0.1:37522 "GET / HTTP/1.1" 405 0
2020-02-15 20:17:39,631 [INFO ] epollEventLoopGroup-3-9 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,648 [INFO ] epollEventLoopGroup-3-10 ACCESS_LOG - /127.0.0.1:37524 "GET /InvalidUrl HTTP/1.1" 404 0
2020-02-15 20:17:39,649 [INFO ] epollEventLoopGroup-3-10 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,666 [INFO ] epollEventLoopGroup-3-11 ACCESS_LOG - /127.0.0.1:37526 "GET /predictions HTTP/1.1" 404 0
2020-02-15 20:17:39,667 [INFO ] epollEventLoopGroup-3-11 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,683 [INFO ] epollEventLoopGroup-3-12 ACCESS_LOG - /127.0.0.1:37528 "OPTIONS /predictions/InvalidModel HTTP/1.1" 404 0
2020-02-15 20:17:39,683 [INFO ] epollEventLoopGroup-3-12 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,700 [INFO ] epollEventLoopGroup-3-13 ACCESS_LOG - /127.0.0.1:37530 "GET /predictions/InvalidModel HTTP/1.1" 404 0
2020-02-15 20:17:39,700 [INFO ] epollEventLoopGroup-3-13 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,704 [INFO ] epollEventLoopGroup-3-14 ACCESS_LOG - 0.0.0.0 "GET /InvalidUrl HTTP/1.1" 404 0
2020-02-15 20:17:39,705 [INFO ] epollEventLoopGroup-3-14 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,708 [INFO ] epollEventLoopGroup-3-15 ACCESS_LOG - 0.0.0.0 "PUT /models HTTP/1.1" 405 0
2020-02-15 20:17:39,708 [INFO ] epollEventLoopGroup-3-15 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,713 [INFO ] epollEventLoopGroup-3-16 ACCESS_LOG - 0.0.0.0 "POST /models/noop HTTP/1.1" 405 0
2020-02-15 20:17:39,713 [INFO ] epollEventLoopGroup-3-16 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,717 [INFO ] epollEventLoopGroup-3-17 ACCESS_LOG - 0.0.0.0 "GET /models/InvalidModel HTTP/1.1" 404 0
2020-02-15 20:17:39,717 [INFO ] epollEventLoopGroup-3-17 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,737 [INFO ] epollEventLoopGroup-3-18 ACCESS_LOG - 0.0.0.0 "POST /models HTTP/1.1" 400 1
2020-02-15 20:17:39,738 [INFO ] epollEventLoopGroup-3-18 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,742 [INFO ] epollEventLoopGroup-3-19 ACCESS_LOG - 0.0.0.0 "POST /models?url=InvalidUrl&runtime=InvalidRuntime HTTP/1.1" 400 0
2020-02-15 20:17:39,742 [INFO ] epollEventLoopGroup-3-19 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,746 [INFO ] epollEventLoopGroup-3-20 ACCESS_LOG - 0.0.0.0 "POST /models?url=InvalidUrl HTTP/1.1" 404 0
2020-02-15 20:17:39,747 [INFO ] epollEventLoopGroup-3-20 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,751 [DEBUG] epollEventLoopGroup-3-21 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop_v1.0
2020-02-15 20:17:39,752 [INFO ] epollEventLoopGroup-3-21 ACCESS_LOG - 0.0.0.0 "POST /models?url=noop.mar&model_name=noop_v1.0&runtime=python&synchronous=false HTTP/1.1" 409 2
2020-02-15 20:17:39,752 [INFO ] epollEventLoopGroup-3-21 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,755 [INFO ] epollEventLoopGroup-3-22 ACCESS_LOG - 0.0.0.0 "POST /models?url=http%3A%2F%2Flocalhost%3Aaaaa HTTP/1.1" 404 0
2020-02-15 20:17:39,756 [INFO ] epollEventLoopGroup-3-22 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,767 [INFO ] epollEventLoopGroup-3-23 ACCESS_LOG - 0.0.0.0 "POST /models?url=http%3A%2F%2Flocalhost%3A18888%2Ffake.mar&synchronous=false HTTP/1.1" 400 9
2020-02-15 20:17:39,767 [INFO ] epollEventLoopGroup-3-23 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,799 [INFO ] epollEventLoopGroup-3-25 ACCESS_LOG - /127.0.0.1:37534 "GET /fake.mar HTTP/1.1" 404 0
2020-02-15 20:17:39,799 [INFO ] epollEventLoopGroup-3-25 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,800 [INFO ] epollEventLoopGroup-3-24 ACCESS_LOG - 0.0.0.0 "POST /models?url=https%3A%2F%2Flocalhost%3A8443%2Ffake.mar&synchronous=false HTTP/1.1" 400 30
2020-02-15 20:17:39,800 [INFO ] epollEventLoopGroup-3-24 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,805 [INFO ] epollEventLoopGroup-3-26 ACCESS_LOG - 0.0.0.0 "POST /models?url=..%2Ffake.mar&synchronous=false HTTP/1.1" 404 0
2020-02-15 20:17:39,806 [INFO ] epollEventLoopGroup-3-26 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,809 [INFO ] epollEventLoopGroup-3-27 ACCESS_LOG - 0.0.0.0 "PUT /models/fake HTTP/1.1" 404 0
2020-02-15 20:17:39,809 [INFO ] epollEventLoopGroup-3-27 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,814 [DEBUG] epollEventLoopGroup-3-28 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model init-error
2020-02-15 20:17:39,814 [DEBUG] epollEventLoopGroup-3-28 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model init-error
2020-02-15 20:17:39,814 [INFO ] epollEventLoopGroup-3-28 org.pytorch.serve.wlm.ModelManager - Model init-error loaded.
2020-02-15 20:17:39,814 [INFO ] epollEventLoopGroup-3-28 ACCESS_LOG - 0.0.0.0 "POST /models?url=init-error.mar&model_name=init-error&synchronous=false HTTP/1.1" 200 2
2020-02-15 20:17:39,814 [INFO ] epollEventLoopGroup-3-28 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,815 [DEBUG] epollEventLoopGroup-3-28 org.pytorch.serve.wlm.ModelManager - updateModel: init-error, count: 1
2020-02-15 20:17:39,882 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:17:39,882 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]6044
2020-02-15 20:17:39,883 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:39,883 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:39,883 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change null -> WORKER_STARTED
2020-02-15 20:17:39,883 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:17:39,884 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:17:39,886 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:17:39,886 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:17:39,886 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:17:39,886 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:17:39,886 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:17:39,886 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:17:39,886 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:17:39,887 [INFO ] epollEventLoopGroup-4-19 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:17:39,887 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:17:39,887 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:17:39,887 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:17:39,887 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:17:39,887 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:17:39,887 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:17:39,887 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:17:39,888 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:17:39,888 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:17:39,888 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:17:39,888 [INFO ] W-9018-init-error_1.0 ACCESS_LOG - 0.0.0.0 "PUT /models/init-error?synchronous=true&min_worker=1 HTTP/1.1" 500 73
2020-02-15 20:17:39,888 [INFO ] W-9018-init-error_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,888 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:17:39,891 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 1 seconds.
2020-02-15 20:17:39,893 [WARN ] epollEventLoopGroup-3-29 org.pytorch.serve.wlm.ModelManager - Model not found: fake
2020-02-15 20:17:39,893 [INFO ] epollEventLoopGroup-3-29 ACCESS_LOG - 0.0.0.0 "DELETE /models/fake HTTP/1.1" 404 0
2020-02-15 20:17:39,893 [INFO ] epollEventLoopGroup-3-29 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,895 [DEBUG] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop_v1.0 version: 1.11
2020-02-15 20:17:39,896 [DEBUG] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.WorkerThread - W-9005-noop_v1.0_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:17:39,896 [WARN ] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.WorkLoadManager - WorkerThread timed out while cleaning, please resend request.
2020-02-15 20:17:39,896 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:17:39,896 [INFO ] epollEventLoopGroup-4-6 org.pytorch.serve.wlm.WorkerThread - 9005 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:17:39,896 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - W-9005-noop_v1.0_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:17:39,896 [DEBUG] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.11 for model noop_v1.0
2020-02-15 20:17:39,896 [DEBUG] epollEventLoopGroup-3-30 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.11 for model noop_v1.0
2020-02-15 20:17:39,896 [DEBUG] W-9005-noop_v1.0_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:17:39,896 [INFO ] epollEventLoopGroup-3-30 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop_v1.0 HTTP/1.1" 408 1
2020-02-15 20:17:39,896 [INFO ] epollEventLoopGroup-3-30 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:39,898 [DEBUG] epollEventLoopGroup-3-31 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noop_v1.0 version: 1.11
2020-02-15 20:17:39,899 [INFO ] epollEventLoopGroup-3-31 org.pytorch.serve.wlm.ModelManager - Model noop_v1.0 unregistered.
2020-02-15 20:17:39,899 [INFO ] epollEventLoopGroup-3-31 ACCESS_LOG - 0.0.0.0 "DELETE /models/noop_v1.0 HTTP/1.1" 200 1
2020-02-15 20:17:39,899 [INFO ] epollEventLoopGroup-3-31 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:17:40,958 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:17:40,958 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]6056
2020-02-15 20:17:40,958 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:40,958 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:40,958 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:17:40,958 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:17:40,959 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:17:40,960 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:17:40,960 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:17:40,960 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:17:40,960 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:17:40,960 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:17:40,960 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:17:40,960 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:17:40,960 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:17:40,960 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:17:40,961 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:17:40,961 [INFO ] epollEventLoopGroup-4-20 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:17:40,961 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:17:40,961 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:17:40,961 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:17:40,961 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:17:40,961 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:17:40,961 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:17:40,961 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:17:40,962 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:17:40,962 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 1 seconds.
2020-02-15 20:17:42,027 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:17:42,027 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]6061
2020-02-15 20:17:42,027 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:42,027 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:42,027 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:17:42,027 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:17:42,028 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:17:42,029 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:17:42,029 [INFO ] epollEventLoopGroup-4-21 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:17:42,030 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:17:42,030 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:17:42,030 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:17:42,030 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:17:42,033 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 2 seconds.
2020-02-15 20:17:44,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:17:44,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]6066
2020-02-15 20:17:44,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:44,095 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:44,095 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:17:44,095 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:17:44,096 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:17:44,097 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:17:44,098 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:17:44,098 [INFO ] epollEventLoopGroup-4-22 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:17:44,098 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:17:44,098 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:17:44,098 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:17:44,098 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:17:44,099 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 3 seconds.
2020-02-15 20:17:47,164 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:17:47,164 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]6071
2020-02-15 20:17:47,164 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:47,164 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:47,164 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:17:47,164 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:17:47,165 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:17:47,166 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:17:47,166 [INFO ] epollEventLoopGroup-4-23 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:17:47,167 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:17:47,167 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:17:47,167 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:17:47,167 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:17:47,167 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:17:47,167 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:17:47,167 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:17:47,167 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 5 seconds.
2020-02-15 20:17:47,170 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:17:52,233 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:17:52,233 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]6076
2020-02-15 20:17:52,233 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:17:52,233 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:17:52,233 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:17:52,234 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:17:52,234 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:17:52,235 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:17:52,236 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:17:52,236 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:17:52,236 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:17:52,236 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:17:52,236 [INFO ] epollEventLoopGroup-4-24 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:17:52,236 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:17:52,236 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:17:52,236 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:17:52,237 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 8 seconds.
2020-02-15 20:18:00,303 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:18:00,303 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]6081
2020-02-15 20:18:00,303 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:18:00,303 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:18:00,303 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:18:00,303 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:18:00,305 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:18:00,306 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:18:00,307 [INFO ] epollEventLoopGroup-4-25 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:18:00,307 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:18:00,307 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:18:00,307 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:18:00,307 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 13 seconds.
2020-02-15 20:18:09,017 [ERROR] epollEventLoopGroup-12-1 org.pytorch.serve.ModelServerTest$TestHandler - Unknown exception
io.netty.handler.timeout.ReadTimeoutException
2020-02-15 20:18:09,900 [ERROR] epollEventLoopGroup-36-1 org.pytorch.serve.ModelServerTest$TestHandler - Unknown exception
io.netty.handler.timeout.ReadTimeoutException
2020-02-15 20:18:09,903 [WARN ] epollEventLoopGroup-3-32 org.pytorch.serve.wlm.ModelManager - Cannot remove default version 1.11 for model noopversioned
2020-02-15 20:18:09,903 [ERROR] epollEventLoopGroup-3-32 org.pytorch.serve.http.HttpRequestHandler -
org.pytorch.serve.http.InternalServerException: Cannot remove default version for model noopversioned
    at org.pytorch.serve.http.ManagementRequestHandler.handleUnregisterModel(ManagementRequestHandler.java:272)
    at org.pytorch.serve.http.ManagementRequestHandler.handleRequest(ManagementRequestHandler.java:84)
    at org.pytorch.serve.http.ApiDescriptionRequestHandler.handleRequest(ApiDescriptionRequestHandler.java:37)
    at org.pytorch.serve.http.HttpRequestHandler.channelRead0(HttpRequestHandler.java:42)
    at org.pytorch.serve.http.HttpRequestHandler.channelRead0(HttpRequestHandler.java:19)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808)
    at io.netty.channel.epoll.EpollDomainSocketChannel$EpollDomainUnsafe.epollInReady(EpollDomainSocketChannel.java:138)
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:18:09,905 [INFO ] epollEventLoopGroup-3-32 ACCESS_LOG - 0.0.0.0 "DELETE /models/noopversioned/1.11 HTTP/1.1" 500 2
2020-02-15 20:18:09,905 [INFO ] epollEventLoopGroup-3-32 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:18:09,907 [DEBUG] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noopversioned version: 1.21
2020-02-15 20:18:09,908 [DEBUG] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.WorkerThread - W-9008-noopversioned_1.21 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:18:09,908 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:18:09,908 [INFO ] epollEventLoopGroup-4-9 org.pytorch.serve.wlm.WorkerThread - 9008 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:18:09,908 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - W-9008-noopversioned_1.21 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:18:09,908 [DEBUG] W-9008-noopversioned_1.21 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:18:09,909 [INFO ] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.ModelManager - Model noopversioned unregistered.
2020-02-15 20:18:09,910 [INFO ] epollEventLoopGroup-3-33 ACCESS_LOG - 0.0.0.0 "DELETE /models/noopversioned/1.21 HTTP/1.1" 200 3
2020-02-15 20:18:09,910 [INFO ] epollEventLoopGroup-3-33 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:18:09,910 [DEBUG] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: noopversioned version: 1.11
2020-02-15 20:18:09,910 [DEBUG] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.WorkerThread - W-9007-noopversioned_1.11 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2020-02-15 20:18:09,911 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2020-02-15 20:18:09,911 [INFO ] epollEventLoopGroup-4-8 org.pytorch.serve.wlm.WorkerThread - 9007 Worker disconnected. WORKER_SCALED_DOWN
2020-02-15 20:18:09,911 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - W-9007-noopversioned_1.11 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2020-02-15 20:18:09,911 [DEBUG] W-9007-noopversioned_1.11 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2020-02-15 20:18:09,912 [INFO ] epollEventLoopGroup-3-33 org.pytorch.serve.wlm.ModelManager - Model noopversioned unregistered.
2020-02-15 20:18:09,912 [INFO ] epollEventLoopGroup-3-33 ACCESS_LOG - 0.0.0.0 "DELETE /models/noopversioned/1.11 HTTP/1.1" 200 2
2020-02-15 20:18:09,912 [INFO ] epollEventLoopGroup-3-33 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null

Gradle suite > Gradle test > org.pytorch.serve.ModelServerTest.test PASSED

Gradle suite > Gradle test > org.pytorch.serve.ModelServerTest.testTS STANDARD_OUT
2020-02-15 20:18:10,001 [DEBUG] epollEventLoopGroup-3-35 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model mnist
2020-02-15 20:18:10,002 [DEBUG] epollEventLoopGroup-3-35 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model mnist
2020-02-15 20:18:10,002 [INFO ] epollEventLoopGroup-3-35 org.pytorch.serve.wlm.ModelManager - Model mnist loaded.
2020-02-15 20:18:10,002 [DEBUG] epollEventLoopGroup-3-35 org.pytorch.serve.wlm.ModelManager - updateModel: mnist, count: 1
2020-02-15 20:18:10,069 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9019
2020-02-15 20:18:10,069 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]6094
2020-02-15 20:18:10,069 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
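The noopversioned sequence above shows the versioned-model rule the management API enforces: DELETE on default version 1.11 returns 500 while version 1.21 still exists, and the same DELETE succeeds with 200 once 1.21 has been removed. A tiny sketch of that invariant (a hypothetical `ModelVersions` class, not TorchServe's actual `ModelVersionedRefs` API):

```python
# Hypothetical model of the "cannot remove default version" rule seen above.

class ModelVersions:
    def __init__(self, default, versions):
        self.default = default
        self.versions = set(versions)

    def remove(self, version):
        if version == self.default and len(self.versions) > 1:
            # Mirrors "InternalServerException: Cannot remove default version
            # for model noopversioned" (HTTP 500 in the access log).
            raise ValueError("Cannot remove default version")
        self.versions.discard(version)
        # True once no versions remain, i.e. "Model ... unregistered."
        return len(self.versions) == 0

refs = ModelVersions(default="1.11", versions={"1.11", "1.21"})
```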
2020-02-15 20:18:10,069 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:18:10,069 [DEBUG] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - W-9019-mnist_1.0 State change null -> WORKER_STARTED
2020-02-15 20:18:10,069 [INFO ] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9019
2020-02-15 20:18:10,070 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9019.
2020-02-15 20:18:13,374 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9018
2020-02-15 20:18:13,375 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]6112
2020-02-15 20:18:13,375 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-02-15 20:18:13,375 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.1
2020-02-15 20:18:13,375 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-02-15 20:18:13,375 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2020-02-15 20:18:13,376 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9018.
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process die.
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 163, in <module>
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 141, in run_server
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 105, in handle_connection
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_service_worker.py", line 83, in load_model
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/model_loader.py", line 107, in load
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/070632063adb276c032a224604952e4e910272ba/invalid_service.py", line 7, in handle
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise RuntimeError("Initialize failure.")
2020-02-15 20:18:13,377 [INFO ] W-9018-init-error_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Initialize failure.
2020-02-15 20:18:13,378 [INFO ] epollEventLoopGroup-4-27 org.pytorch.serve.wlm.WorkerThread - 9018 Worker disconnected. WORKER_STARTED
2020-02-15 20:18:13,378 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-02-15 20:18:13,378 [WARN ] W-9018-init-error_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: init-error, error: Worker died.
2020-02-15 20:18:13,378 [DEBUG] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-init-error_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-02-15 20:18:13,378 [INFO ] W-9018-init-error_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9018 in 21 seconds.
2020-02-15 20:18:14,164 [INFO ] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 4094
2020-02-15 20:18:14,165 [INFO ] W-9019-mnist_1.0 ACCESS_LOG - 0.0.0.0 "POST /models?url=mnist.mar&model_name=mnist&runtime=python&initial_workers=1&synchronous=true HTTP/1.1" 200 4249
2020-02-15 20:18:14,165 [INFO ] W-9019-mnist_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:18:14,165 [DEBUG] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - W-9019-mnist_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-02-15 20:18:14,165 [INFO ] W-9019-mnist_1.0 TS_METRICS - W-9019-mnist_1.0.ms:4163|#Level:Host|#hostname:ip-172-31-17-115,timestamp:1581797894
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Invoking custom service failed.
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/pip-req-build-88c_hysm/ts/service.py", line 100, in predict
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ret = self._entry_point(input_batch, self.context)
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/235a35d7b420d6b7954e88cce0772caf1f76942f/mnist_handler.py", line 92, in handle
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - data = _service.inference(data)
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/235a35d7b420d6b7954e88cce0772caf1f76942f/mnist_handler.py", line 71, in inference
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - outputs = self.model.forward(inputs)
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/235a35d7b420d6b7954e88cce0772caf1f76942f/mnist.py", line 17, in forward
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - x = self.conv1(x)
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 36
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result = self.forward(*input, **kwargs)
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 345, in forward
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - return self.conv2d_forward(input, self.weight)
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 341, in conv2d_forward
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - return F.conv2d(input, weight, self.bias, self.stride,
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0 ACCESS_LOG - /127.0.0.1:37536 "POST /predictions/mnist HTTP/1.1" 503 37
2020-02-15 20:18:14,203 [INFO ] W-9019-mnist_1.0 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:ip-172-31-17-115,timestamp:null
2020-02-15 20:18:14,203 [DEBUG] W-9019-mnist_1.0 org.pytorch.serve.wlm.Job - Waiting time: 0, Inference time: 37
2020-02-15 20:18:14,204 [ERROR] epollEventLoopGroup-39-1 org.pytorch.serve.ModelServerTest$TestHandler - Unknown exception
io.netty.channel.unix.Errors$NativeIoException: syscall:read(..) failed: Connection reset by peer
    at io.netty.channel.unix.FileDescriptor.readAddress(..)(Unknown Source)

Gradle suite > Gradle test > org.pytorch.serve.ModelServerTest.testTS FAILED
    java.lang.AssertionError at ModelServerTest.java:508

Gradle suite STANDARD_OUT
2020-02-15 20:18:14,208 [INFO ] epollEventLoopGroup-2-1 org.pytorch.serve.ModelServer - Inference model server stopped.
2020-02-15 20:18:14,208 [INFO ] epollEventLoopGroup-2-2 org.pytorch.serve.ModelServer - Management model server stopped.

5 tests completed, 1 failed

> Task :server:test FAILED
> Task :server:jacocoTestReport

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':server:test'.
> There were failing tests. See the report at: file:///tmp/pip-req-build-88c_hysm/frontend/server/build/reports/tests/test/index.html

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org

BUILD FAILED in 1m 7s
30 actionable tasks: 30 executed

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-req-build-88c_hysm/setup.py", line 137, in <module>
    setup(
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/setuptools/__init__.py", line 144, in setup
    return distutils.core.setup(**attrs)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/site-packages/setuptools/command/install.py", line 61, in run
    return orig.install.run(self)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/command/install.py", line 545, in run
    self.run_command('build')
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/tmp/pip-req-build-88c_hysm/setup.py", line 98, in run
    self.run_command('build_frontend')
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/tmp/pip-req-build-88c_hysm/setup.py", line 85, in run
    subprocess.check_call('frontend/gradlew -p frontend clean build', shell=True)
  File "/home/ubuntu/anaconda3/envs/serve/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'frontend/gradlew -p frontend clean build' returned non-zero exit status 1.
----------------------------------------
ERROR: Command errored out with exit status 1: /home/ubuntu/anaconda3/envs/serve/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-88c_hysm/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-88c_hysm/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-wtp_gdlz/install-record.txt --single-version-externally-managed --compile --install-headers /home/ubuntu/anaconda3/envs/serve/include/python3.8/torchserve
Check the logs for full command output.
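The testTS failure that sank the build traces back to the RuntimeError in mnist_handler.py's inference: the input batch was moved to CUDA while the model's weights stayed on the CPU. A hedged sketch of the usual fix (a stand-in Conv2d layer, not the actual mnist model): resolve a device once, move the model there at load time, and move every input batch to that same device before calling forward.

```python
import torch

# Stand-in for the conv1 layer that raised "Input type (torch.cuda.FloatTensor)
# and weight type (torch.FloatTensor) should be the same" -- not the real
# mnist.py model, just a minimal reproduction of the device-alignment fix.
model = torch.nn.Conv2d(1, 10, kernel_size=5)

# Resolve the device once; falls back to CPU when no GPU is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)  # move the weights at load/initialize time

# Move each incoming batch to the SAME device before inference.
inputs = torch.randn(1, 1, 28, 28).to(device)
with torch.no_grad():
    outputs = model(inputs)  # weights and inputs now match; no RuntimeError
```

Mixing devices the way the failing handler did (inputs on `cuda`, weights on `cpu`) raises exactly the error in the trace above, so aligning both at handler initialization is the conventional remedy.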