[analytics] Made suppression fixes #324

Merged
2 commits merged into main from client-error-fixes on Jun 1, 2023

Conversation


@rahul-tuli rahul-tuli commented Jun 1, 2023

This PR makes the requested suppression fixes; the work is twofold:

  • Disable all logs from the requests and urllib3 loggers: added a context manager for this (see the sketch after this list)
  • Disable all stderr from is_gdpr_country: accomplished using contextlib.redirect_stderr()
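For context, a minimal sketch of both approaches is below. The helper names (suppress_http_logs, call_quietly) are illustrative only; the actual implementations live in this PR's diff under the analytics utilities.

import contextlib
import logging
import os


@contextlib.contextmanager
def suppress_http_logs():
    """Temporarily raise the `requests` and `urllib3` loggers above CRITICAL."""
    loggers = [logging.getLogger(name) for name in ("requests", "urllib3")]
    previous = [logger.level for logger in loggers]
    for logger in loggers:
        logger.setLevel(logging.CRITICAL + 1)
    try:
        yield
    finally:
        # restore the original levels so suppression stays scoped to this block
        for logger, level in zip(loggers, previous):
            logger.setLevel(level)


def call_quietly(fn, *args, **kwargs):
    """Call `fn` (e.g. is_gdpr_country) with anything it writes to stderr discarded."""
    with open(os.devnull, "w") as devnull, contextlib.redirect_stderr(devnull):
        return fn(*args, **kwargs)

Restoring the previous log levels in the finally block keeps the suppression scoped to the analytics calls rather than muting HTTP logging globally.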

After this change, @KSGulin and I verified locally that utilities such as deepsparse.check_hardware still produce their expected output:

deepsparse.check_hardware 
/home/rahul/.venvs/sparsezoo/lib/python3.10/site-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.16) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
GenuineIntel CPU detected with 10 cores. (1 sockets with 10 cores each)
DeepSparse FP32 model performance supported: True.
DeepSparse INT8 (quantized) model performance supported: TRUE (emulated).

Non VNNI system detected. Performance speedups for INT8 (quantized) models is available, but will be slower compared with a VNNI system. Set NM_FAST_VNNI_EMULATION=True in the environment to enable faster emulated inference which may have a minor effect on accuracy.

Additional CPU info: {'L1_data_cache_size': 32768, 'L1_instruction_cache_size': 32768, 'L2_cache_size': 1048576, 'L3_cache_size': 14417920, 'architecture': 'x86_64', 'available_cores_per_socket': 10, 'available_num_cores': 10, 'available_num_hw_threads': 20, 'available_num_numa': 1, 'available_num_sockets': 1, 'available_sockets': 1, 'available_threads_per_core': 2, 'bf16': False, 'cores_per_socket': 10, 'dotprod': False, 'i8mm': False, 'isa': 'avx512', 'num_cores': 10, 'num_hw_threads': 20, 'num_numa': 1, 'num_sockets': 1, 'threads_per_core': 2, 'vbmi': False, 'vbmi2': False, 'vendor': 'GenuineIntel', 'vendor_id': 'Intel', 'vendor_model': 'Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz', 'vnni': False, 'zen1': False}

We also verified that the deepsparse integration tests are green ✅:

(sparsezoo) 🥃 deepsparse (main) 💍 make test_integrations 
Running package integrations tests
=========================================================================== test session starts ===========================================================================
platform linux -- Python 3.10.11, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/rahul/github_projects/deepsparse
configfile: pyproject.toml
plugins: flaky-3.7.0, anyio-3.7.0
collected 2 items                                                                                                                                                         

integrations/haystack/tests/test_smoke.py ..                                                                                                                        [100%]

Note: it will take some time for the local-install checks here in deepsparse to go green, since they install the sparsezoo nightly directly from PyPI.

We also ran the following script to verify that concurrent invocations of the CLI behave correctly:

from multiprocessing import Pool
import subprocess


def task(*args, **kwargs):
    # Each worker shells out to the CLI and captures its stdout
    return subprocess.check_output(["deepsparse.check_hardware"])


# Hammer the CLI from 100 worker processes (1000 invocations total) to make
# sure the suppression changes do not interfere with output under concurrency
with Pool(100) as p:
    p.map(task, list(range(1000)))

out = task()
print(out)

Output:

b"GenuineIntel CPU detected with 10 cores. (1 sockets with 10 cores each)\nDeepSparse FP32 model performance supported: True.\nDeepSparse INT8 (quantized) model performance supported: TRUE (emulated).\n\nNon VNNI system detected. Performance speedups for INT8 (quantized) models is available, but will be slower compared with a VNNI system. Set NM_FAST_VNNI_EMULATION=True in the environment to enable faster emulated inference which may have a minor effect on accuracy.\n\nAdditional CPU info: {'L1_data_cache_size': 32768, 'L1_instruction_cache_size': 32768, 'L2_cache_size': 1048576, 'L3_cache_size': 14417920, 'architecture': 'x86_64', 'available_cores_per_socket': 10, 'available_num_cores': 10, 'available_num_hw_threads': 20, 'available_num_numa': 1, 'available_num_sockets': 1, 'available_sockets': 1, 'available_threads_per_core': 2, 'bf16': False, 'cores_per_socket': 10, 'dotprod': False, 'i8mm': False, 'isa': 'avx512', 'num_cores': 10, 'num_hw_threads': 20, 'num_numa': 1, 'num_sockets': 1, 'threads_per_core': 2, 'vbmi': False, 'vbmi2': False, 'vendor': 'GenuineIntel', 'vendor_id': 'Intel', 'vendor_model': 'Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz', 'vnni': False, 'zen1': False}\n"

@markurtz markurtz merged commit 960b213 into main Jun 1, 2023
4 checks passed
@markurtz markurtz deleted the client-error-fixes branch June 1, 2023 16:53
rahul-tuli added a commit that referenced this pull request Jun 1, 2023
* Made suppression fixes

* Suppress request logs while fetching external ip

(cherry picked from commit 960b213)
markurtz pushed a commit that referenced this pull request Jun 1, 2023
* Made suppression fixes

* Suppress request logs while fetching external ip

(cherry picked from commit 960b213)