
Commit 4612be8
Updated conda packs; changed 'developed on' to 'compatible with'
Parent: 4ed119d

30 files changed: +72 −73 lines

notebook_examples/accelerate-scikit_learn-with-intel_extension.ipynb

Lines changed: 2 additions & 2 deletions

@@ -46,7 +46,7 @@
 "\n",
 "This notebook demonstrates an easy way to enhance performance of scikit-learn models using Intel provided Python accelerators. Acceleration is achieved by using the Intel(R) oneAPI Data Analytics Library (oneDAL) that allows fast use of the framework suited for Data Scientists or Machine Learning users. The Intel Extension for Scikit-learn was created to give data scientists the easiest way to get better performance while using the familiar `scikit-learn` package.\n",
 "\n",
-"Developed on [Intel Extension for Scikit-learn 2021.3.0](https://docs.oracle.com/iaas/data-science/using/conda-sklearn-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
+"Compatible conda pack: [Intel Extension for Scikit-learn 2021.3.0](https://docs.oracle.com/iaas/data-science/using/conda-sklearn-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
 "\n",
 "## Contents:\n",
 "\n",
@@ -300,7 +300,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.8"
 },
 "pycharm": {
 "stem_cell": {

notebook_examples/api_keys-authentication.ipynb

Lines changed: 3 additions & 3 deletions

@@ -7,7 +7,7 @@
 "@notebook{api_keys-authentication.ipynb,\n",
 " title: API Keys,\n",
 " summary: Configure and test API key authentication, attach keys to user account through Oracle's identity service, and test access to the API.,\n",
-" developed on: generalml_p37_cpu_v1,\n",
+" developed on: generalml_p38_cpu_v1,\n",
 " keywords: authentication, api keys, iam, access management,\n",
 " license: Universal Permissive License v 1.0\n",
 "}"
@@ -43,7 +43,7 @@
 "\n",
 "The notebook session user is `datascience` and this user has no Oracle Cloud Infrastructure (OCI) Identity and Access Management (IAM) identity. Therefore, it cannot access OCI resources outside of the notebook. However, OCI provides two methods for making authenticated API calls to access OCI resources: resource principals and API (public and private) keys. This notebook demonstrates how to generate API keys.\n",
 "\n",
-"Compatible conda pack: [General Machine Learning](https://docs.oracle.com/en-us/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
+"Compatible conda pack: [General Machine Learning](https://docs.oracle.com/en-us/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.8 (version 1.0)\n",
 "\n",
 "---\n",
 "\n",
@@ -387,7 +387,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.8"
 },
 "toc-autonumbering": false,
 "vscode": {

notebook_examples/audi-autonomous_driving-oracle_open_data.ipynb

Lines changed: 2 additions & 2 deletions

@@ -46,7 +46,7 @@
 "\n",
 "This notebook demonstrates how to download the dataset from Oracle Cloud Infrastructure (OCI) Object Storage and work with the JSON configuration file. It also demonstrates how to process image and LiDAR data, and display them.\n",
 "\n",
-"Developed on [Computer Vision](https://docs.oracle.com/en-us/iaas/data-science/using/conda-com-vision-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
+"Compatible conda pack: [Computer Vision](https://docs.oracle.com/en-us/iaas/data-science/using/conda-com-vision-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
 "\n",
 "---\n",
 "\n",
@@ -497,7 +497,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.8"
 },
 "pycharm": {
 "stem_cell": {

notebook_examples/big_data_service-(BDS)-kerberos.ipynb

Lines changed: 2 additions & 2 deletions

@@ -47,7 +47,7 @@
 "\n",
 "The Oracle Big Data Service ([BDS](https://docs.oracle.com/en-us/iaas/Content/bigdata/home.htm)) is an Oracle Cloud Infrastructure (OCI) service that is designed for big data use cases and supports Hadoop and Spark. BDS has features such as HDFS and Hive. You can use BDS for short-lived clusters used to tackle specific tasks, and long-lived clusters that manage large data lakes. To connect to BDS from a notebook session, the cluster must have Kerberos enabled. This notebook demonstrates how to configure Kerberos authentication using ADS.\n",
 "\n",
-"Developed on [PySpark 3.0 and Data Flow](https://docs.oracle.com/en-us/iaas/data-science/using/conda-pyspark-fam.htm) for CPU on Python 3.7 (version 5.0)\n",
+"Compatible conda pack: [PySpark 3.0 and Data Flow](https://docs.oracle.com/en-us/iaas/data-science/using/conda-pyspark-fam.htm) for CPU on Python 3.7 (version 5.0)\n",
 "\n",
 "---\n",
 "\n",
@@ -447,7 +447,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.8"
 },
 "vscode": {
 "interpreter": {

notebook_examples/big_data_service-(BDS)-livy.ipynb

Lines changed: 2 additions & 2 deletions

@@ -46,7 +46,7 @@
 "\n",
 "[Oracle Big Data Service](https://docs.oracle.com/en-us/iaas/Content/bigdata/home.htm) (BDS) is a fully managed Oracle Cloud Infrastructure (OCI) service that provides long-lived [Apache Hadoop](https://hadoop.apache.org/) and [Apache Spark](https://spark.apache.org/) clusters. You can easily create secure and scalable Spark-based data lakes to process data at scale. This notebook demonstrates how to use [Apache Livy](https://livy.apache.org/) to interactively work with a BDS Spark cluster. Two techniques are demonstrated, [SparkMagic](https://github.com/jupyter-incubator/sparkmagic) and a [REST](https://en.wikipedia.org/wiki/Representational_state_transfer) interface.\n",
 "\n",
-"Developed on [PySpark 3.0 and Data Flow](https://docs.oracle.com/en-us/iaas/data-science/using/conda-pyspark-fam.htm) for CPU on Python 3.7 (version 5.0)\n",
+"Compatible conda pack: [PySpark 3.0 and Data Flow](https://docs.oracle.com/en-us/iaas/data-science/using/conda-pyspark-fam.htm) for CPU on Python 3.7 (version 5.0)\n",
 "\n",
 "---\n",
 "\n",
@@ -725,7 +725,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.8"
 },
 "vscode": {
 "interpreter": {

notebook_examples/caltech-pedestrian_detection-oracle_open_data.ipynb

Lines changed: 3 additions & 3 deletions

@@ -7,7 +7,7 @@
 "@notebook{caltech-pedestrian_detection-oracle_open_data.ipynb,\n",
 " title: Caltech Pedestrian Detection Benchmark Repository,\n",
 " summary: Download and process annotated video data of vehicles and pedestrians.,\n",
-" developed on: generalml_p37_cpu_v1,\n",
+" developed on: generalml_p38_cpu_v1,\n",
 " keywords: caltech, pedestrian detection, oracle open data,\n",
 " license: Universal Permissive License v 1.0\n",
 "}"
@@ -46,7 +46,7 @@
 "\n",
 "This notebook demonstrates how to download the data from Oracle Cloud Infrastructure (OCI) Object Storage. It helps you understand the data and extract images from `.seq` files to a target folder.\n",
 "\n",
-"Developed on [General Machine Learning](https://docs.oracle.com/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
+"Compatible conda pack: [General Machine Learning](https://docs.oracle.com/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.8 (version 1.0)\n",
 "\n",
 "---\n",
 "\n",
@@ -352,7 +352,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.8"
 },
 "pycharm": {
 "stem_cell": {

notebook_examples/data_labeling-text_classification.ipynb

Lines changed: 2 additions & 2 deletions

@@ -52,7 +52,7 @@
 "\n",
 "The purpose of the `data_labeling` module is to provide an efficient and convenient way for users to utilize OCI DLS in a notebook session.\n",
 "\n",
-"Developed on [Natural Language Processing](https://docs.oracle.com/iaas/data-science/using/conda-nlp-fam.htm) for CPU on Python 3.7 (version 2.0)\n",
+"Compatible conda pack: [Natural Language Processing](https://docs.oracle.com/iaas/data-science/using/conda-nlp-fam.htm) for CPU on Python 3.7 (version 2.0)\n",
 "\n",
 "---\n",
 "\n",
@@ -440,7 +440,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.8"
 },
 "vscode": {
 "interpreter": {

notebook_examples/document-text_extraction.ipynb

Lines changed: 2 additions & 2 deletions

@@ -45,7 +45,7 @@
 "\n",
 "The Accelerated Data Science (ADS) SDK provides a text extraction module. This module allows you to convert PDF and Microsoft Word files into plain text. The data is stored in Pandas dataframes so you can easily manipulate and save it. In this notebook, you read files of various file formats, and convert them into different formats to use for text manipulation. The notebook reviews several of the most common `DataLoader` commands and showcases some advanced features such as defining a custom backend and file processor.\n",
 "\n",
-"Developed on [Natural Language Processing](https://docs.oracle.com/iaas/data-science/using/conda-nlp-fam.htm) for CPU on Python 3.7 (version 2.0)\n",
+"Compatible conda pack: [Natural Language Processing](https://docs.oracle.com/iaas/data-science/using/conda-nlp-fam.htm) for CPU on Python 3.7 (version 2.0)\n",
 "\n",
 "***\n",
 "\n",
@@ -713,7 +713,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.8"
 },
 "vscode": {
 "interpreter": {

notebook_examples/genome_visualization-oracle_open_data.ipynb

Lines changed: 3 additions & 3 deletions

@@ -7,7 +7,7 @@
 "@notebook{genome_visualization-oracle_open_data.ipynb,\n",
 " title: Visual Genome Repository,\n",
 " summary: Load visual data, define regions, and visualize objects using metadata to connect structured images to language.,\n",
-" developed on: generalml_p37_cpu_v1,\n",
+" developed on: generalml_p38_cpu_v1,\n",
 " keywords: object annotation, genome visualization, oracle open data\n",
 " license: Universal Permissive License v 1.0 (https://oss.oracle.com/licenses/upl/)\n",
 "}"
@@ -46,7 +46,7 @@
 "\n",
 "This notebook demonstrates how to download images and objects from Oracle Cloud Infrastructure (OCI) Object Storage, build a dataframe from the JSON metadata files, access the image data, define a region, and finally visualize regions along with their descriptions on a chosen image.\n",
 "\n",
-"Developed on [General Machine Learning](https://docs.oracle.com/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
+"Developed on [General Machine Learning](https://docs.oracle.com/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.8 (version 1.0)\n",
 "\n",
 "---\n",
 "\n",
@@ -353,7 +353,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.8"
 },
 "pycharm": {
 "stem_cell": {

notebook_examples/hyperparameter_tuning.ipynb

Lines changed: 5 additions & 5 deletions

@@ -7,7 +7,7 @@
 "@notebook{hyperparameter_tuning.ipynb,\n",
 " title: Introduction to ADSTuner,\n",
 " summary: Use ADSTuner to optimize an estimator using the scikit-learn API,\n",
-" developed on: generalml_p37_cpu_v1,\n",
+" developed on: generalml_p38_cpu_v1,\n",
 " keywords: hyperparameter tuning,\n",
 " license: Universal Permissive License v 1.0\n",
 "}"
@@ -44,7 +44,7 @@
 "\n",
 "A hyperparameter is a parameter that is used to control a learning process. This is in contrast to other parameters that are learned in the training process. The process of hyperparameter optimization is to search for hyperparameter values by building many models and assessing their quality. This notebook provides an overview of the `ADSTuner` hyperparameter optimization engine. `ADSTuner` can optimize any estimator object that follows the [scikit-learn API](https://scikit-learn.org/stable/modules/classes.html).\n",
 "\n",
-"Developed on [General Machine Learning](https://docs.oracle.com/en-us/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
+"Compatible conda pack: [General Machine Learning](https://docs.oracle.com/en-us/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.8 (version 1.0)\n",
 "\n",
 "## Contents:\n",
 "\n",
@@ -643,7 +643,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3.6.8 64-bit",
+"display_name": "Python 3.11.0 64-bit",
 "language": "python",
 "name": "python3"
 },
@@ -657,11 +657,11 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.8"
+"version": "3.11.0"
 },
 "vscode": {
 "interpreter": {
-"hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
+"hash": "1a1af0ee75eeea9e2e1ee996c87e7a2b11a0bebd85af04bb136d915cefc0abce"
 }
 }
 },
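
A sketch of the tuning loop, assuming ADS's documented `ads.hpo` import paths and a scikit-learn estimator:

```python
from ads.hpo.search_cv import ADSTuner
from ads.hpo.stopping_criterion import NTrials
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier

X, y = load_iris(return_X_y=True)

# Tune any estimator that follows the scikit-learn API; stop after 10 trials.
tuner = ADSTuner(SGDClassifier(), cv=3)
tuner.tune(X, y, exit_criterion=[NTrials(10)], synchronous=True)
print(tuner.best_params)
```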

notebook_examples/model_evaluation-with-ADSEvaluator.ipynb

Lines changed: 2 additions & 2 deletions

@@ -7,7 +7,7 @@
 "@notebook{model_evaluation-with-ADSEvaluator.ipynb,\n",
 " title: Model Evaluation with ADSEvaluator,\n",
 " summary: Train and evaluate different types of models: binary classification using an imbalanced dataset, multi-class classification using a synthetically generated dataset consisting of three equally distributed classes, and a regression using a synthetically generated dataset with positive targets.,\n",
-" developed on: generalml_p37_cpu_v1,\n",
+" developed on: generalml_p38_cpu_v1,\n",
 " keywords: model evaluation, binary classification, regression, multi-class classification, imbalanced dataset, synthetic dataset,\n",
 " license: Universal Permissive License v 1.0\n",
 "}"
@@ -46,7 +46,7 @@
 "\n",
 "Specifically, the notebook focuses on binary classification using an imbalanced dataset, multi-class classification using a synthetically generated dataset consisting of three equally distributed classes, and lastly a regression using a synthetically generated dataset with positive targets. The training is done using a standard library; the models are then evaluated using `ADSEvaluator`.\n",
 "\n",
-"Developed on [General Machine Learning](https://docs.oracle.com/en-us/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
+"Compatible conda pack: [General Machine Learning](https://docs.oracle.com/en-us/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.8 (version 1.0)\n",
 "\n",
 "## Contents:\n",
 "\n",

notebook_examples/model_version_set.ipynb

Lines changed: 2 additions & 2 deletions

@@ -49,7 +49,7 @@
 "\n",
 "Use the ``.create()`` method to create a model version set in your tenancy. If the model version set already exists in the model catalog, then use the ``.from_id()`` and ``from_name()`` methods to create a ``ModelVersionSet`` object based on the specified model version set. If you make changes to the metadata associated with the model version set, use the ``.update()`` method to push those changes to the model catalog. The ``.list()`` method lists all model version sets. To add an existing model to a model version set, use the ``.add_model()`` method. The ``.models()`` method lists the models in the model version set. Use the ``.delete()`` method to delete a model version set from the model catalog.\n",
 "\n",
-"Developed on conda environment: ``Oracle Database and Data Exploration for CPU Python 3.8``\n",
+"Compatible conda pack: [Oracle Database and Data Exploration](https://docs.oracle.com/en-us/iaas/data-science/using/conda-dem-fam.htm) for CPU on Python 3.8\n",
 "\n",
 "---\n",
 "\n",
@@ -535,7 +535,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.1"
+"version": "3.10.8"
 },
 "vscode": {
 "interpreter": {

notebook_examples/natural_language_processing.ipynb

Lines changed: 4 additions & 4 deletions

@@ -46,7 +46,7 @@
 "\n",
 "Data scientists need to be able to quickly and easily manipulate strings. The Accelerated Data Science (ADS) SDK provides an enhanced string class, called `ADSString`. It adds functionality like regular expression (RegEx) matching and natural language processing (NLP) parsing. The class can be expanded by registering custom plugins so that you can process a string in a way that fits your specific needs. For example, you can register the OCI AI Language service plugin to bind functionalities from the Language service to `ADSString`.\n",
 "\n",
-"Developed on [Natural Language Processing](https://docs.oracle.com/iaas/data-science/using/conda-nlp-fam.htm) for CPU on Python 3.7 (version 2.0)\n",
+"Compatible conda pack: [Natural Language Processing](https://docs.oracle.com/iaas/data-science/using/conda-nlp-fam.htm) for CPU on Python 3.7 (version 2.0)\n",
 "\n",
 "---\n",
 "\n",
@@ -924,7 +924,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3.6.8 64-bit",
+"display_name": "Python 3.11.0 64-bit",
 "language": "python",
 "name": "python3"
 },
@@ -938,11 +938,11 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.8"
+"version": "3.11.0"
 },
 "vscode": {
 "interpreter": {
-"hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
+"hash": "1a1af0ee75eeea9e2e1ee996c87e7a2b11a0bebd85af04bb136d915cefc0abce"
 }
 }
 },
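
A sketch of `ADSString` usage, assuming the import path documented for ADS and the NLTK backend:

```python
from ads.feature_engineering.adsstring.string import ADSString

# Select the parser backing the NLP properties (NLTK assumed installed).
ADSString.nlp_backend("nltk")

s = ADSString("Walking my dog on a breezy day is the best way to recharge.")
print(s.noun)     # nouns detected in the text
print(s.pos)      # part-of-speech tags
print(s.lower())  # ADSString is still a str, so str methods work
```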

notebook_examples/pipelines-ml_lifecycle.ipynb

Lines changed: 2 additions & 2 deletions

@@ -8,7 +8,7 @@
 "@notebook{pipelines-ml_lifecycle.ipynb,\n",
 " title: Working with Pipelines [Limited Availability],\n",
 " summary: Create and use ML pipelines through the entire machine learning lifecycle,\n",
-" developed on: generalml_p37_cpu_v1,\n",
+" developed on: generalml_p38_cpu_v1,\n",
 " keywords: pipelines, pipeline step, jobs pipeline, \n",
 " license: Universal Permissive License v 1.0\n",
 "}"
@@ -50,7 +50,7 @@
 "\n",
 "This notebook uses the Accelerated Data Science (ADS) SDK to construct, control, and leverage pipelines within the Oracle Data Science service.\n",
 "\n",
-"Developed on [General Machine Learning](https://docs.oracle.com/en-us/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.7 (version 1.0)\n",
+"Compatible conda pack: [General Machine Learning](https://docs.oracle.com/en-us/iaas/data-science/using/conda-gml-fam.htm) for CPU on Python 3.8 (version 1.0)\n",
 "\n",
 "---\n",
 "\n",

notebook_examples/pyspark-data_catalog-hive_metastore-data_flow.ipynb

Lines changed: 4 additions & 4 deletions

@@ -46,7 +46,7 @@
 "\n",
 "This notebook demonstrates how to write and test a Data Flow batch application using the Oracle Cloud Infrastructure (OCI) Data Catalog Metastore. [Oracle Cloud Infrastructure (OCI) Data Catalog](https://docs.oracle.com/en-us/iaas/data-catalog/home.htm) is a metadata management service that helps data professionals discover data and support data governance. The [Data Catalog Hive Metastore](https://docs.oracle.com/en-us/iaas/data-catalog/using/metastore.htm) provides schema definitions for objects in structured and unstructured data assets backed by Object Storage. [Data Flow](https://docs.oracle.com/en-us/iaas/data-flow/using/home.htm) is a fully managed service for running [Apache Spark](https://spark.apache.org/) applications.\n",
 "\n",
-"Developed on [PySpark 3.0 and Data Flow](https://docs.oracle.com/en-us/iaas/data-science/using/conda-pyspark-fam.htm) for CPU on Python 3.7 (version 5.0)\n",
+"Compatible conda pack: [PySpark 3.0 and Data Flow](https://docs.oracle.com/en-us/iaas/data-science/using/conda-pyspark-fam.htm) for CPU on Python 3.7 (version 5.0)\n",
 "\n",
 "---\n",
 "\n",
@@ -412,7 +412,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3.6.8 64-bit",
+"display_name": "Python 3.11.0 64-bit",
 "language": "python",
 "name": "python3"
 },
@@ -426,11 +426,11 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.8"
+"version": "3.11.0"
 },
 "vscode": {
 "interpreter": {
-"hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
+"hash": "1a1af0ee75eeea9e2e1ee996c87e7a2b11a0bebd85af04bb136d915cefc0abce"
 }
 }
 },
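
For flavor, a sketch of a Hive-enabled Spark session; the Data Catalog metastore configuration key is an assumption drawn from Oracle Data Flow docs, not from this notebook:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("dcat-metastore-demo")
    # Assumed key for pointing Spark at the Data Catalog Hive Metastore.
    .config(
        "spark.hadoop.oracle.dcat.metastore.id",
        "ocid1.datacatalogmetastore.oc1..<unique_id>",
    )
    .enableHiveSupport()
    .getOrCreate()
)

# Tables registered in the metastore are now visible to Spark SQL.
spark.sql("SHOW DATABASES").show()
```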
