@@ -1,3 +1,22 @@
# Changelog

## 2025 R2.1
This file lists the changes introduced in the AVxcelerate Explore API v0.1.0 released in 2025 R2.

The AVxcelerate Explore API v0.1.0 is compatible with the AVX Architecture V2.

The AVxcelerate Explore API v0.1.0 allows you to perform operations related to jobs and create post job sessions.

## Features

### Jobs:
The API allows you to perform the following operations related to jobs:
* Create a new job with the provided parameters
* Get one or more jobs by their IDs
* Kill a running job
* Restart a completed or killed job
* Clone a job

### Post Job Sessions:
* Create a post job session for a job; the API returns the relative URL for accessing the session.
* The session can then be used for post-processing operations such as viewing results and analyzing data (a sketch of the job workflow follows below).
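
To make these operations concrete, here is a minimal, hedged sketch using the Python package documented below. Only the `JobsApi` import and the asyncio pattern come from the documented example; the method names (`create_job`, `get_job`, `kill_job`) and their signatures are illustrative assumptions, not confirmed API.

```python
# Hedged sketch: method names and signatures below are assumptions for
# illustration; only JobsApi and the asyncio pattern come from the docs.
import asyncio

from ansys.api.avxcelerate.autonomy.explore_service.v1.api.jobs_api import JobsApi


async def main():
    jobs_api = JobsApi()  # client configuration omitted for brevity

    # Create a new job with the provided parameters (hypothetical signature).
    job = await jobs_api.create_job({"name": "my-simulation"})

    # Get a single job by its ID (hypothetical signature).
    job = await jobs_api.get_job(job.id)

    # Kill the job if it is still running (hypothetical signature).
    await jobs_api.kill_job(job.id)


asyncio.run(main())
```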


@@ -1,12 +1,26 @@
# AVx Python APIs Documentation

## Features

### Jobs:
The API allows you to perform the following operations related to jobs:
* Create a new job with the provided parameters
* Get one or more jobs by their IDs
* Kill a running job
* Restart a completed or killed job
* Clone a job

### Post Job Sessions:
* Create a post job session for a job; the API returns the relative URL for accessing the session.
* The session can then be used for post-processing operations such as viewing results and analyzing data.


## Introduction

The AVx Python APIs are hosted as a Python package on a cluster as part of the Explore service deployment. Developers can install the package using pip and use it to call AVx autonomy APIs without the need to make raw REST calls.

PyPI Registry URL:

The Python package is hosted in a PyPI-compliant registry on each deployed cluster. The registry URL has the following form:

https://BASE_URL/pypi

@@ -18,7 +32,8 @@ For example, for AFT deployment:

Prerequisites:

We assume that the system is running **Ubuntu 22.04** and that the following tools are already installed:

- python 3.10
- pip 25.1
@@ -1,3 +1,24 @@
# Changelog

## 2025 R2.1
This file lists the changes introduced in the AVxcelerate Resource Manager API v0.1.0 released in 2025 R2.

The AVxcelerate Resource Manager API v0.1.0 is compatible with the AVX Architecture V2.

This API allows you to perform CRUD (Create, Read, Update, and Delete) operations on resources such as queues, deployments, applications, and app-runtime-configurations.

## Features

### Queues:
* You can create queues with the required storages, resource limits, and environment variables
* Managing queues helps you configure different applications within resource limits and group together the applications that require the same storage.
* You can adjust the maximum number of workers that can run concurrently on a queue using the `maximum_allowed_worker_instances` parameter

### Plugins:
You can register a plugin with the definition of a container runtime, for example Docker Engine or Kubernetes.

### Jobs:
* You can submit a resource-manager job by providing application details (name, version, image, environment variables, etc.) and track it to completion (a sketch follows below).
* You can also check the status of the job and clean up the resources the job has acquired.
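
For illustration, a minimal sketch of queue creation and job submission over REST might look as follows. The endpoint paths and payload fields are assumptions for illustration (only the `maximum_allowed_worker_instances` parameter is named in this changelog), not the confirmed Resource Manager API.

```python
# Hedged sketch only: endpoint paths and payload fields are assumptions.
import requests

BASE_URL = "https://BASE_URL"  # replace with your cluster's base URL

# Create a queue with a worker cap and environment variables (hypothetical payload).
queue = {
    "name": "gpu-queue",
    "maximum_allowed_worker_instances": 4,  # parameter named in the changelog
    "environment_variables": {"LOG_LEVEL": "info"},
}
response = requests.post(f"{BASE_URL}/resource-manager/v1/queues", json=queue)
response.raise_for_status()

# Submit a job by providing application details, then poll its status.
job = {
    "application": {"name": "my-app", "version": "1.0.0", "image": "my-app:1.0.0"},
    "queue": "gpu-queue",
}
response = requests.post(f"{BASE_URL}/resource-manager/v1/jobs", json=job)
job_id = response.json()["id"]

status = requests.get(f"{BASE_URL}/resource-manager/v1/jobs/{job_id}").json()
print(status)

# Clean up the resources the job acquired (hypothetical endpoint).
requests.delete(f"{BASE_URL}/resource-manager/v1/jobs/{job_id}/resources")
```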



@@ -1,12 +1,28 @@
# AVx Python APIs Documentation

## Features

This API allows you to perform CRUD (Create, Read, Update, and Delete) operations on resources such as queues, deployments, applications, and app-runtime-configurations.

### Queues:
* You can create queues with the required storages, resource limits, and environment variables
* Managing queues helps you configure different applications within resource limits and group together the applications that require the same storage.
* You can adjust the maximum number of workers that can run concurrently on a queue using the `maximum_allowed_worker_instances` parameter

### Plugins:
* You can register a plugin with the definition of a container runtime, for example Docker Engine or Kubernetes.

### Jobs:
* You can submit a resource-manager job by providing application details (name, version, image, environment variables, etc.) and track it to completion.
* You can also check the status of the job and clean up the resources the job has acquired.

## Introduction

The AVxcelerate Python APIs are hosted as a Python package on a cluster as part of the Explore service deployment. Developers can install the package using pip and use it to call AVx autonomy APIs without needing to make raw REST calls.

PyPI Registry URL:

The Python package is hosted in a PyPI-compliant registry on each deployed cluster. The registry URL has the following form:

https://BASE_URL/pypi

@@ -18,7 +34,7 @@ For example, for AFT deployment:

Prerequisites:

We assume that the system is running **Ubuntu 22.04** and that the following tools are already installed:

- python 3.10
- pip 25.1
@@ -28,13 +44,13 @@ We also assume that you are using AVx Autonomy Toolchain version **25R2.1**.

Step 1: Create virtual environment

```bash
$ python -m venv .venv
```

Step 2: Activate the virtual environment

```bash
$ source .venv/bin/activate
```

@@ -43,13 +59,14 @@ Step 3: Install python packages:
- ansys-api-avxcelerate-autonomy
- ansys-avxcelerate-autonomy

```bash
$ pip install ansys-api-avxcelerate-autonomy ansys-avxcelerate-autonomy --extra-index-url https://explore-service.traefik.me:9081/pypi
```

Step 4: Use ansys-api-avxcelerate-autonomy and ansys-avxcelerate-autonomy in your Python code

```python
import asyncio

from ansys.api.avxcelerate.autonomy.explore_service.v1.api.jobs_api import JobsApi
@@ -105,4 +122,5 @@ print(str(ex))
print("Couldn't get job against this job id")

asyncio.run(main())
```

84 changes: 43 additions & 41 deletions 2025R2/dpf-framework-25-r2/core-concepts/dpf-types.md
@@ -63,10 +63,10 @@ The ``data_sources`` holds information on files of interest such as:
- their associated namespace
- their associated key (usually the file extension)

<a id="streams-container"></a>

#### Streaming

The ``streams_container`` is the result of opening a stream to files in a ``data_sources``.
It enables streaming to and from the files and handles caching of data requests to the files.
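
As a sketch in PyDPF-Core (assuming ``ansys-dpf-core`` is installed, a DPF server can be started, and ``model.rst`` is a valid result file), opening a stream might look like this:

```python
# Sketch: open a stream to result files and reuse it for cached data access.
from ansys.dpf import core as dpf

ds = dpf.DataSources("model.rst")  # files of interest

# The streams_provider metadata operator opens a stream to the files.
op = dpf.operators.metadata.streams_provider(data_sources=ds)
streams = op.outputs.streams_container()

# Pass `streams` instead of `ds` to downstream operators so repeated data
# requests are served from the cache rather than by reopening the files.
```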

@@ -76,58 +76,59 @@ The use of a ``streams_container`` is highly recommended whenever possible to be

The following DPF objects hold light-weight data (metadata) relative to other DPF types.

<a id="field-definition"></a>

#### Field metadata

The ``field_definition`` holds metadata relative to a ``field``.

<a id="mesh-info"></a>

#### Mesh metadata

The ``mesh_info`` holds metadata relative to a ``meshed_region``.

Only available for CFF and LSDYNA.

<a id="result-info"></a>

#### Result file metadata

The ``result_info`` holds metadata relative to available results in files from a ``data_sources``.

<a id="support"></a>

### Data supports

Supports are entities dedicated to holding information about the model itself.

Every ``field`` requires a ``support`` for DPF to understand what its data is related to.

This concept allows DPF to manage simulation data efficiently.

<a id="meshed-region"></a>

#### Mesh

The ``meshed_region`` holds information relative to the mesh of the simulation model.

It gives access to several fields of data such as:
- the mesh connectivity
- the node coordinates
- the element types

<a id="time-freq-support"></a>

#### Time and frequency

For time-based and pseudo-time-based simulations, or for frequency-based simulations, the ``time_freq_support`` holds information about the
simulation steps and sub-steps, time-steps, or mode/harmonic frequencies.

<a id="cyclic-support"></a>

#### Cyclic

For cyclic simulation models, the ``cyclic_support`` holds information about the number of sectors and the number of stages.

### Filtering
@@ -156,10 +157,10 @@ Each ``field`` data storage type has a ``scoping`` associated to it, describing

The following DPF types allow you to store and describe data.

<a id="generic-data-container"></a>

#### Data map

The ``generic_data_container`` allows you to store any type known to DPF as a property with a given name.
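
As a sketch in PyDPF-Core (assuming a recent ``ansys-dpf-core`` that exposes ``GenericDataContainer``; the property names are arbitrary examples):

```python
from ansys.dpf import core as dpf

# Store any DPF-known type as a named property and read it back.
gdc = dpf.GenericDataContainer()
gdc.set_property("stage", 2)      # an integer property
gdc.set_property("unit", "Pa")    # a string property
print(gdc.get_property("stage"))  # -> 2
```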

#### Data tree
@@ -168,10 +169,10 @@ The ``generic_data_container`` allows you to store any type known to DPF as a pr

The ``data_tree`` allows you to store DPF known types as named attributes of a data tree with sub-trees.

<a id="arrays"></a>

#### Data arrays

The following types represent arrays of data.

The data of a ``field`` is always associated to a ``scoping`` (entities associated to each value) and ``support`` (subset of the model where the data is), making the ``field`` a self-describing piece of data.
@@ -180,28 +181,28 @@ Example 1: a ``field`` that describes the evolution in time of the static pressu

Example 2: a ``field`` that describes the evolution in space of the stress at a given body of a structural model has a scoping comprised of the element ids where the stress is defined and a ``meshed_region`` support that contextualizes these element ids.
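
As a sketch in PyDPF-Core (assuming ``ansys-dpf-core`` is installed and a DPF server can be started), a self-describing nodal field might be built like this:

```python
from ansys.dpf import core as dpf

# A scalar field with three values, scoped to the node ids they belong to.
field = dpf.Field(nentities=3, nature=dpf.natures.scalar)
field.scoping = dpf.Scoping(ids=[1, 2, 3], location=dpf.locations.nodal)
field.data = [290.5, 295.0, 301.2]  # one value per scoped node id

print(field.get_entity_data_by_id(2))  # value associated with node id 2
```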

<a id="field"></a>

##### Doubles

The ``field`` represents an array of double values.

<a id="property-field"></a>

##### Integers

The ``property_field`` represents an array of integer values.

<a id="string-field"></a>

##### Strings

The ``string_field`` represents an array of string values.

<a id="custom-type-field"></a>

##### Custom

The ``custom_type_field`` represents an array of values of a custom type as defined by the unitary type of the field.

### Collections
@@ -213,6 +214,7 @@ DPF allows you to group DPF types in labeled collections.
A DPF ``collection`` has a set of associated labels, for which each entry has a value. This allows you to distinguish between entries and retrieve them.

#### Label space

<a id="label-space"></a>

The ``label_space`` is a map of ("label": integer value) couples used to target a subset of entries in a collection.
@@ -221,46 +223,46 @@ For example, if a ``collection`` has labels ``material`` and ``part``, each enti

A ``label_space`` such as ``{"material": X, "part": Y}`` then targets a single entity in the collection, whereas one such as ``{"material": X}`` targets all entries of material "X".
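
As a sketch in PyDPF-Core (assuming ``ansys-dpf-core`` is installed and a DPF server can be started), label spaces select entries of a ``fields_container`` like this:

```python
from ansys.dpf import core as dpf

fc = dpf.FieldsContainer()
fc.labels = ["material", "part"]

fc.add_field({"material": 1, "part": 1}, dpf.Field(nentities=1))
fc.add_field({"material": 1, "part": 2}, dpf.Field(nentities=1))

# A complete label space targets a single entry; a partial one targets all
# entries that match it.
single = fc.get_fields({"material": 1, "part": 2})
all_material_1 = fc.get_fields({"material": 1})
```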

<a id="collection"></a>

#### Base collection

The ``collection`` is the generic type for collections of DPF entities.

<a id="any-collection"></a>

#### Collection of any

The ``any_collection`` is a collection of ``Any`` objects.

<a id="custom-type-fields-container"></a>

#### Collection of fields of custom type

The ``custom_type_fields_container`` is a collection of ``custom_type_field`` instances.

<a id="fields-container"></a>

#### Collection of fields of doubles

The ``fields_container`` is a collection of ``field`` instances.

<a id="property-fields-container"></a>

#### Collection of fields of integers

The ``property_fields_container`` is a collection of ``property_field`` instances.

<a id="meshes-container"></a>

#### Collection of meshes

The ``meshes_container`` is a collection of ``meshed_region`` instances.

<a id="scopings-container"></a>

#### Collection of scopings

The ``scopings_container`` is a collection of ``scoping`` instances.

### Unit systems