soulmachines/smandroidjava
(smandroidjava) Soul Machines Android Java Sample Project

This project shows how to use the Soul Machines Android SDK and pull the library into your own projects. It also shows some of the basic SDK functionality for reference.

Project Setup

  • Open/import this project in Android Studio as a Gradle project.
  • Build the project; it should download all the dependencies automatically.

Importing the library

Add the maven repository

To import the library into your own project, add the following entries to the app/build.gradle file.

 repositories {
     maven {
         url "https://sm-maven-repo-bucket.s3.amazonaws.com"
     }
 }

Import the library

Add the following dependency to the app/build.gradle:

 dependencies {
     implementation 'com.soulmachines.android:smsdk-core:1.3.0'
 }

Library Documentation

Documentation for the core SDK is included and will be available at app/build/docs/smsdk-core/index.html after running the task below:

./gradlew app:getSmSdkDocumentation

In the app/build.gradle there is a gradle documentation configuration defined, along with a dependency that uses it. The getSmSdkDocumentation task uses this configuration to extract the bundled documentation.

 dependencies {
     documentation 'com.soulmachines.android:smsdk-core:1.3.0:docs@zip'
 }

Create and connect the Scene

Using View Containers

  • Create android.view.ViewGroup container views for the remote persona view (required) and the local video view (optional) on your layout xml where the Scene will be rendered
    e.g. activity_main.xml
<FrameLayout      
   android:id="@+id/fullscreenPersonaView"      
   android:layout_width="0dp"      
   android:layout_height="0dp"      
   app:layout_constraintBottom_toBottomOf="parent"      
   app:layout_constraintLeft_toLeftOf="parent"      
   app:layout_constraintRight_toRightOf="parent"      
   app:layout_constraintTop_toTopOf="parent" />
   
<FrameLayout      
   android:id="@+id/pipLocalVideoView"      
   android:layout_width="120dp"      
   android:layout_height="120dp"      
   app:layout_constraintTop_toTopOf="parent"      
   app:layout_constraintLeft_toLeftOf="parent"      
   android:layout_marginLeft="16dp"      
   android:layout_marginTop="16dp" />      
  • Create a Scene object and specify the required UserMedia, then set the views on the Scene where you want to render the video feeds. The second parameter (local video view) is optional and can be null. An example is shown below:
scene = new SceneImpl(this, UserMedia.MicrophoneAndCamera);
scene.setViews(binding.fullscreenPersonaView, binding.pipLocalVideoView);

Using a Custom Layout

  • Create a custom layout xml where the Scene video feeds will be rendered. Ensure it contains child views of type org.webrtc.SurfaceViewRenderer with the predefined ids @id/remote_video_view and @id/local_video_view.
    e.g. custom_scene_layout.xml
<?xml version="1.0" encoding="utf-8"?> 
<androidx.constraintlayout.widget.ConstraintLayout      
    xmlns:android="http://schemas.android.com/apk/res/android"      
    xmlns:app="http://schemas.android.com/apk/res-auto"      
    xmlns:tools="http://schemas.android.com/tools"      
    android:layout_width="match_parent"      
    android:layout_height="match_parent"      
    android:background="@android:color/black">      
      
    <org.webrtc.SurfaceViewRenderer      
        android:id="@id/remote_video_view"      
        android:layout_width="0dp"      
        android:layout_height="0dp"      
        app:layout_constraintBottom_toBottomOf="parent"      
        app:layout_constraintLeft_toLeftOf="parent"      
        app:layout_constraintRight_toRightOf="parent"      
        app:layout_constraintTop_toTopOf="parent"      
        />      
      
    <org.webrtc.SurfaceViewRenderer      
        android:id="@id/local_video_view"      
        android:layout_width="120dp"      
        android:layout_height="120dp"      
        app:layout_constraintBottom_toBottomOf="parent"      
        app:layout_constraintLeft_toLeftOf="parent"      
        android:layout_marginLeft="16dp"
        android:layout_marginBottom="24dp" />
</androidx.constraintlayout.widget.ConstraintLayout> 
  • Include this layout, or embed it directly, in your Activity's layout, e.g. in your activity's layout file: <include android:id="@+id/scene" layout="@layout/custom_scene_layout"/>

  • Create a Scene object and specify the required UserMedia, then set the views on the Scene, this time using the instance of the custom layout you've defined.

scene = new SceneImpl(this, UserMedia.MicrophoneAndCamera);
scene.setViews(binding.scene, binding.scene);

The snippet above uses the same custom layout for both the remote and local video feeds, but you can specify a separate layout for each, as long as each contains the correct predefined id for its child video view.

Connection methods

The SDK supports two connection methods: connecting with an API Key generated through DDNA Studio, and connecting with a web-socket URL and JWT.

Connecting using an API Key

Establish a connection by providing the API Key generated in DDNA Studio. You can optionally provide userText to send a message to the Orchestration server during connection, and a RetryOptions object specifying the number of connection attempts and the delay between attempts, should the connection encounter an error.

scene.connect("DDNA_STUDIO_GENERATED_API_KEY", null, RetryOptions.getDEFAULT());
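The retry behaviour configured by RetryOptions can be pictured as a simple retry loop. The sketch below is a standalone illustration of the general pattern in plain Java, not the SDK's implementation; the attempt count and delay parameters are hypothetical stand-ins for the values RetryOptions carries:

```java
import java.util.function.Supplier;

public class RetrySketch {
    // Runs the supplied action up to maxAttempts times, sleeping delayMs between
    // failed attempts; rethrows the last failure if every attempt fails.
    static <T> T withRetries(Supplier<T> action, int maxAttempts, long delayMs) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(delayMs);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        // Simulated connection that fails twice, then succeeds on the third attempt.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("connection error");
            return "connected";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```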

Connecting using a valid web-socket URL and a valid JWT

scene.connect("wss://dh.soulmachines.cloud", null, "JWT_ACCESS_TOKEN", RetryOptions.getDEFAULT());

Connection Result

On the provided APIs (e.g. Scene and Persona), all asynchronous method calls let you subscribe to the result, whether it was successful or resulted in an error. These methods return a Completable/Cancellable result, which you can subscribe to by passing in a Completion callback. The Completion interface accepts a generic type parameter that determines the response type of a successful result.

Here's an example of a subscription to the scene connection result:

scene.connect(connectionUrl, null, jwtToken, RetryOptions.getDEFAULT()).subscribe(
    new Completion<SessionInfo>() {
        @Override
        public void onSuccess(SessionInfo sessionInfo) {
            runOnUiThread(() -> onConnectedUI());
        }

        @Override
        public void onError(CompletionError completionError) {
            runOnUiThread(() -> displayAlertAndResetUI(
                getString(R.string.connection_error), completionError.getMessage()));
        }
    });

Register event listeners on the Scene

The Scene and Persona APIs also provide a way to register the event listeners needed to interact with the digital human. These follow the pattern add{Type}EventListener and remove{Type}EventListener, both of which take a {Type}EventListener implementation as a parameter.

Here's an example showing a listener for a disconnection event for the Scene:

scene.addDisconnectedEventListener(reason -> runOnUiThread(() -> onDisconnectedUI(reason)));

Scene Messages

One way to interact with a Digital Human is through Scene Messaging. The Scene#addSceneMessageListener API lets you register a listener that is invoked when Scene messages are received. To register a message listener, create an instance of com.soulmachines.android.smsdk.core.scene.message.SceneMessageListener, or alternatively an instance of the adaptor class com.soulmachines.android.smsdk.core.scene.message.SceneMessageListenerAdaptor and override only the specific Scene Messages you are interested in. Here is an example using the SceneMessageListener:

scene.addSceneMessageListener(new SceneMessageListener() {
    @Override
    public void onUserTextEvent(String userText) {
        // userText from server received
    }

    @Override
    public void onStateMessage(SceneEventMessage<StateEventBody> sceneEventMessage) {
        //consume the scene `state` message
    }

    @Override
    public void onRecognizeResultsMessage(SceneEventMessage<RecognizeResultsEventBody> sceneEventMessage) {
        //consume the scene `recognizedResults` message
    }

    @Override
    public void onConversationResultMessage(SceneEventMessage<ConversationResultEventBody> sceneEventMessage) {
        //consume the scene `conversationResult` message
    }
});
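If you only care about one message type, the adaptor class mentioned above keeps the listener short. A sketch, assuming SceneMessageListenerAdaptor supplies no-op defaults for the remaining callbacks:

```java
scene.addSceneMessageListener(new SceneMessageListenerAdaptor() {
    @Override
    public void onConversationResultMessage(SceneEventMessage<ConversationResultEventBody> sceneEventMessage) {
        // consume only the scene `conversationResult` message;
        // the other callbacks keep their no-op defaults from the adaptor
    }
});
```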

Persona API

A Persona instance is the API used to interact with a Digital Human. After a successful connection to a Scene, and once the initial 'state' is established, a Persona instance can be obtained from the Scene#getPersonas() API.

There is also a PersonaReadyListener you can add to the Scene to be notified when the Persona becomes available, rather than polling and waiting for the 'state' event message.

scene.addPersonaReadyListener(p -> {
    persona = p;
});

An example usage of the Persona API (see MainActivity#changeCameraView for the full example):

// make the persona look to the left
if(!scene.getPersonas().isEmpty()) {
    Persona persona = scene.getPersonas().get(0);
    showToastMessage("Changing camera view to the " + direction.toString());
    Log.i(TAG, "CameraView: " + direction.toString());
    persona.animateToNamedCameraWithOrbitPan(getNamedCameraAnimationParam(direction));
}

Feature Flags

The Scene contains a Features object that is populated shortly after the connection has been established. This can be checked to determine whether any DDNA Studio level FeatureFlags have been enabled on the Persona. Supported FeatureFlags are listed in the SDK documentation.

boolean isContentAwarenessSupported = scene.getFeatures().isFeatureEnabled(FeatureFlags.UI_CONTENT_AWARENESS);

Content Awareness

If the Persona has the Content Awareness FeatureFlag enabled in DDNA Studio, classes inheriting from Content can be added to the Scene.getContentAwareness(). When ContentAwareness.syncContentAwareness() is executed, these coordinates are sent to the Persona, and it will glance at or move out of the way of content as appropriate.

To add a Content item to the ContentAwareness, call Scene.getContentAwareness().addContent(content: Content). Content can be removed either by reference or by its String id.

Example:

Rect bounds = new Rect(x, y, width, height);
Content content = new ContentImpl(bounds);
scene.getContentAwareness().addContent(content);
scene.getContentAwareness().syncContentAwareness();

Content Inheritance

To be added to the ContentAwareness, objects need to inherit from Content. This ensures that conforming items provide the information the Persona needs to be aware of their frames within the App. You can also call removeAllContent to remove all content at once.

This information is as follows:

  • getId: A unique identifier for the content. Content with a duplicate ID will replace the existing content. Note that if the ID matches the id provided to showcards(id), the Persona will gesture at the content.
  • getRect: A Rect of the coordinates the content exists at. This is made up of x1, x2, y1, y2.
  • getMeta: A dictionary of metadata to associate with the Content.

See below for examples.

public class ContentImpl implements Content {
    static int uniqueId = 1;

    private final String id = "object-" + Integer.toString(uniqueId++);
    private final Rect bounds;

    public ContentImpl(Rect r) {
        this.bounds = r;
    }

    @NonNull
    @Override
    public String getId() {
        return id;
    }

    @Nullable
    @Override
    public Map<String, Object> getMeta() {
        return null;
    }

    @NonNull
    @Override
    public Rect getRect() {
        return bounds;
    }
}

Example

Note that positions are absolute, and should be determined relative to the root view when getRect() is called.
- '==' and '||' denote the frame of the App Window.
- '--' and '|' denote the frame of the Remote View.
- '<n>' denotes a Content instance.

======================
||  --------------  ||
||  | <1> _      |  ||
||  |    / \     |  ||
||  |    \_/  <2>|  ||
||  |   __^__    |  ||
||  |  /     \   |  ||
||  | /       \  |  ||
||  --------------  ||
||        <3>       ||
======================
Approx example coordinates
<1> x1: 100, y1: 100, x2: 150, y2: 150
- As this content is displayed within the frame of the Remote View, if Content Awareness is enabled it will cut to a different position to attempt to prevent the content appearing on top of the Persona. If the Id of the Content is referenced in conversation, the Persona will gesture at the coordinates.

<2> x1: 300, y1: 200, x2: 350, y2: 250
- As this content is displayed within the frame of the Remote View, if Content Awareness is enabled it will cut to a different position to attempt to prevent the content appearing on top of the Persona. If the Id of the Content is referenced in conversation, the Persona will gesture at the coordinates.

<3> x1: 200, y1: 400, x2: 250, y2: 450
- As this coordinate is outside of the Remote View, the Persona will not need to avoid this.

=====================================
||   ____________________          ||
||  |     _             |   <2>    ||
||  |    / \            |          ||
||  |    \_/     <1>    |          ||
||  |   __^__           |          ||
||  |  /     \          |          ||
||  |_/_______\_________|          ||
=====================================
Approx example coordinates
<1> x1: 300, y1: 150, x2: 350, y2: 200
- As this content may overlap the Persona, if Content Awareness is enabled the Persona will cut to a different position to attempt to prevent the content appearing on top of it. If the Id of the Content is referenced in conversation, the Persona will gesture at the coordinates.

<2> x1: 450, y1: 100, x2: 500, y2: 150
- As this coordinate is outside of the Remote View, the Persona will not need to avoid this.
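The avoidance decision in the examples above comes down to whether a content rectangle intersects the Remote View. A standalone sketch of that check in plain Java (not the SDK's implementation; the remote-view frame below is a hypothetical value roughly matching the first diagram, and the content boxes reuse its approximate coordinates):

```java
public class OverlapCheck {
    // Axis-aligned rectangle in (x1, y1, x2, y2) form, matching getRect above.
    static class Box {
        final int x1, y1, x2, y2;

        Box(int x1, int y1, int x2, int y2) {
            this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
        }

        // Two boxes intersect when they overlap on both axes.
        boolean intersects(Box o) {
            return x1 < o.x2 && o.x1 < x2 && y1 < o.y2 && o.y1 < y2;
        }
    }

    public static void main(String[] args) {
        // Hypothetical Remote View frame.
        Box remoteView = new Box(50, 50, 400, 380);

        Box content1 = new Box(100, 100, 150, 150); // <1>: inside the Remote View
        Box content3 = new Box(200, 400, 250, 450); // <3>: below the Remote View

        System.out.println(remoteView.intersects(content1)); // true  -> Persona avoids it
        System.out.println(remoteView.intersects(content3)); // false -> no avoidance needed
    }
}
```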

Audio/Video Toggle

A Scene instance can be created using the SceneFactory and an initial UserMedia must be specified. The following are available:

  • UserMedia.None
  • UserMedia.Microphone
  • UserMedia.Camera
  • UserMedia.MicrophoneAndCamera

This requires that permission has already been granted for the requested UserMedia, i.e. to use UserMedia.Microphone, the android.permission.RECORD_AUDIO permission must be requested and granted; similarly, UserMedia.Camera requires android.permission.CAMERA, and UserMedia.MicrophoneAndCamera requires both.

The Android framework provides mechanisms for requesting these permissions, and it is the app developer's responsibility to request them at the appropriate time within the app experience.

One approach is to create the Scene instance with UserMedia.None, and then, as soon as the user asks to enable the microphone or camera, request the necessary permission and call the updateUserMedia() method on the Scene instance to update it based on the permissions granted.
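The mapping from granted permissions to a UserMedia value can be sketched in plain Java. The UserMedia enum below is a local stand-in mirroring the SDK's four values, defined only to keep the example self-contained; a real app would pass the selected value to scene.updateUserMedia(...) after the permission prompt:

```java
public class UserMediaSelector {
    // Local stand-in for the SDK's UserMedia enum, for illustration only.
    enum UserMedia { None, Microphone, Camera, MicrophoneAndCamera }

    // Picks the richest UserMedia the granted permissions allow.
    static UserMedia select(boolean micGranted, boolean cameraGranted) {
        if (micGranted && cameraGranted) return UserMedia.MicrophoneAndCamera;
        if (micGranted) return UserMedia.Microphone;
        if (cameraGranted) return UserMedia.Camera;
        return UserMedia.None;
    }

    public static void main(String[] args) {
        // e.g. only RECORD_AUDIO was granted:
        System.out.println(select(true, false));
        // e.g. both RECORD_AUDIO and CAMERA were granted:
        System.out.println(select(true, true));
    }
}
```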

Licensing

This repository is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

Issues

For any issues, please reach out to one of our Customer Success team members.
