M5Stack-Avatar

An M5Stack library for rendering avatar faces.

Video: https://www.youtube.com/watch?v=C1Hj9kfY5qc

Japanese: README_ja.md

Features

  • 😐 Draw an avatar face
  • 😄 Expressions (happy, angry, sad, etc.)
  • 😺 Customizable faces
  • 💋 Lip sync
  • 🎨 Color palettes
  • 🔃 Move, zoom, and rotate (see the sketch after this list)
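
The move and zoom features come down to a couple of setters on the Avatar object. Here is a minimal sketch; it assumes the setPosition(top, left) and setScale() method names used in the bundled examples, so check the examples directory for the exact API.

#include <M5Stack.h>
#include <Avatar.h>

using namespace m5avatar;

Avatar avatar;

void setup() {
  M5.begin();
  avatar.init();               // start drawing
  avatar.setPosition(-60, 0);  // move the face (top offset, left offset in pixels) -- assumed signature
  avatar.setScale(0.6);        // zoom the face out -- assumed setter name
}

void loop() {}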

Installation

Prerequisites

Using Arduino IDE

  • In the Arduino IDE, select "Sketch > Include Library > Manage Libraries..."
  • Search for "m5stack avatar"
  • Select "M5Stack_Avatar" from the results, then click "Install"
  • The library is now installed and ready to include

Using PlatformIO

  • Initialize your Platform IO project
mkdir my-avatar
cd my-avatar
platformio init -d . -b m5stack-core-esp32
  • Install the library and its dependency
platformio lib install M5Stack
platformio lib install M5Stack-Avatar
  • The library is downloaded from the repository into the .piolibdeps directory

Usage

#include <M5Stack.h>
#include <Avatar.h>

using namespace m5avatar;

Avatar avatar;

void setup()
{
  M5.begin();
  avatar.init(); // start drawing
}

void loop()
{
  // the avatar's face is drawn in a background task,
  // so there is no need to render it on every loop iteration
}
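
Because drawing runs in the background, loop() only needs to update the avatar's state when something changes. Below is a minimal sketch reacting to the buttons; it assumes the setExpression() method and Expression enum values (Happy, Angry, Neutral, ...) used in the bundled examples.

#include <M5Stack.h>
#include <Avatar.h>

using namespace m5avatar;

Avatar avatar;

void setup() {
  M5.begin();
  avatar.init();
}

void loop() {
  M5.update();  // poll the buttons
  if (M5.BtnA.wasPressed()) {
    // only the state changes here; the background task redraws the face
    avatar.setExpression(Expression::Angry);
  }
  if (M5.BtnB.wasPressed()) {
    avatar.setExpression(Expression::Neutral);
  }
}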

Using LipSync

  • Set up AquesTalk-ESP32 (http://blog-yama.a-quest.com/?eid=970195).

    • (For Kanji text-to-speech) Copy the dictionary file from the link above to the microSD card.
    • You don't need to copy the AquesTalkTTS files; they are included in this library.
  • Write the code below to open the avatar's mouth in sync with the audio output.

#include <AquesTalkTTS.h>
#include <M5Stack.h>
#include <Avatar.h>
#include <tasks/LipSync.h>

using namespace m5avatar;

// AquesTalk license key
// A NULL or invalid value is simply ignored
const char* AQUESTALK_KEY = "XXXX-XXXX-XXXX-XXXX";
Avatar avatar;

void setup() {
  int iret;
  M5.begin();
  // For Kanji-to-speech mode (requires dictionary file saved on microSD)
  // iret = TTS.createK(AQUESTALK_KEY);
  iret = TTS.create(AQUESTALK_KEY);
  avatar.init();
  avatar.addTask(lipSync, "lipSync");
}

void loop() {
  M5.update();
  if (M5.BtnA.wasPressed()) {
    // For Kanji-to-speech mode
    // TTS.play("こんにちは。", 80);
    TTS.play("konnichiwa", 80);
  }
}
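
The lipSync task runs in the background and opens the avatar's mouth according to the audio output (see tasks/LipSync.h). You can register your own background behaviour with addTask() the same way. Below is a minimal sketch of a custom task (hypothetical name talkRhythm); it assumes the FreeRTOS-style void(void*) task signature used by lipSync and a setMouthOpenRatio() setter on Avatar.

#include <M5Stack.h>
#include <Avatar.h>

using namespace m5avatar;

Avatar avatar;

// Hypothetical custom task: open and close the mouth on a fixed rhythm.
// Assumes Avatar::setMouthOpenRatio(float), with 0.0 = closed and 1.0 = fully open.
void talkRhythm(void *args) {
  for (;;) {
    avatar.setMouthOpenRatio(1.0);
    delay(200);
    avatar.setMouthOpenRatio(0.0);
    delay(200);
  }
}

void setup() {
  M5.begin();
  avatar.init();
  avatar.addTask(talkRhythm, "talkRhythm");  // registered the same way as lipSync
}

void loop() {}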

Further usage

See the examples directory.
