embml

Embedded Machine Learning Library — pure C, zero dependencies, no malloc.

Designed for mid-range MCUs: ESP32, STM32F4/F7, RP2040, Arduino Mega, and anything with 128 KB+ SRAM and a hardware FPU. All source files live in src/. Drop them into your project or install as an Arduino library.

Generated with Claude Sonnet 4.5 by @hejhdiss


License

MIT — see LICENSE


Files in src/

File                  What it does
embml_config.h        Global types (embml_float_t), status codes, activation helpers
embml_linear.c/.h     Online linear regression via SGD
embml_logistic.c/.h   Binary logistic regression via SGD
embml_lms.c/.h        LMS and Normalised LMS (NLMS) adaptive filters
embml_rls.c/.h        Recursive Least Squares with forgetting factor
embml_iqr.c/.h        Incremental QR decomposition (Givens rotations, numerically robust)
embml_nn.c/.h         Compact feedforward MLP — backprop, Xavier init, gradient clipping
embml_gru.c/.h        Minimal GRU cell for time-series inference
embml_esn.c/.h        Echo State Network — fixed reservoir, RLS-trained readout
embml.h               Umbrella header — include this one

Quick Start

Include the umbrella header and provide your own buffers — the library never calls malloc.

#include "embml.h"

Examples

Linear Regression (SGD)

#define N_FEAT 4

float weights[N_FEAT];
LinearModel model;

linear_init(&model, N_FEAT, 0.01f, weights);

// Training — one sample at a time
float x[] = {1.0f, 2.0f, 0.5f, 3.1f};
float y   = 7.4f;
linear_update(&model, x, y);

// Inference
float prediction = linear_predict(&model, x);

Logistic Regression (Binary Classification)

#define N_FEAT 4

float weights[N_FEAT];
LogisticModel model;

logistic_init(&model, N_FEAT, 0.01f, weights);

float x[]   = {0.2f, 1.5f, -0.3f, 0.8f};
float label = 1.0f;          // 0.0 or 1.0
logistic_update(&model, x, label);

uint8_t cls  = logistic_classify(&model, x);   // 0 or 1
float   prob = logistic_predict(&model, x);    // probability in (0,1)

LMS Adaptive Filter

#define N 8

float weights[N];
LMSModel model;

// Plain LMS
lms_init(&model, N, 0.01f, weights);

// Or Normalised LMS (stable, no lr tuning)
lms_init_nlms(&model, N, 0.5f, 1e-6f, weights);

float x[N]   = { /* sensor readings */ };
float target = 0.0f;   /* desired output for this sample */
lms_update(&model, x, target);
float yhat = lms_predict(&model, x);

Recursive Least Squares (RLS)

#define N 6

float weights[N];
float P[N * N];
float k_scratch[N];
RLSModel model;

rls_init(&model, N, 0.98f, 1000.0f, weights, P);

float x[N]     = { /* features */ };
float y_target = 0.0f;   /* desired output for this sample */
rls_update(&model, x, y_target, k_scratch);
float yhat = rls_predict(&model, x);

Incremental QR (most numerically stable least-squares)

#define N 6

float R[N * N], f[N];
float w[N], scratch[2 * N];
IQRModel model;

iqr_init(&model, N, 0.98f, 1e-4f, R, f);

float x[N]     = { /* features */ };
float y_target = 0.0f;   /* desired output for this sample */
iqr_update(&model, x, y_target, scratch);

// Solve for weights after collecting enough samples
iqr_solve(&model, w, scratch);
float yhat = iqr_predict(w, x, N);

Feedforward Neural Network (MLP)

#define L0 4   // inputs
#define L1 8   // hidden neurons
#define L2 1   // outputs

float W0[L1*L0], b0[L1], a1[L1], d1[L1];
float W1[L2*L1], b1[L2], a2[L2], d2[L2];
float input[L0];

NNLayer layers[2] = {
    { W0, b0, a1, d1, L0, L1, EMBML_ACT_RELU    },
    { W1, b1, a2, d2, L1, L2, EMBML_ACT_SIGMOID },
};
NNModel net;

nn_init(&net, layers, 2, input, L0, 0.01f, 1.0f);

float x[L0]      = {1.0f, 0.5f, -0.2f, 0.8f};
float target[L2] = {1.0f};

// Train
nn_train_sample(&net, x, target);

// Inference only
const embml_float_t *out = nn_forward(&net, x);

GRU Cell (Time-Series Inference)

#define X_SZ  4
#define H_SZ  8

// Weights loaded from flash (trained offline)
float Wz[H_SZ*X_SZ], Wr[H_SZ*X_SZ], Wn[H_SZ*X_SZ];
float Uz[H_SZ*H_SZ], Ur[H_SZ*H_SZ], Un[H_SZ*H_SZ];
float bz[H_SZ], br[H_SZ], bn[H_SZ];
float h_state[H_SZ];
float scratch[3 * H_SZ];

GRUCell cell;
gru_init(&cell, X_SZ, H_SZ,
         Wz, Wr, Wn, Uz, Ur, Un,
         bz, br, bn, h_state, scratch);

// Call once per time step
float x_t[X_SZ] = { /* sensor sample */ };
gru_step(&cell, x_t);

// Hidden state in cell.h[0..H_SZ-1]

Echo State Network (Reservoir Computing)

#define X_SZ  4
#define H_SZ 32
#define Y_SZ  1

// Fixed reservoir weights (random, stored in flash)
const float W_in[H_SZ * X_SZ]  = { /* ... */ };
const float W_res[H_SZ * H_SZ] = { /* ... */ };

float W_out[Y_SZ * H_SZ];
float state[H_SZ], scratch[H_SZ];
float P[H_SZ * H_SZ], k[H_SZ];

ESNModel esn;
RLSModel rls;

esn_init(&esn, X_SZ, H_SZ, Y_SZ,
         W_in, W_res, 0.9f,
         state, scratch, W_out);
esn_rls_init(&esn, &rls, 0.98f, 1000.0f, P, k);

// Online training
float x[X_SZ] = { /* input */ };
float y[Y_SZ] = { /* target */ };
esn_update_state(&esn, x);
esn_rls_update(&esn, y);

// Inference
float y_out[Y_SZ];
esn_update_state(&esn, x);
esn_predict(&esn, y_out);

Design Principles

  • Pure C99 — compiles on GCC-ARM, SDCC, AVR toolchain, MSVC
  • Zero dynamic allocation — no malloc, no free; caller owns all buffers
  • Dependencies: stdint.h, string.h, math.h only
  • embml_float_t defaults to float; define EMBML_USE_DOUBLE to switch
  • Arduino compatible (architectures=*), all files flat in src/
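If the float-switch macro works as described above, enabling double precision is a one-line build setting (a config fragment, not runnable on its own):

```c
/* Before any embml include, in your build flags or a project config header: */
#define EMBML_USE_DOUBLE   /* embml_float_t becomes double */
#include "embml.h"
```

On FPU-equipped MCUs like the STM32F4, note that the hardware FPU is typically single-precision only, so double math falls back to software routines.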

