Skip to content
Draft
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension


Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
2 changes: 1 addition & 1 deletion .github/workflows/build-opencv.yml
Original file line number Diff line number Diff line change
Expand Up @@ -37,7 +37,7 @@ jobs:
- name: Build opencv.js
run: |
source emsdk/emsdk_env.sh
emcmake python opencv/platforms/js/build_js.py build_js --build_flags="-s WASM_ASYNC_COMPILATION=0"
emcmake python opencv/platforms/js/build_js.py build_js --cmake_option="-DOPENCV_ENABLE_NONFREE=ON" --build_flags="-s WASM_ASYNC_COMPILATION=0"

- name: Upload opencv_js
uses: actions/upload-artifact@v4
Expand Down
43 changes: 43 additions & 0 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -89,6 +89,49 @@ module.exports = {

The TypeScript type declarations may not be up to date with the latest OpenCV.js. Refer to [cvKeys.json](doc/cvKeys.json) to check the available methods and properties at runtime.

# Feature Detection (SIFT, SURF, etc.)

This package now includes TypeScript definitions for SIFT (Scale-Invariant Feature Transform) and other feature detection algorithms.

**Note**: SIFT and SURF require building OpenCV with `OPENCV_ENABLE_NONFREE=ON`. The pre-built opencv.js included in this package is now built with NONFREE algorithms enabled.

## Using SIFT

```js
import cvModule from "@techstark/opencv-js";

async function detectFeatures() {
const { cv } = await getOpenCv();

// Create SIFT detector
const sift = new cv.SIFT();

// Load image (example with grayscale mat)
const img = cv.imread(imageElement);
const gray = new cv.Mat();
cv.cvtColor(img, gray, cv.COLOR_RGBA2GRAY);

// Detect keypoints and compute descriptors
const keypoints = new cv.KeyPointVector();
const descriptors = new cv.Mat();
sift.detectAndCompute(gray, new cv.Mat(), keypoints, descriptors);

console.log(`Found ${keypoints.size()} keypoints`);

// Don't forget to delete objects to free memory
keypoints.delete();
descriptors.delete();
sift.delete();
gray.delete();
img.delete();
}
```

Other available feature detectors include:
- `cv.ORB()` - Oriented FAST and Rotated BRIEF
- `cv.AKAZE()` - Accelerated-KAZE
- `cv.SIFT()` - Scale-Invariant Feature Transform (requires NONFREE)

# Star History

[![Star History Chart](https://api.star-history.com/svg?repos=techstark/opencv-js&type=Date)](https://star-history.com/#techstark/opencv-js&Date)
24 changes: 17 additions & 7 deletions src/types/opencv/BackgroundSubtractor.ts
Original file line number Diff line number Diff line change
@@ -1,9 +1,15 @@
import type { Algorithm, bool, double, InputArray, OutputArray } from "./_types";
import type {
Algorithm,
bool,
double,
InputArray,
OutputArray,
} from "./_types";

/**
* Base class for background/foreground segmentation algorithms.
*
* The class is only used to define the common interface for the whole family of background/foreground
* The class is only used to define the common interface for the whole family of background/foreground
* segmentation algorithms.
*
* Source:
Expand All @@ -17,20 +23,24 @@ export declare class BackgroundSubtractor extends Algorithm {
*
* @param image Next video frame.
* @param fgmask The output foreground mask as an 8-bit binary image.
* @param learningRate The value between 0 and 1 that indicates how fast the background model is learnt.
* @param learningRate The value between 0 and 1 that indicates how fast the background model is learnt.
* Negative parameter value makes the algorithm use some automatically chosen learning rate.
* 0 means that the background model is not updated at all, 1 means that the background model is
* 0 means that the background model is not updated at all, 1 means that the background model is
* completely reinitialized from the last frame.
*/
public apply(image: InputArray, fgmask: OutputArray, learningRate?: double): void;
public apply(
image: InputArray,
fgmask: OutputArray,
learningRate?: double,
): void;

/**
* Computes a background image.
*
* @param backgroundImage The output background image.
*
* @note Sometimes the background image can be very blurry, as it contain the average background
* @note Sometimes the background image can be very blurry, as it contain the average background
* statistics.
*/
public getBackgroundImage(backgroundImage: OutputArray): void;
}
}
16 changes: 10 additions & 6 deletions src/types/opencv/BackgroundSubtractorMOG2.ts
Original file line number Diff line number Diff line change
Expand Up @@ -3,7 +3,7 @@ import type { BackgroundSubtractor, bool, double, int } from "./_types";
/**
* Gaussian Mixture-based Background/Foreground Segmentation Algorithm.
*
* The class implements the Gaussian mixture model background subtraction described in [Zivkovic2004]
* The class implements the Gaussian mixture model background subtraction described in [Zivkovic2004]
* and [Zivkovic2006].
*
* Source:
Expand All @@ -12,11 +12,15 @@ import type { BackgroundSubtractor, bool, double, int } from "./_types";
export declare class BackgroundSubtractorMOG2 extends BackgroundSubtractor {
/**
* @param history Length of the history.
* @param varThreshold Threshold on the squared Mahalanobis distance between the pixel and the model
* to decide whether a pixel is well described by the background model. This parameter does not
* @param varThreshold Threshold on the squared Mahalanobis distance between the pixel and the model
* to decide whether a pixel is well described by the background model. This parameter does not
* affect the background update.
* @param detectShadows If true, the algorithm will detect shadows and mark them. It decreases the
* @param detectShadows If true, the algorithm will detect shadows and mark them. It decreases the
* speed a bit, so if you do not need this feature, set the parameter to false.
*/
public constructor(history?: int, varThreshold?: double, detectShadows?: bool);
}
public constructor(
history?: int,
varThreshold?: double,
detectShadows?: bool,
);
}
31 changes: 31 additions & 0 deletions src/types/opencv/SIFT.ts
Original file line number Diff line number Diff line change
@@ -0,0 +1,31 @@
import type { Feature2D, float, int } from "./_types";

/**
* Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) algorithm.
* https://docs.opencv.org/4.12.0/d7/d60/classcv_1_1SIFT.html
*
* Note: SIFT is a patented algorithm that was made free in 2020 when the patent expired.
* To use SIFT, you need to build OpenCV with OPENCV_ENABLE_NONFREE=ON.
*/
export declare class SIFT extends Feature2D {
/**
* Creates a new SIFT feature detector and descriptor extractor.
* @param nfeatures The number of best features to retain. The features are ranked by their scores (measured in SIFT algorithm as the local contrast)
* @param nOctaveLayers The number of layers in each octave. 3 is the value used in D. Lowe paper. The number of octaves is computed automatically from the image resolution.
* @param contrastThreshold The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the less features are produced by the detector.
* @param edgeThreshold The threshold used to filter out edge-like features. Note that the its meaning is different from the contrastThreshold, i.e. the larger the edgeThreshold, the less features are filtered out (more features are retained).
* @param sigma The sigma of the Gaussian applied to the input image at the octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce the number.
*/
public constructor(
nfeatures?: int,
nOctaveLayers?: int,
contrastThreshold?: float,
edgeThreshold?: float,
sigma?: float,
);

/**
* Returns the descriptor size in floats (128)
*/
public getDefaultName(): string;
}
1 change: 1 addition & 0 deletions src/types/opencv/_types.ts
Original file line number Diff line number Diff line change
Expand Up @@ -42,6 +42,7 @@ export * from "./ORB";
export * from "./PCA";
export * from "./photo_inpaint";
export * from "./RotatedRect";
export * from "./SIFT";
export * from "./softdouble";
export * from "./softfloat";
export * from "./video_track";
Expand Down
2 changes: 1 addition & 1 deletion src/types/opencv/imgproc_colormap.ts
Original file line number Diff line number Diff line change
Expand Up @@ -57,4 +57,4 @@ export declare const COLORMAP_CIVIDIS: ColormapTypes; // initializer: = 17
export declare const COLORMAP_TWILIGHT: ColormapTypes; // initializer: = 18
export declare const COLORMAP_TWILIGHT_SHIFTED: ColormapTypes; // initializer: = 19
export declare const COLORMAP_TURBO: ColormapTypes; // initializer: = 20
export declare const COLORMAP_DEEPGREEN: ColormapTypes; // initializer: = 21
export declare const COLORMAP_DEEPGREEN: ColormapTypes; // initializer: = 21
96 changes: 96 additions & 0 deletions test/SIFT.test.ts
Original file line number Diff line number Diff line change
@@ -0,0 +1,96 @@
import { Jimp } from "jimp";
import path from "path";
import { setupOpenCv, translateException } from "./cv";

beforeAll(setupOpenCv);

describe("SIFT", () => {
it("should have SIFT type definitions available", () => {
// This test verifies that TypeScript definitions exist
// The actual SIFT functionality requires opencv.js built with OPENCV_ENABLE_NONFREE=ON
expect(typeof cv.SIFT).toBeDefined();
});

// This test will only run if SIFT is available in the opencv.js build
it.skip("should detect keypoints using SIFT", async () => {
try {
// Skip if SIFT is not available
if (typeof cv.SIFT === "undefined") {
console.log(
"SIFT not available - requires opencv.js built with OPENCV_ENABLE_NONFREE=ON",
);
return;
}

// Load test image
const jimpSrc = await Jimp.read(path.resolve(__dirname, "Lenna.png"));
const img = cv.matFromImageData(jimpSrc.bitmap);

// Convert to grayscale
const gray = new cv.Mat();
cv.cvtColor(img, gray, cv.COLOR_RGBA2GRAY);

// Create SIFT detector
const sift = new cv.SIFT();

// Detect keypoints and compute descriptors
const keypoints = new cv.KeyPointVector();
const descriptors = new cv.Mat();

sift.detectAndCompute(gray, new cv.Mat(), keypoints, descriptors);

// Verify results
expect(keypoints.size()).toBeGreaterThan(0);
expect(descriptors.rows).toBe(keypoints.size());
expect(descriptors.cols).toBe(128); // SIFT descriptors are 128-dimensional

console.log(
`SIFT detected ${keypoints.size()} keypoints with ${descriptors.cols}-dimensional descriptors`,
);

// Clean up
keypoints.delete();
descriptors.delete();
sift.delete();
gray.delete();
img.delete();
} catch (err) {
throw translateException(err);
}
});

it.skip("should create SIFT with custom parameters", async () => {
try {
// Skip if SIFT is not available
if (typeof cv.SIFT === "undefined") {
console.log(
"SIFT not available - requires opencv.js built with OPENCV_ENABLE_NONFREE=ON",
);
return;
}

// Create SIFT with custom parameters
const nfeatures = 100; // Retain top 100 features
const nOctaveLayers = 3;
const contrastThreshold = 0.04;
const edgeThreshold = 10;
const sigma = 1.6;

const sift = new cv.SIFT(
nfeatures,
nOctaveLayers,
contrastThreshold,
edgeThreshold,
sigma,
);

expect(sift).toBeDefined();
expect(sift.getDefaultName()).toBe("Feature2D.SIFT");

// Clean up
sift.delete();
} catch (err) {
throw translateException(err);
}
});
});