
Seeking instructions for usage. opencv perspective_transform function #530

qq351469076 opened this issue Dec 25, 2023 · 4 comments
qq351469076 commented Dec 25, 2023

This is from a Python video course about OpenCV.
[screenshot]

While I was trying to understand its usage
[screenshot]

I asked ChatGPT 3.5, and it told me
[screenshot]

I also tried to imitate the first parameter
[screenshot]

but it raises an error
[screenshot]

[screenshot]

My questions are:

  1. I don't know what the first parameter should look like in Rust.
  2. There is no mask in the Python tutorial. In Rust there is no variant like perspective_transform_def that would let me omit the mask, and I don't know what the mask means in this function.
opencv = "0.88.5"

My code and test pictures are below.

The goal is to locate image A inside image B, and then draw a box around image A's location in image B.

The intended result looks like this:
[screenshot]

use opencv::calib3d::{find_homography, find_homography_1, find_homography_def, RANSAC};
use opencv::core::{no_array, perspective_transform, Point, Point2f, Scalar, Size};
use opencv::features2d::{
    draw_keypoints_def, draw_matches_def, draw_matches_knn_def, BFMatcher, FlannBasedMatcher, ORB,
    SIFT,
};
use opencv::flann::{IndexParams, SearchParams, FLANN_INDEX_KDTREE};
use opencv::highgui::{imshow, wait_key};
use opencv::imgcodecs::imread_def;
use opencv::imgproc::{
    cvt_color_def, get_perspective_transform_def, polylines_def, COLOR_BGR2GRAY,
};
use opencv::prelude::*;
use opencv::types::{
    PtrOfIndexParams, PtrOfSearchParams, VectorOfDMatch, VectorOfKeyPoint, VectorOfPoint,
    VectorOfPoint2f, VectorOfVectorOfDMatch, VectorOfVectorOfPoint2f,
};
use opencv::xfeatures2d::SURF;
use std::process::exit;

/// Homography matrix
///
/// The same object has different shapes when viewed from different angles; given a point in one view, the homography can compute the corresponding point in the other view.
fn dan_ying_xing_nv_zhen() -> opencv::Result<()> {
    let src_mat = imread_def("C:\\Users\\Administrator\\Desktop\\opencv_search.png")?;
    let mut dst_mat = imread_def("C:\\Users\\Administrator\\Desktop\\opencv_orig.png")?;

    // SIFT needs grayscale input.
    let mut src_gray = Mat::default();
    cvt_color_def(&src_mat, &mut src_gray, COLOR_BGR2GRAY)?;
    let mut dst_gray = Mat::default();
    cvt_color_def(&dst_mat, &mut dst_gray, COLOR_BGR2GRAY)?;

    // create sift object
    let mut sift = SIFT::create_def()?;

    // keypoints for src and dst
    let mut key_point_src = VectorOfKeyPoint::new();
    let mut key_point_dst = VectorOfKeyPoint::new();
    // descriptors for src and dst
    let mut descriptors_src = Mat::default();
    let mut descriptors_dst = Mat::default();

    sift.detect_and_compute_def(
        &src_gray,
        &Mat::default(),
        &mut key_point_src,
        &mut descriptors_src,
    )?;
    sift.detect_and_compute_def(
        &dst_gray,
        &Mat::default(),
        &mut key_point_dst,
        &mut descriptors_dst,
    )?;

    // create the FLANN matcher
    let mut index_params = IndexParams::default()?;
    index_params.set_algorithm(FLANN_INDEX_KDTREE)?;
    index_params.set_int("trees", 5)?;
    let index_params = PtrOfIndexParams::new(index_params);

    let search_params = SearchParams::new_1(50, 0.0, true)?;
    let search_params = PtrOfSearchParams::new(search_params);

    let flann = FlannBasedMatcher::new(&index_params, &search_params)?;

    let mut best_match = VectorOfVectorOfDMatch::new();
    let k = 2; // find the two best matches for each descriptor

    // Everything up to this line works correctly.
    flann.knn_train_match_def(&descriptors_src, &descriptors_dst, &mut best_match, k)?;

    // Filter for good matches.
    let mut result = VectorOfVectorOfDMatch::new();

    for line in &best_match {
        let mut list = VectorOfDMatch::new();

        for single in line {
            // The lower the distance, the higher the similarity.
            if single.distance < 0.7 {
                list.push(single);
            }
        }

        result.push(list);
    }

    if best_match.len() >= 4 {
        let mut src_pts = VectorOfPoint2f::new();
        let mut dst_pts = VectorOfPoint2f::new();
        for key_point in best_match {
            for elem in key_point {
                let query_idx = key_point_src.get(elem.query_idx as usize)?;
                src_pts.push(query_idx.pt());

                let train_idx = key_point_dst.get(elem.train_idx as usize)?;
                dst_pts.push(train_idx.pt());
            }
        }

        // RANSAC (random sampling); reprojection threshold is 5.0
        let mut h = find_homography(&src_pts, &dst_pts, &mut no_array(), RANSAC, 5f64)?;

        let width = h.size()?.width;
        let height = h.size()?.height;

        let mut pts = VectorOfPoint2f::new();
        pts.push(Point2f::new(0f32, 0f32));
        pts.push(Point2f::new(0f32, (height - 1) as f32));
        pts.push(Point2f::new((width - 1) as f32, (height - 1) as f32));
        pts.push(Point2f::new((width - 1) as f32, 0f32));

        // This line throws an error
        perspective_transform(&pts, &mut h, &no_array())?;

        // polylines_def(&mut dst_mat, &pts, true, Scalar::from((0, 0, 255)))?;
        //
        // // draw the matched keypoints
        // let mut net_mat = Mat::default();
        // draw_matches_knn_def(
        //     &src_mat,
        //     &key_point_src,
        //     &dst_mat,
        //     &key_point_dst,
        //     &result,
        //     &mut net_mat,
        // )?;
        //
        // imshow("ssd", &h)?;

        // wait_key(100000)?;
    } else {
        println!("array len must >=4");
        exit(0)
    }

    Ok(())
}

fn main() -> opencv::Result<()> {
    dan_ying_xing_nv_zhen()?;

    Ok(())
}

opencv_orig.png
[image]

opencv_search.png
[image]

qq351469076 changed the title from "ask for opencv perspective_transform function" to "Seeking instructions for usage. opencv perspective_transform function" on Dec 25, 2023
mdenty commented Jan 26, 2024

Hello,

The first two parameters are Vector<Point2f>.

For the mask, I use a default Mat. See the following code:

    let mut src_points: Vector<Point2f> = Vector::new();
    let mut dst_points: Vector<Point2f> = Vector::new();

    // populate the vectors of points here, usually from the key points
    let mut mask = Mat::default();
    trace!("Ransac threshold {}", self.model.ransac_threshold);
    let m = find_homography(
        &dst_points,
        &src_points,
        &mut mask,
        RANSAC,
        self.model.ransac_threshold,
    )?;

Note: I've inverted dst_points and src_points because my use case needs this.
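
For reference on the mask's meaning: with RANSAC, find_homography fills it with one entry per point pair, and a non-zero entry marks an inlier that fit the estimated homography. A minimal sketch for counting inliers, assuming mask is the CV_8U output filled by the call above:

    // Count how many point pairs RANSAC accepted as inliers.
    // Assumes `mask` is the Nx1 CV_8U Mat filled by find_homography above.
    let mut inliers = 0;
    for i in 0..mask.rows() {
        if *mask.at::<u8>(i)? != 0 {
            inliers += 1;
        }
    }
    println!("RANSAC kept {} of {} matches", inliers, mask.rows());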

qq351469076 (Author) commented

> [quotes mdenty's reply above]

Do you know how to use perspective_transform in this library?

mdenty commented Jan 28, 2024

Actually, I have never used perspective_transform myself.

I use warp_perspective instead, like this:

    let mut result = Mat::default();
    warp_perspective(
        &mat,
        &mut result,
        &m,
        Size::new(self.model.model_width, self.model.model_height),
        INTER_LANCZOS4,
        BORDER_CONSTANT,
        Scalar::from(255.0),
    )?;

Here mat is the source image and m is the result of find_homography.
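
As a usage note for the original goal: warp_perspective remaps a whole image through the homography, while perspective_transform maps a list of point coordinates, which is what drawing a box around the matched region requires.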

twistedfall (Owner) commented

@qq351469076 In your original message there seems to be confusion between the getPerspectiveTransform and perspectiveTransform functions. At least in the initial Python code the function used is perspectiveTransform, but in the C++ that ChatGPT suggested it's getPerspectiveTransform.

If you look at the docs for the perspectiveTransform function (https://docs.opencv.org/4.x/d2/de8/group__core__array.html#gad327659ac03e5fd6894b90025e6900a7), you can see that the Python form has the signature cv.perspectiveTransform(src, m[, dst]) -> dst. The argument order is different from C++/Rust, so in the Rust code that h should actually be the last argument, and the second argument should be the output Mat.
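
Concretely, a minimal sketch of the corrected call, assuming pts and h are as in the original code above (a Vector<Point2f> should be accepted directly as the input array here, so no explicit reshape is needed on the Rust side):

    // Corrected argument order for the Rust binding: (src, dst, m).
    let mut dst_pts = VectorOfPoint2f::new();
    perspective_transform(&pts, &mut dst_pts, &h)?;
    // `dst_pts` now holds the four projected corner points.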

Also notice that the Python code calls reshape, and the docs for perspectiveTransform indicate which particular shape the function expects for its src argument.

It would be helpful if you could provide the working Python code; then it would be easier to help you translate it to Rust.
