This repository is for basic computer vision (detection, recognition, and tracking).
Honestly, it is mainly a personal reference for when I develop something CV-related.
- Basic processing
- Filtering
- Geometric transforms
- Feature extraction
- Image segmentation and object detection
- Feature point (keypoint) detection and matching
- Object tracking and motion vectors
- (+) Binarization
cv2.add(src1, src2[, dst[, mask[, dtype]]]) -> dst
cv2.subtract(src1, src2[, dst[, mask[, dtype]]]) -> dst
cv2.multiply(src1, src2[, dst[, scale[, dtype]]]) -> dst
cv2.divide(src1, src2[, dst[, scale[, dtype]]]) -> dst
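A minimal sketch of the arithmetic functions above; the image path and the constant offset are placeholder assumptions.

```python
import cv2
import numpy as np

src1 = cv2.imread('lenna.png', cv2.IMREAD_GRAYSCALE)   # placeholder image path
src2 = np.full_like(src1, 50)                          # constant image to combine with

added = cv2.add(src1, src2)        # saturating addition (clips at 255, no wrap-around)
diff  = cv2.subtract(src1, src2)   # saturating subtraction (clips at 0)
prod  = cv2.multiply(src1, src2, scale=1.0 / 255)      # per-pixel product with scaling
ratio = cv2.divide(src1, src2)                         # per-pixel quotient
```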
cv2.calcHist(images, channels, mask, histSize, ranges[, hist[, accumulate]]) -> hist
cv2.equalizeHist(src[, dst]) -> dst
cv2.compareHist(H1, H2, method) -> retval
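A minimal histogram sketch; the image path is a placeholder and HISTCMP_CORREL is just one of the available comparison methods.

```python
import cv2

src = cv2.imread('lenna.png', cv2.IMREAD_GRAYSCALE)     # placeholder image path

hist = cv2.calcHist([src], [0], None, [256], [0, 256])  # 256-bin intensity histogram
dst  = cv2.equalizeHist(src)                            # spread intensities to boost contrast

hist_eq = cv2.calcHist([dst], [0], None, [256], [0, 256])
score = cv2.compareHist(hist, hist_eq, cv2.HISTCMP_CORREL)  # similarity of the two histograms
```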
cv2.warpAffine(src, M, dsize[, dst[, flags[, borderMode[, borderValue]]]]) -> dst
cv2.getRotationMatrix2D(center, angle, scale) -> retval
cv2.getAffineTransform(src, dst) -> retval
cv2.getPerspectiveTransform(src, dst[, solveMethod]) -> retval
cv2.remap(src, map1, map2, interpolation[, dst[, borderMode[, borderValue]]]) -> dst
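A minimal geometric-transform sketch; the rotation angle and scale are illustrative.

```python
import cv2

src = cv2.imread('lenna.png')                           # placeholder image path
h, w = src.shape[:2]

# Rotate 30 degrees about the center while scaling down to 80%
M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 0.8)
rotated = cv2.warpAffine(src, M, (w, h))
```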
The mean filter (also called the box filter) is a simple blurring filter: each output pixel is the average of its neighborhood.
cv2.blur(src, ksize[, dst[, anchor[, borderType]]]) -> dst
The Gaussian filter also blurs, but it weights neighboring pixels with a Gaussian kernel, so it produces a smoother, more natural blur than the mean filter.
cv2.GaussianBlur(src, ksize, sigmaX[, dst[, sigmaY[, borderType]]]) -> dst
cv2.addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype]]) -> dst
cv2.Laplacian(src, ddepth[, dst[, ksize[, scale[, delta[, borderType]]]]]) -> dst
cv2.medianBlur(src, ksize[, dst]) -> dst
cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) -> dst
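A minimal filtering sketch using the functions above; kernel sizes, sigmas, and the sharpening weight are illustrative.

```python
import cv2
import numpy as np

src = cv2.imread('lenna.png')                            # placeholder image path

blurred  = cv2.blur(src, (5, 5))                         # mean (box) filter
gaussian = cv2.GaussianBlur(src, (0, 0), 3)              # kernel size derived from sigmaX = 3
median   = cv2.medianBlur(src, 5)                        # good for salt-and-pepper noise
edgekeep = cv2.bilateralFilter(src, -1, 10, 5)           # smooths while preserving edges

# Laplacian-based sharpening: subtract a fraction of the second derivative
lap   = cv2.Laplacian(src, cv2.CV_32F)
sharp = cv2.addWeighted(src.astype(np.float32), 1.0, lap, -0.5, 0)
sharp = np.clip(sharp, 0, 255).astype(np.uint8)
```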
Image pyramids are useful for image blending, resizing, compression, and reconstruction.
cv2.pyrUp(src[, dst[, dstsize[, borderType]]]) -> dst
cv2.pyrDown(src[, dst[, dstsize[, borderType]]]) -> dst
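A minimal pyramid sketch; the image path is a placeholder, and dstsize is passed so the upsampled image matches the original even for odd dimensions.

```python
import cv2

src = cv2.imread('lenna.png')                            # placeholder image path
h, w = src.shape[:2]

down = cv2.pyrDown(src)                                  # Gaussian blur + downsample to half size
up   = cv2.pyrUp(down, dstsize=(w, h))                   # upsample back to the original size

# Laplacian pyramid layer: the high-frequency detail lost by downsampling
laplacian = cv2.subtract(src, up)
```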
cv2.Sobel(src, ddepth, dx, dy[, dst[, ksize[, scale[, delta[, borderType]]]]]) -> dst
cv2.Laplacian(src, ddepth[, dst[, ksize[, scale[, delta[, borderType]]]]]) -> dst
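A minimal gradient sketch with Sobel; the image path is a placeholder.

```python
import cv2
import numpy as np

src = cv2.imread('lenna.png', cv2.IMREAD_GRAYSCALE)      # placeholder image path

dx = cv2.Sobel(src, cv2.CV_32F, 1, 0)                    # x derivative
dy = cv2.Sobel(src, cv2.CV_32F, 0, 1)                    # y derivative

mag   = cv2.magnitude(dx, dy)                            # gradient magnitude at every pixel
edges = np.clip(mag, 0, 255).astype(np.uint8)
```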
Canny edge detection proceeds in these steps:
- Compute x and y derivatives of the image
- Compute the magnitude of the gradient at every pixel
- Eliminate pixels that are not local maxima of the gradient magnitude
- Hysteresis thresholding:
  - Select the pixels whose gradient magnitude is larger than a high threshold
  - Select the pixels whose gradient magnitude is larger than a low threshold and that are connected to high-threshold pixels
cv2.Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]]) -> edges
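A minimal Canny sketch; the 50/150 thresholds are just a common 1:3 starting ratio.

```python
import cv2

src = cv2.imread('lenna.png', cv2.IMREAD_GRAYSCALE)      # placeholder image path
edges = cv2.Canny(src, 50, 150)                          # low/high hysteresis thresholds
```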
cv2.HoughLines(image, rho, theta, threshold[, lines[, srn[, stn[, min_theta[, max_theta]]]]]) -> lines
cv2.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]]) -> circles
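A minimal Hough sketch; all thresholds and radius limits are illustrative.

```python
import cv2
import numpy as np

src   = cv2.imread('building.png', cv2.IMREAD_GRAYSCALE)  # placeholder image path
edges = cv2.Canny(src, 50, 150)

# Lines come back in (rho, theta) form
lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)

# Circles are detected on a lightly blurred grayscale image, not on the edge map
blur = cv2.GaussianBlur(src, (0, 0), 1.5)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=150, param2=30, minRadius=10, maxRadius=100)
```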
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, iterCount[, mode]) -> mask, bgdModel, fgdModel
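A minimal grabCut sketch; the rectangle around the object is a hypothetical ROI.

```python
import cv2
import numpy as np

src  = cv2.imread('person.png')                          # placeholder image path
mask = np.zeros(src.shape[:2], np.uint8)
bgd  = np.zeros((1, 65), np.float64)                     # background model buffer
fgd  = np.zeros((1, 65), np.float64)                     # foreground model buffer

rect = (50, 50, 200, 300)                                # hypothetical ROI around the object
cv2.grabCut(src, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground pixels
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
result = cv2.bitwise_and(src, src, mask=fg)
```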
cv2.moments(array[, binaryImage]) -> retval
cv2.matchTemplate(image, templ, method[, result[, mask]]) -> result
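A minimal template-matching sketch; the scene and patch paths are placeholders.

```python
import cv2

scene = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)    # placeholder image paths
templ = cv2.imread('patch.png', cv2.IMREAD_GRAYSCALE)

res = cv2.matchTemplate(scene, templ, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)              # best score and its top-left corner

h, w = templ.shape
cv2.rectangle(scene, max_loc, (max_loc[0] + w, max_loc[1] + h), 255, 2)
```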
cv2.CascadeClassifier([filename]) -> <CascadeClassifier object>
cv2.HOGDescriptor([_winSize[, _blockSize[, _blockStride[, _cellSize[, _nbins[, _derivAperture[, _winSigma[, _histogramNormType[, _L2HysThreshold[, _gammaCorrection[, _nlevels[, _signedGradient]]]]]]]]]]]]) -> <HOGDescriptor object>
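A minimal detection sketch; the cascade XML path assumes the layout of the opencv-python package, and the image path is a placeholder.

```python
import cv2

src  = cv2.imread('people.png')                          # placeholder image path
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

# Haar cascade face detector (XML shipped with opencv-python)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

# HOG + linear SVM pedestrian detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
people, weights = hog.detectMultiScale(src)
```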
cv2.cornerHarris(src, blockSize, ksize, k[, dst[, borderType]]) -> dst
cv2.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance[, corners[, mask[, blockSize[, useHarrisDetector[, k]]]]]) -> corners
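A minimal corner-detection sketch; the thresholds and parameter values are illustrative.

```python
import cv2
import numpy as np

src  = cv2.imread('chessboard.png')                      # placeholder image path
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
src[harris > 0.01 * harris.max()] = (0, 0, 255)          # mark strong Harris responses in red

corners = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.01, minDistance=10)
```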
SIFT consists of four major stages:
- Scale-space extrema detection: the first stage of computation searches over all scales and image locations. It is implemented efficiently by using a difference-of-Gaussian function to identify potential interest points that are invariant to scale and orientation.
- Keypoint localization: at each candidate location, a detailed model is fit to determine location and scale. Keypoints are selected based on measures of their stability.
- Orientation assignment: one or more orientations are assigned to each keypoint location based on local image gradient directions. All future operations are performed on image data that has been transformed relative to the assigned orientation, scale, and location for each feature, thereby providing invariance to these transformations.
- Keypoint descriptor: the local image gradients are measured at the selected scale in the region around each keypoint. These are transformed into a representation that allows for significant levels of local shape distortion and change in illumination.
cv2.SIFT_create([, nfeatures[, nOctaveLayers[, contrastThreshold[, edgeThreshold[, sigma]]]]]) -> retval
cv2.xfeatures2d.SURF_create([, hessianThreshold[, nOctaves[, nOctaveLayers[, extended[, upright]]]]]) -> retval
cv2.ORB_create([, nfeatures[, scaleFactor[, nlevels[, edgeThreshold[, firstLevel[, WTA_K[, scoreType[, patchSize[, fastThreshold]]]]]]]]]) -> retval
cv2.BRISK_create([, thresh[, octaves[, patternScale]]]) -> retval
cv2.xfeatures2d.BriefDescriptorExtractor_create([, bytes[, use_orientation]]) -> retval
cv2.xfeatures2d.FREAK_create([, orientationNormalized[, scaleNormalized[, patternScale[, nOctaves[, selectedPairs]]]]]) -> retval
cv2.BFMatcher([, normType[, crossCheck]]) -> <BFMatcher object>
cv2.FlannBasedMatcher([, indexParams[, searchParams]]) -> <FlannBasedMatcher object>
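A minimal detect-and-match sketch using ORB with a brute-force Hamming matcher (SIFT with NORM_L2 works the same way); the file names are placeholders.

```python
import cv2

img1 = cv2.imread('query.png', cv2.IMREAD_GRAYSCALE)     # placeholder image paths
img2 = cv2.imread('train.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance for binary descriptors; crossCheck keeps only mutual best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

result = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None)
```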
Gaussian Mixture-based Background/Foreground Segmentation Algorithm
cv2.bgsegm.createBackgroundSubtractorMOG([, history[, nmixtures[, backgroundRatio[, noiseSigma]]]]) -> retval
cv2.createBackgroundSubtractorMOG2([, history[, varThreshold[, detectShadows]]]) -> retval
cv2.bgsegm.createBackgroundSubtractorGMG([, initializationFrames[, decisionThreshold]]) -> retval
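A minimal background-subtraction sketch with MOG2; the video path and parameters are placeholders.

```python
import cv2

cap  = cv2.VideoCapture('video.mp4')                     # placeholder video path
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = mog2.apply(frame)                           # 255 = foreground, 127 = shadow
    cv2.imshow('foreground', fgmask)
    if cv2.waitKey(30) == 27:                            # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```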
cv2.accumulate(src, dst[, mask]) -> dst
cv2.accumulateWeighted(src, dst, alpha[, mask]) -> dst
cv2.meanShift(probImage, window, criteria) -> retval, window
cv2.CamShift(probImage, window, criteria) -> retval, window
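A minimal meanShift tracking sketch using a hue back projection; the video path and initial window are hypothetical, and swapping in cv2.CamShift would also adapt the window size and orientation.

```python
import cv2

cap = cv2.VideoCapture('video.mp4')                      # placeholder video path
ret, frame = cap.read()

x, y, w, h = 200, 150, 80, 80                            # hypothetical initial search window
roi      = frame[y:y + h, x:x + w]
hsv_roi  = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv  = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)   # hue probability image

    ret, (x, y, w, h) = cv2.meanShift(prob, (x, y, w, h), term)     # shift window to the mode
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('tracking', frame)
    if cv2.waitKey(30) == 27:                            # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```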