
Edge detection using Laplacian operator

void Laplacian(InputArray src, OutputArray dst, int ddepth, int ksize=1, double scale=1, double delta=0, int borderType=BORDER_DEFAULT )

Parameters:
  • src – Source image.
  • dst – Destination image of the same size and the same number of channels as src .
  • ddepth – Desired depth of the destination image.
  • ksize – Aperture size used to compute the second-derivative filters. See getDerivKernels() for details. The size must be positive and odd.
  • scale – Optional scale factor for the computed Laplacian values. By default, no scaling is applied. See getDerivKernels() for details.
  • delta – Optional delta value that is added to the results prior to storing them in dst .
  • borderType – Pixel extrapolation method. See borderInterpolate() for details.
You can find an example in OpenCV Documentation.
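
For reference, the operator itself sums the second derivatives of the source image (computed internally from Sobel derivatives):

\texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}

When ksize=1, the image is instead filtered with the following 3x3 aperture:

\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}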

Steps:

  1. Load image
  2. Remove noise by blurring with a Gaussian filter
  3. Convert to gray-scale.
  4. Apply the Laplacian operator to find the edges (Laplacian).
  5. Convert the resulting image to an 8-bit image for display (convertScaleAbs; see the note after this list).
  6. Show the result.
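
The conversion in step 5 works because convertScaleAbs scales, takes the absolute value, and saturates the result to 8 bits:

\texttt{dst}(x,y) = \texttt{saturate\_cast<uchar>}(|\texttt{src}(x,y) \cdot \texttt{alpha} + \texttt{beta}|)

With the default alpha=1 and beta=0, the negative Laplacian responses simply become positive 8-bit values.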

Functions: GaussianBlur(), cvtColor(), Laplacian(), convertScaleAbs(), imshow()

Example:

------------
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>

using namespace cv;

int main( int argc, char** argv )
{
    Mat src, gray, dst, abs_dst;
    src = imread( "lena.jpg" );

    /// Remove noise by blurring with a Gaussian filter
    GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
    cvtColor( src, gray, CV_BGR2GRAY );

    /// Apply Laplace function
    Laplacian( gray, dst, CV_16S, 3, 1, 0, BORDER_DEFAULT );
    convertScaleAbs( dst, abs_dst );
    imshow( "result", abs_dst );

    waitKey(0);
    return 0;
}  
------------

Line Detection by Hough Line Transform

void HoughLines(InputArray image, OutputArray lines, double rho, double theta, int threshold, double srn=0, double stn=0 )
Parameters:
  • image – 8-bit, single-channel binary source image (use edge detectors)
  • lines – Output vector of lines. Each line is represented by a two-element vector (\rho, \theta) . \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians ( 0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line} ).
  • rho – Distance resolution of the accumulator in pixels.
  • theta – Angle resolution of the accumulator in radians.
  • threshold – Accumulator threshold parameter. Only those lines are returned that get enough votes (>threshold ).
  • srn – For the multi-scale Hough transform, it is a divisor for the distance resolution rho . The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn . If both srn=0 and stn=0 , the classical Hough transform is used. Otherwise, both these parameters should be positive.
  • stn – For the multi-scale Hough transform, it is a divisor for the distance resolution theta.
A good example for Hough Line Transform is provided in OpenCV Documentation.
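
In this parameterization, a line is the set of points (x, y) satisfying

\rho = x \cos \theta + y \sin \theta

which is why the example below takes (x_0, y_0) = (\rho \cos \theta, \rho \sin \theta) as a point on the line and extends it far in both directions along (-\sin \theta, \cos \theta).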

Steps:

  1. Load image and convert to gray-scale.
  2. Apply the Hough Transform to find the lines (HoughLines).
  3. Draw the detected lines (line).
  4. Show the result.

Functions: Canny(), cvtColor(), HoughLines(), line(), imshow()


Example:

-------------
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat src = imread("building.jpg", 0);

    Mat dst, cdst;
    Canny(src, dst, 50, 200, 3); 
    cvtColor(dst, cdst, CV_GRAY2BGR); 

    vector<Vec2f> lines;
    // detect lines
    HoughLines(dst, lines, 1, CV_PI/180, 150, 0, 0 );

    // draw lines
    for( size_t i = 0; i < lines.size(); i++ )
    {
        float rho = lines[i][0], theta = lines[i][1];
        Point pt1, pt2;
        double a = cos(theta), b = sin(theta);
        double x0 = a*rho, y0 = b*rho;
        pt1.x = cvRound(x0 + 1000*(-b));
        pt1.y = cvRound(y0 + 1000*(a));
        pt2.x = cvRound(x0 - 1000*(-b));
        pt2.y = cvRound(y0 - 1000*(a));
        line( cdst, pt1, pt2, Scalar(0,0,255), 3, CV_AA);
    }

    imshow("source", src);
    imshow("detected lines", cdst);

    waitKey();
    return 0;
}

-------------

Result:


Extra Stuff:

  • What if you need to select only some of the lines, based on prior knowledge of the range of angles in which they lie?
So, you know the range of angles in which your lines may lie. All you need to do is add a conditional statement that keeps only the lines detected within that angle range. For example,

if you want to detect vertical lines, wrap the rest of the loop body in the following condition, placed right after rho and theta are read inside the for loop of the example above.
if( theta>CV_PI/180*170 || theta<CV_PI/180*10)
        { Point pt1, pt2;
        ..........
        line( cdst, pt1, pt2, Scalar(0,0,255), 3, CV_AA);
        }

if you want to detect horizontal lines, use
if( theta>CV_PI/180*80 && theta<CV_PI/180*100)
        { Point pt1, pt2;
        ..........
        line( cdst, pt1, pt2, Scalar(0,0,255), 3, CV_AA);
        }



Line Detection by Probabilistic Hough Line Transform

void HoughLinesP(InputArray image, OutputArray lines, double rho, double theta, int threshold, double minLineLength=0, double maxLineGap=0 )
Parameters:
  • image – 8-bit, single-channel binary source image. The image may be modified by the function.
  • lines – Output vector of lines. Each line is represented by a 4-element vector (x1,y1,x2,y2) , where (x1,y1) and (x2,y2) are the ending points of each detected line segment.
  • rho – Distance resolution of the accumulator in pixels.
  • theta – Angle resolution of the accumulator in radians.
  • threshold – Accumulator threshold parameter. Only those lines are returned that get enough votes ( >threshold ).
  • minLineLength – Minimum line length. Line segments shorter than that are rejected.
  • maxLineGap – Maximum allowed gap between points on the same line to link them.
A good example for Probabilistic Hough Line Transform is provided in OpenCV Documentation.
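
Since each element of lines already carries both end points, you can also filter the detected segments by orientation before drawing them. A rough sketch (the 10-degree tolerance is purely illustrative; it assumes the same lines and cdst as in the example below, plus <cmath> for atan2/fabs):

// keep only near-horizontal segments (within ~10 degrees of horizontal)
for( size_t i = 0; i < lines.size(); i++ )
{
    Vec4i l = lines[i];
    double angle = fabs( atan2( (double)(l[3] - l[1]), (double)(l[2] - l[0]) ) ) * 180.0 / CV_PI;
    if( angle < 10 || angle > 170 )
        line( cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, CV_AA);
}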

Steps:

  1. Load image and convert to gray-scale.
  2. Apply the Probabilistic Hough Transform to find the lines (HoughLinesP).
  3. Draw the detected lines (line).
  4. Show the result.

Functions: Canny(), cvtColor(), HoughLinesP(), line(), imshow()


Example:

-------------
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat src = imread("image1.jpg", 0);

    Mat dst, cdst;
    Canny(src, dst, 50, 200, 3); 
    cvtColor(dst, cdst, CV_GRAY2BGR); 

    vector<Vec4i> lines;
    // detect the lines
    HoughLinesP(dst, lines, 1, CV_PI/180, 50, 50, 10 );
    for( size_t i = 0; i < lines.size(); i++ )
    {
        Vec4i l = lines[i];
        // draw the lines
        line( cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, CV_AA);
    }

    imshow("source", src);
    imshow("detected lines", cdst);

    waitKey();

    return 0;
}
-------------


Hough Circle Detection

void HoughCircles(InputArray image, OutputArray circles, int method, double dp, double minDist, double param1=100, double param2=100, int minRadius=0, int maxRadius=0 )
 Parameters:
  • image – 8-bit, single-channel, grayscale input image.
  • circles – Output vector of found circles. Each vector is encoded as a 3-element floating-point vector (x,y,radius) .
  • method – Currently, the only implemented method is CV_HOUGH_GRADIENT.
  • dp – Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height of the image.
  • minDist – Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighboring circles may be falsely detected in addition to the true one. If it is too large, some circles may be missed.
  • param1 – First method-specific parameter. In case of CV_HOUGH_GRADIENT, it is the higher of the two thresholds passed to the Canny() edge detector (the lower one is half of it).
  • param2 – Second method-specific parameter. In case of CV_HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to larger accumulator values will be returned first.
  • minRadius – Minimum circle radius.
  • maxRadius – Maximum circle radius.
 A good example for Hough Circle Transform is provided in OpenCV Documentation.

Steps:

  1. Load image and convert to gray-scale.
  2. Blur (low pass filter) the image to reduce noise.
  3. Apply the Hough Transform to find the circles (HoughCircles).
  4. Draw the circles detected (circle).
  5. Show the result.

Functions: cvtColor(), GaussianBlur(), HoughCircles(), circle(), imshow()


Example:

-------------
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>

using namespace cv;
using namespace std;

int main()
{
  Mat src, gray;
  src = imread( "building.jpg", 1 );resize(src,src,Size(640,480));
  cvtColor( src, gray, CV_BGR2GRAY );

  // Reduce the noise so we avoid false circle detection
  GaussianBlur( gray, gray, Size(9, 9), 2, 2 );

  vector<Vec3f> circles;

  // Apply the Hough Transform to find the circles
  HoughCircles( gray, circles, CV_HOUGH_GRADIENT, 1, 30, 200, 50, 0, 0 );

  // Draw the circles detected
  for( size_t i = 0; i < circles.size(); i++ )
  {
      Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
      int radius = cvRound(circles[i][2]);     
      circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );// circle center     
      circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );// circle outline
      cout << "center : " << center << "\nradius : " << radius << endl;
   }

  // Show your results
  namedWindow( "Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE );
  imshow( "Hough Circle Transform Demo", src );

  waitKey(0);
  return 0;
}

-------------

 Result:

 

Note:
  • If this code is not able to detect the circles that you want to detect, then play with the parameters (dp, minDist, param1 & param2).
  • Keeping the rest of the parameters constant, if you increase dp, then increase param2 as well to avoid false detections (see the sketch below).
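
For example, a rough sketch of such an adjustment (the values are illustrative only, not tuned for any particular image):

  // dp raised from 1 to 2, so param2 is raised as well to suppress false circles;
  // tying minDist to the image size is a common starting point
  HoughCircles( gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows/8, 200, 100, 0, 0 );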

Harris Corner Detection

void cornerHarris(InputArray src, OutputArray dst, int blockSize, int ksize, double k, int borderType=BORDER_DEFAULT )
Parameters:
  • src – Input single-channel 8-bit or floating-point image.
  • dst – Image to store the Harris detector responses. It has the type CV_32FC1 and the same size as src .
  • blockSize – Neighborhood size 
  • ksize – Aperture parameter for the Sobel() operator.
  • k – Harris detector free parameter. See the formula below.
  • borderType – Pixel extrapolation method.
For each pixel (x,y) it calculates a 2x2 gradient covariance matrix M(x,y) over a "blocksize x blocksize" neighborhood. Then, it computes the following characteristic:

\texttt{dst} (x,y) =  \mathrm{det} M^{(x,y)} - k  \cdot \left ( \mathrm{tr} M^{(x,y)} \right )^2
Corners in the image can be found as the local maxima of this response map.
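
Here M(x,y) is assembled from Sobel derivatives summed over the blockSize x blockSize neighborhood S(p), as described for cornerEigenValsAndVecs():

M(x,y) = \begin{bmatrix} \sum_{S(p)} (dI/dx)^2 & \sum_{S(p)} (dI/dx \cdot dI/dy) \\ \sum_{S(p)} (dI/dx \cdot dI/dy) & \sum_{S(p)} (dI/dy)^2 \end{bmatrix}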

A good example for Harris Corner Detection is provided in OpenCV Documentation.

Functions: cvtColor(), cornerHarris(), normalize(), convertScaleAbs(), circle()


Example:

------------------
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>

using namespace cv;
using namespace std;

int thresh = 200;

int main( )
{
    Mat src, gray;
    // Load source image and convert it to gray
    src = imread( "lena.jpg", 1 );
    cvtColor( src, gray, CV_BGR2GRAY );
    Mat dst, dst_norm, dst_norm_scaled;
    dst = Mat::zeros( src.size(), CV_32FC1 );

    // Detecting corners
    cornerHarris( gray, dst, 7, 5, 0.05, BORDER_DEFAULT );

    // Normalizing
    normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
    convertScaleAbs( dst_norm, dst_norm_scaled );

    // Drawing a circle around corners
    for( int j = 0; j < dst_norm.rows; j++ )
    {
        for( int i = 0; i < dst_norm.cols; i++ )
        {
            if( (int) dst_norm.at<float>(j,i) > thresh )
            {
                circle( dst_norm_scaled, Point( i, j ), 5, Scalar(0), 2, 8, 0 );
            }
        }
    }


    // Showing the result
    namedWindow( "corners_window", CV_WINDOW_AUTOSIZE );
    imshow( "corners_window", dst_norm_scaled );

    waitKey(0);
    return(0);
}
------------------
Result:

Canny Edge Detection

void Canny(InputArray image, OutputArray edges, double threshold1, double threshold2, int apertureSize=3, bool L2gradient=false )
Parameters:
  • image – single-channel 8-bit input image.
  • edges – output edge map; it has the same size and type as image .
  • threshold1 – first threshold for the hysteresis procedure.
  • threshold2 – second threshold for the hysteresis procedure. The smaller of the two thresholds is used for edge linking; the larger one is used to find initial segments of strong edges.
  • apertureSize – aperture size for the Sobel() operator.
  • L2gradient – a flag, indicating whether a more accurate L_2 norm =\sqrt{(dI/dx)^2 + (dI/dy)^2} should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default L_1 norm =|dI/dx|+|dI/dy| is enough ( L2gradient=false ).
A good example for Canny Edge Detection is provided in OpenCV Documentation.

Functions: cvtColor(), Canny(), Mat::convertTo(), imshow()

Example:

-------------
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "iostream"

using namespace cv;
using namespace std;

int main( )
{
    Mat src1;
    src1 = imread("lena.jpg", CV_LOAD_IMAGE_COLOR);
    namedWindow( "Original image", CV_WINDOW_AUTOSIZE );
    imshow( "Original image", src1 );

    Mat gray, edge, draw;
    cvtColor(src1, gray, CV_BGR2GRAY);

    Canny( gray, edge, 50, 150, 3);

    edge.convertTo(draw, CV_8U);
    namedWindow("image", CV_WINDOW_AUTOSIZE);
    imshow("image", draw);

    waitKey(0);                                       
    return 0;
} 

-------------

Result:

Sobel Edge Detection

void Sobel(InputArray src, OutputArray dst, int ddepth, int dx, int dy, int ksize=3, double scale=1, double delta=0, int borderType=BORDER_DEFAULT )

Parameters:
  • src – input image.
  • dst – output image of the same size and the same number of channels as src .
  • ddepth
    output image depth; the following combinations of src.depth() and ddepth are supported:
    • src.depth() = CV_8U, ddepth = -1/CV_16S/CV_32F/CV_64F
    • src.depth() = CV_16U/CV_16S, ddepth = -1/CV_32F/CV_64F
    • src.depth() = CV_32F, ddepth = -1/CV_32F/CV_64F
    • src.depth() = CV_64F, ddepth = -1/CV_64F
    when ddepth=-1, the destination image will have the same depth as the source; in the case of 8-bit input images it will result in truncated derivatives.
  • dx – order of the derivative x.
  • dy – order of the derivative y (the common ksize=3 kernels are shown after this list).
  • ksize – size of the extended Sobel kernel; it must be 1, 3, 5, or 7.
  • scale – optional scale factor for the computed derivative values; by default, no scaling is applied (see getDerivKernels() for details).
  • delta – optional delta value that is added to the results prior to storing them in dst.
  • borderType – pixel extrapolation method (see borderInterpolate() for details).
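
For reference, in the most common case (ksize=3 with dx=1, dy=0 or dx=0, dy=1) the image is convolved with one of the 3x3 Sobel kernels:

\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \textrm{ (x-derivative)} \qquad \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \textrm{ (y-derivative)}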

Functions: cvtColor(), Sobel(), minMaxLoc(), Mat::convertTo()

 
This code is from the OpenCV documentation; I have made some changes to it.

Example:

--------------
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "iostream"

using namespace cv;
using namespace std;

int main( )
{
    Mat src1;
    src1 = imread("lena.jpg", CV_LOAD_IMAGE_COLOR);
    namedWindow( "Original image", CV_WINDOW_AUTOSIZE );
    imshow( "Original image", src1 );

    Mat grey;
    cvtColor(src1, grey, CV_BGR2GRAY);

    Mat sobelx;
    Sobel(grey, sobelx, CV_32F, 1, 0);

    double minVal, maxVal;
    minMaxLoc(sobelx, &minVal, &maxVal); //find minimum and maximum intensities
    cout << "minVal : " << minVal << endl << "maxVal : " << maxVal << endl;

    Mat draw;
    // map the range [minVal, maxVal] linearly onto [0, 255] for display
    sobelx.convertTo(draw, CV_8U, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));

    namedWindow("image", CV_WINDOW_AUTOSIZE);
    imshow("image", draw);

    waitKey(0);                                        
    return 0;
} 

--------------
Result:


Threshold operation


double threshold(InputArray src, OutputArray dst, double thresh, double maxval, int type)
Applies a fixed-level threshold to each array element.
Parameters:
  • src – input array (single-channel, 8-bit or 32-bit floating point).
  • dst – output array of the same size and type as src.
  • thresh – threshold value.
  • maxval – maximum value to use with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.
  • type – thresholding type; one of the following:
  • THRESH_BINARY
    \texttt{dst}(x,y) = \begin{cases} \texttt{maxval} & \text{if } \texttt{src}(x,y) > \texttt{thresh} \\ 0 & \text{otherwise} \end{cases}
  • THRESH_BINARY_INV
    \texttt{dst}(x,y) = \begin{cases} 0 & \text{if } \texttt{src}(x,y) > \texttt{thresh} \\ \texttt{maxval} & \text{otherwise} \end{cases}
  • THRESH_TRUNC
    \texttt{dst}(x,y) = \begin{cases} \texttt{thresh} & \text{if } \texttt{src}(x,y) > \texttt{thresh} \\ \texttt{src}(x,y) & \text{otherwise} \end{cases}
  • THRESH_TOZERO
    \texttt{dst}(x,y) = \begin{cases} \texttt{src}(x,y) & \text{if } \texttt{src}(x,y) > \texttt{thresh} \\ 0 & \text{otherwise} \end{cases}
  • THRESH_TOZERO_INV
    \texttt{dst}(x,y) = \begin{cases} 0 & \text{if } \texttt{src}(x,y) > \texttt{thresh} \\ \texttt{src}(x,y) & \text{otherwise} \end{cases}
Find an example in OpenCV documentation.
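
Before the full trackbar demo below, here is a minimal sketch of a single fixed-level binary threshold (the file name is just a placeholder):

Mat gray = imread( "shape.jpg", 0 );                  // load directly as gray-scale
Mat binary;
threshold( gray, binary, 128, 255, THRESH_BINARY );   // pixels > 128 become 255, the rest 0
imshow( "binary", binary );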

Steps:

  1. Load an image
  2. Create a window to display results
  3. Create Trackbar to choose type of Threshold
  4. Call the function "Threshold_Demo" to perform the threshold operation.

Functions: cvtColor(), createTrackbar(), threshold(), imshow()

    Example:

    ------------
    #include "opencv2/imgproc/imgproc.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include <stdlib.h>
    #include <stdio.h>
    
    using namespace cv;
    
    int threshold_value = 0;
    int threshold_type = 3;
    int const max_value = 255;
    int const max_type = 4;
    int const max_BINARY_value = 255;
    
    Mat src, src_gray, dst;
    char* window_name = "Threshold Demo";
    
    char* trackbar_type = "Type: \n 0: Binary \n 1: Binary Inverted \n 2: Truncate \n 3: To Zero \n 4: To Zero Inverted";
    char* trackbar_value = "Value";
    
    void Threshold_Demo( int, void* );
    
    int main( int argc, char** argv )
    {
      /// Load an image
      src = imread( "shape.jpg", 1 );
    
      /// Convert the image to Gray
      cvtColor( src, src_gray, CV_BGR2GRAY );
    
      /// Create a window to display results
      namedWindow( window_name, CV_WINDOW_AUTOSIZE );
    
      /// Create Trackbar to choose type of Threshold
      createTrackbar( trackbar_type,
                      window_name, &threshold_type,
                      max_type, Threshold_Demo );
    
      createTrackbar( trackbar_value,
                      window_name, &threshold_value,
                      max_value, Threshold_Demo );
    
      /// Call the function to initialize
      Threshold_Demo( 0, 0 );
    
      /// Wait until user finishes program
      while(true)
      {
        int c;
        c = waitKey( 20 );
        if( (char)c == 27 )
          { break; }
       }
      return 0;
    }
    
    
    void Threshold_Demo( int, void* )
    {
      /* 0: Binary
         1: Binary Inverted
         2: Threshold Truncated
         3: Threshold to Zero
         4: Threshold to Zero Inverted
       */
    
      threshold( src_gray, dst, threshold_value, max_BINARY_value,threshold_type );
    
      imshow( window_name, dst );
    } 
    ------------

    Result:


    Sources:
    http://docs.opencv.org/doc/tutorials/imgproc/threshold/threshold.html