Executing cv::warpPerspective for a fake deskew on a set of cv::Point
c++
image-processing
opencv

I'm trying to do a perspective transformation on a set of points in order to achieve a deskewing effect:

http://nuigroup.com/?ACT=28&fid=27&aid=1892_H6eNAaign4Mrnn30Au8d

I'm using the image below for tests, and the green rectangle marks the region of interest.

I was wondering if it's possible to achieve the effect I'm hoping for using a simple combination of cv::getPerspectiveTransform and cv::warpPerspective. I'm sharing the source code I've written so far, but it doesn't work. This is the resulting image:

So, there is a vector<cv::Point> that defines the region of interest, but the points are not stored in any particular order inside the vector, and that's something I can't change in the detection procedure. Anyway, later on, the points in the vector are used to define a RotatedRect, which in turn is used to assemble cv::Point2f src_vertices[4];, one of the variables required by cv::getPerspectiveTransform().

My understanding of the vertices and how they are organized might be part of the problem. I also think that using a RotatedRect is not the best way to store the original points of the ROI, since the coordinates will change slightly to fit inside the rotated rectangle, and that's not very cool.

#include <cv.h>
#include <highgui.h>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    cv::Mat src = cv::imread(argv[1], 1);

    // After some magical procedure, these are the detected points that represent 
    // the corners of the paper in the picture: 
    // [408, 69] [72, 2186] [1584, 2426] [1912, 291]
    vector<Point> not_a_rect_shape;
    not_a_rect_shape.push_back(Point(408, 69));
    not_a_rect_shape.push_back(Point(72, 2186));
    not_a_rect_shape.push_back(Point(1584, 2426));
    not_a_rect_shape.push_back(Point(1912, 291));

    // For debugging purposes, draw green lines connecting those points 
    // and save it on disk
    const Point* point = &not_a_rect_shape[0];
    int n = (int)not_a_rect_shape.size();
    Mat draw = src.clone();
    polylines(draw, &point, &n, 1, true, Scalar(0, 255, 0), 3, CV_AA);
    imwrite("draw.jpg", draw);

    // Assemble a rotated rectangle out of that info
    RotatedRect box = minAreaRect(cv::Mat(not_a_rect_shape));
    std::cout << "Rotated box set to (" << box.boundingRect().x << "," << box.boundingRect().y << ") " << box.size.width << "x" << box.size.height << std::endl;

    // Does the order of the points matter? I assume they do NOT.
    // But if it does, is there an easy way to identify and order 
    // them as topLeft, topRight, bottomRight, bottomLeft?
    cv::Point2f src_vertices[4];
    src_vertices[0] = not_a_rect_shape[0];
    src_vertices[1] = not_a_rect_shape[1];
    src_vertices[2] = not_a_rect_shape[2];
    src_vertices[3] = not_a_rect_shape[3];

    Point2f dst_vertices[4];
    dst_vertices[0] = Point(0, 0);
    dst_vertices[1] = Point(0, box.boundingRect().width-1);
    dst_vertices[2] = Point(0, box.boundingRect().height-1);
    dst_vertices[3] = Point(box.boundingRect().width-1, box.boundingRect().height-1);

    Mat warpMatrix = getPerspectiveTransform(src_vertices, dst_vertices);

    cv::Mat rotated;
    warpPerspective(src, rotated, warpMatrix, rotated.size(), INTER_LINEAR, BORDER_CONSTANT);

    imwrite("rotated.jpg", rotated);

    return 0;
}

Can someone help me fix this problem?

Source: Stack Overflow

5 Answers

UPDATE: SOLVED

I almost have this working. It's so close to being usable. It deskews properly, but I seem to have a scale or translation problem. I have set the anchor point to zero and also experimented with changing the scale mode (aspectFill, scale to fit, etc.).

Setting up the deskew points (red makes them hard to see): (image)

Applying the calculated transform: (image)

Now it deskews. This looks pretty good, except that it's not centered on the screen. By adding a pan gesture to the image view, I can drag it around and verify that it lines up: (image)

This is not as simple as translating by -0.5, -0.5, because the original image becomes a polygon that (potentially) stretches out very far, so its bounding rectangle is much bigger than the screen frame.

Does anyone see what I can do to wrap this up? I'd like to commit it and share it here. This is a popular topic, but I haven't found a solution that's as simple as copy/paste.

The full source code is here:

git clone https://github.com/zakkhoyt/Quadrilateral.git

git checkout demo

I'll paste the relevant parts here, though. This first method is mine, and it's where I get the deskew points.

- (IBAction)buttonAction:(id)sender {

    Quadrilateral quadFrom;
    float scale = 1.0;
    quadFrom.topLeft.x = self.topLeftView.center.x / scale;
    quadFrom.topLeft.y = self.topLeftView.center.y / scale;
    quadFrom.topRight.x = self.topRightView.center.x / scale;
    quadFrom.topRight.y = self.topRightView.center.y / scale;
    quadFrom.bottomLeft.x = self.bottomLeftView.center.x / scale;
    quadFrom.bottomLeft.y = self.bottomLeftView.center.y / scale;
    quadFrom.bottomRight.x = self.bottomRightView.center.x / scale;
    quadFrom.bottomRight.y = self.bottomRightView.center.y / scale;

    Quadrilateral quadTo;
    quadTo.topLeft.x = self.view.bounds.origin.x;
    quadTo.topLeft.y = self.view.bounds.origin.y;
    quadTo.topRight.x = self.view.bounds.origin.x + self.view.bounds.size.width;
    quadTo.topRight.y = self.view.bounds.origin.y;
    quadTo.bottomLeft.x = self.view.bounds.origin.x;
    quadTo.bottomLeft.y = self.view.bounds.origin.y + self.view.bounds.size.height;
    quadTo.bottomRight.x = self.view.bounds.origin.x + self.view.bounds.size.width;
    quadTo.bottomRight.y = self.view.bounds.origin.y + self.view.bounds.size.height;

    CATransform3D t = [self transformQuadrilateral:quadFrom toQuadrilateral:quadTo];
//    t = CATransform3DScale(t, 0.5, 0.5, 1.0);
    self.imageView.layer.anchorPoint = CGPointZero;
    [UIView animateWithDuration:1.0 animations:^{
        self.imageView.layer.transform = t;
    }];

}


#pragma mark OpenCV stuff...
-(CATransform3D)transformQuadrilateral:(Quadrilateral)origin toQuadrilateral:(Quadrilateral)destination {

    CvPoint2D32f *cvsrc = [self openCVMatrixWithQuadrilateral:origin];
    CvMat *src_mat = cvCreateMat( 4, 2, CV_32FC1 );
    cvSetData(src_mat, cvsrc, sizeof(CvPoint2D32f));


    CvPoint2D32f *cvdst = [self openCVMatrixWithQuadrilateral:destination];
    CvMat *dst_mat = cvCreateMat( 4, 2, CV_32FC1 );
    cvSetData(dst_mat, cvdst, sizeof(CvPoint2D32f));

    CvMat *H = cvCreateMat(3,3,CV_32FC1);
    cvFindHomography(src_mat, dst_mat, H);
    cvReleaseMat(&src_mat);
    cvReleaseMat(&dst_mat);

    CATransform3D transform = [self transform3DWithCMatrix:H->data.fl];
    cvReleaseMat(&H);

    return transform;
}

- (CvPoint2D32f*)openCVMatrixWithQuadrilateral:(Quadrilateral)origin {

    CvPoint2D32f *cvsrc = (CvPoint2D32f *)malloc(4*sizeof(CvPoint2D32f));
    cvsrc[0].x = origin.topLeft.x;
    cvsrc[0].y = origin.topLeft.y;
    cvsrc[1].x = origin.topRight.x;
    cvsrc[1].y = origin.topRight.y;
    cvsrc[2].x = origin.bottomRight.x;
    cvsrc[2].y = origin.bottomRight.y;
    cvsrc[3].x = origin.bottomLeft.x;
    cvsrc[3].y = origin.bottomLeft.y;

    return cvsrc;
}

-(CATransform3D)transform3DWithCMatrix:(float *)matrix {
    CATransform3D transform = CATransform3DIdentity;

    transform.m11 = matrix[0];
    transform.m21 = matrix[1];
    transform.m41 = matrix[2];

    transform.m12 = matrix[3];
    transform.m22 = matrix[4];
    transform.m42 = matrix[5];

    transform.m14 = matrix[6];
    transform.m24 = matrix[7];
    transform.m44 = matrix[8];

    return transform; 
}

UPDATE: I got mine working properly. The coordinates needed to be relative to the center, not the top left. I applied xOffset and yOffset and voilà. Demo code at the location mentioned above (the "demo" branch).


OpenCV is not really your friend when it comes to quadrangles. A RotatedRect will give you incorrect results. You will also need a perspective projection, not an affine projection like others have mentioned here.

Basically what you must do is:

  • Loop through all the polygon segments and connect those which are almost equal.
  • Sort them so you have the 4 biggest line segments.
  • Intersect those lines and you have the 4 most likely corner points.
  • Transform the matrix over the perspective gathered from the corner points and the aspect ratio of the known object.

I implemented a Quadrangle class which takes care of contour-to-quadrangle conversion and will also transform it over the right perspective.

See a working implementation here: Java OpenCV deskewing a contour


So, the first problem is corner order. The points must be in the same order in both vectors. So, if in the first vector your order is (top-left, bottom-left, bottom-right, top-right), they MUST be in the same order in the other vector.

Second, to have the resulting image contain only the object of interest, you must set its width and height to be the same as the resulting rectangle's width and height. Don't worry: the src and dst images in warpPerspective can be different sizes.

Third, a performance concern. While your method is absolutely accurate, because you are only doing affine transforms (rotate, resize, deskew), mathematically you can use the affine counterparts of those functions. They are much faster:

  • getAffineTransform()

  • warpAffine()

Important note: getAffineTransform needs and expects ONLY 3 points, and the resulting matrix is 2-by-3 instead of 3-by-3.

How to make the resulting image have a different size than the input: instead of

cv::warpPerspective(src, dst, dst.size(), ... );

use

cv::Mat rotated;
cv::Size size(box.boundingRect().width, box.boundingRect().height);
cv::warpPerspective(src, dst, size, ... );

So here you go; your programming assignment is over.

int main()
{
    cv::Mat src = cv::imread("r8fmh.jpg", 1);


    // After some magical procedure, these are the detected points that represent 
    // the corners of the paper in the picture: 
    // [408, 69] [72, 2186] [1584, 2426] [1912, 291]

    vector<Point> not_a_rect_shape;
    not_a_rect_shape.push_back(Point(408, 69));
    not_a_rect_shape.push_back(Point(72, 2186));
    not_a_rect_shape.push_back(Point(1584, 2426));
    not_a_rect_shape.push_back(Point(1912, 291));

    // For debugging purposes, draw green lines connecting those points 
    // and save it on disk
    const Point* point = &not_a_rect_shape[0];
    int n = (int)not_a_rect_shape.size();
    Mat draw = src.clone();
    polylines(draw, &point, &n, 1, true, Scalar(0, 255, 0), 3, CV_AA);
    imwrite("draw.jpg", draw);

    // Assemble a rotated rectangle out of that info
    RotatedRect box = minAreaRect(cv::Mat(not_a_rect_shape));
    std::cout << "Rotated box set to (" << box.boundingRect().x << "," << box.boundingRect().y << ") " << box.size.width << "x" << box.size.height << std::endl;

    Point2f pts[4];

    box.points(pts);

    // Does the order of the points matter? I assume they do NOT.
    // But if it does, is there an easy way to identify and order 
    // them as topLeft, topRight, bottomRight, bottomLeft?

    cv::Point2f src_vertices[3];
    src_vertices[0] = pts[0];
    src_vertices[1] = pts[1];
    src_vertices[2] = pts[3];
    //src_vertices[3] = not_a_rect_shape[3];

    Point2f dst_vertices[3];
    dst_vertices[0] = Point(0, 0);
    dst_vertices[1] = Point(box.boundingRect().width-1, 0); 
    dst_vertices[2] = Point(0, box.boundingRect().height-1);

   /* Mat warpMatrix = getPerspectiveTransform(src_vertices, dst_vertices);

    cv::Mat rotated;
    cv::Size size(box.boundingRect().width, box.boundingRect().height);
    warpPerspective(src, rotated, warpMatrix, size, INTER_LINEAR, BORDER_CONSTANT);*/
    Mat warpAffineMatrix = getAffineTransform(src_vertices, dst_vertices);

    cv::Mat rotated;
    cv::Size size(box.boundingRect().width, box.boundingRect().height);
    warpAffine(src, rotated, warpAffineMatrix, size, INTER_LINEAR, BORDER_CONSTANT);

    imwrite("rotated.jpg", rotated);
}

The problem was the order in which the points were declared inside the vector, and then there was also another issue related to this in the definition of dst_vertices.

The order of the points matters to getPerspectiveTransform(), and they must be specified in the following order:

1st-------2nd
 |         |
 |         |
 |         |
3rd-------4th

Therefore, the points of origin needed to be re-ordered to this:

vector<Point> not_a_rect_shape;
not_a_rect_shape.push_back(Point(408, 69));
not_a_rect_shape.push_back(Point(1912, 291));
not_a_rect_shape.push_back(Point(72, 2186));
not_a_rect_shape.push_back(Point(1584, 2426));

and the destination points to:

Point2f dst_vertices[4];
dst_vertices[0] = Point(0, 0);
dst_vertices[1] = Point(box.boundingRect().width-1, 0); // Bug was: had mistakenly switched these 2 parameters
dst_vertices[2] = Point(0, box.boundingRect().height-1);
dst_vertices[3] = Point(box.boundingRect().width-1, box.boundingRect().height-1);

After this, some cropping needs to be done, because the resulting image is not just the area inside the green rectangle as I thought it would be:

I don't know if this is a bug of OpenCV or if I'm missing something, but the main issue has been solved.

收藏
评论

I ran into the same kind of problem and fixed it using OpenCV's homography extraction functions.

You can see how I did it in this question: Transform a rectangular image into a quadrilateral using a CATransform3D
