This publication presents an efficient method for image stitching and blending to create seamless panoramic views. The method leverages homography transformation, image warping, and interactive point selection to align and blend images effectively. Experimental results demonstrate the robustness of the approach across varied scenes, highlighting potential applications in panoramic photography, virtual reality, and other fields requiring high-quality image composites. Manual point selection allows for precise alignment, making the process intuitive and customizable.
Image stitching techniques generally fall into two categories: feature-based methods and direct methods. Feature-based methods detect and match key points between images, whereas direct methods focus on optimizing pixel intensities for alignment.
• SIFT (Scale-Invariant Feature Transform): Detects and describes local features in images.
• SURF (Speeded-Up Robust Features): A faster alternative to SIFT.
• ORB (Oriented FAST and Rotated BRIEF): A real-time, efficient feature detection algorithm (see the matching sketch after these lists).
• Lucas-Kanade Method: A gradient-based method that minimizes intensity differences.
• Optical Flow: Estimates motion between two images based on pixel intensities.
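To make the feature-based category concrete, the sketch below matches ORB keypoints between two images with OpenCV. This is illustrative only, not the proposed method (which uses manual point selection); `img1` and `img2` are assumed to be already-loaded grayscale images.

```python
# Illustrative ORB matching sketch (not the proposed manual-selection method);
# assumes img1 and img2 are already-loaded grayscale images
import cv2

orb = cv2.ORB_create(nfeatures=1000)          # detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints/descriptors, image 1
kp2, des2 = orb.detectAndCompute(img2, None)  # keypoints/descriptors, image 2

# Hamming distance suits ORB's binary descriptors; cross-checking prunes mismatches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```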
Fig. 1: Results of Existing Methods (View 1)
Fig. 2: Results of Existing Methods (View 2)
• Feature-Based Methods: Can be computationally intensive and prone to mismatches, especially in images with low texture.
• Direct Methods: May suffer from convergence issues and require good initial alignment.
Traditional image stitching methods often face challenges such as misalignment, visible seams, and computational inefficiency. These challenges can lead to artifacts and reduced quality in the final composite image, limiting the usability of these methods in real-world applications.
To obtain accurate homography, selecting corresponding points in both images is crucial. This paper utilizes an interactive point selection method where users manually select points that correspond to the same physical location in both images. This allows for precise alignment and accommodates varying image conditions.
```python
# Python code snippet for point selection
import cv2
import numpy as np

points = []  # clicked coordinates accumulate here

def select_point(event, x, y, flags, param):
    # Record each left-click and mark it on the displayed image
    if event == cv2.EVENT_LBUTTONDOWN:
        points.append((x, y))
        cv2.circle(image, (x, y), 5, (0, 0, 255), -1)
        cv2.imshow('Image', image)
```
The interactive selection is done through mouse callbacks, allowing users to visually inspect and choose points, which increases the accuracy of the transformation.
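A minimal sketch of wiring the callback to a display window follows; `image` is assumed to be a loaded image, and the window name matches the snippet above.

```python
# Wiring sketch: register the callback and collect clicks until a key is pressed
cv2.namedWindow('Image')
cv2.setMouseCallback('Image', select_point)
cv2.imshow('Image', image)
cv2.waitKey(0)            # click the correspondence points, then press any key
cv2.destroyAllWindows()
```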
Homography is a transformation that maps points from one plane to another. It is calculated using the selected points and the RANSAC (Random Sample Consensus) algorithm to handle outliers, ensuring robust alignment even with a few incorrect points.
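For reference, in homogeneous coordinates a homography is a 3×3 matrix $H$, defined up to scale, that maps a point $(x, y)$ in one image to $(x', y')$ in the other:

$$
\begin{bmatrix} u \\ v \\ w \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
\qquad
H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix},
\qquad
(x', y') = \left(\frac{u}{w}, \frac{v}{w}\right)
$$

Since $H$ has eight degrees of freedom, four correspondences determine it in principle; supplying more points and filtering with RANSAC makes the estimate robust to mis-clicked points.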
```python
# Python code snippet for homography
# findHomography expects NumPy point arrays; status flags the RANSAC inliers
H, status = cv2.findHomography(np.float32(points_img2), np.float32(points_img1),
                               method=cv2.RANSAC)
```
Using the computed homography matrix, the second image is transformed to align with the first image. This involves applying a perspective transformation to map the second image’s points to the corresponding points in the first image.
```python
# Python code snippet for image warping
img2_warped = cv2.warpPerspective(img2, H, (width, height))  # output canvas size
```
Blending is performed using a weighted average approach to ensure a smooth transition between the images. This helps to minimize visible seams and artifacts, resulting in a visually appealing panoramic image.
```python
# Python code snippet for image blending
blended = cv2.addWeighted(img1, 0.5, img2_warped, 0.5, 0)  # equal-weight average
```
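Note that a uniform 50/50 average darkens regions covered by only one image. A possible refinement, sketched below under the assumption that both images have been placed on same-size canvases (named `img1_canvas` and `img2_warped` here), averages only inside the overlap:

```python
# Optional sketch (assumption, not the original snippet): blend only where both
# canvases have content; elsewhere keep whichever image is present
overlap = (img1_canvas.sum(axis=2) > 0) & (img2_warped.sum(axis=2) > 0)
average = cv2.addWeighted(img1_canvas, 0.5, img2_warped, 0.5, 0)
blended = np.where(overlap[..., None], average, np.maximum(img1_canvas, img2_warped))
```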
The size of the output canvas is calculated based on the transformed corner points of both images. This ensures that the entire content of both images is preserved in the final result, without cropping important areas.
```python
# Python code snippet for Output Canvas
# Corner points of both images; img2's corners are projected through H
corners_img1 = np.array([[0, 0], [width, 0], [width, height], [0, height]],
                        dtype=float)
corners_img2 = np.array([[0, 0], [img2.shape[1], 0],
                         [img2.shape[1], img2.shape[0]], [0, img2.shape[0]]],
                        dtype=float)
corners_img2_warped = cv2.perspectiveTransform(
    corners_img2.reshape(-1, 1, 2), H).reshape(-1, 2)
```
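The original snippet stops at the projected corners; a plausible continuation (an assumption, since the canvas computation itself is not shown) takes the bounding box of all corners and shifts both images into positive coordinates before warping:

```python
# Assumed continuation: canvas = bounding box of every corner from both images
all_corners = np.vstack([corners_img1, corners_img2_warped])
x_min, y_min = np.floor(all_corners.min(axis=0)).astype(int)
x_max, y_max = np.ceil(all_corners.max(axis=0)).astype(int)

# Translation that moves the bounding box's top-left corner to the origin,
# so warped content at negative coordinates is not clipped
T = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=float)
canvas_size = (int(x_max - x_min), int(y_max - y_min))   # (width, height)
img2_on_canvas = cv2.warpPerspective(img2, T @ H, canvas_size)
```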
Interactive cropping allows users to select four points to define the region of interest. This region is then extracted from the blended image. The interactive selection process ensures that the user can tailor the final output to specific areas of interest, enhancing the usability of the stitched image.
```python
# Python code snippet for interactive cropping with user-defined points
points_crop = []  # the four user-selected crop corners

def SelectCropPoints(event, x, y, flags, param):
    # Record each click; once four corners exist, outline the crop region
    if event == cv2.EVENT_LBUTTONDOWN:
        points_crop.append((x, y))
        cv2.circle(combined_image, (x, y), 5, (0, 255, 0), -1)
        if len(points_crop) == 4:
            # polylines requires 32-bit integer points
            cv2.polylines(combined_image, [np.array(points_crop, dtype=np.int32)],
                          isClosed=True, color=(0, 255, 0), thickness=2)
```
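Applying the selected region is not shown above; one simple completion (an assumption) crops the blended image to the axis-aligned bounding box of the four points:

```python
# Assumed completion: crop to the bounding rectangle of the selected corners
x, y, w, h = cv2.boundingRect(np.array(points_crop, dtype=np.int32))
cropped = blended[y:y + h, x:x + w]
```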
Fig. 3: Manual Point Selection
Fig. 4: Image Warping and Blending
Fig. 5: Crop Point Selection
Fig. 6: Cropped Image
Fig. 7: Manually Stitched Image