The original image: 128x128 pixels; the white circle has a diameter of 8 pixels.
//Applying fft2() on the image and computing the intensity values using abs().
I = imread('act6.jpg');
Igray = im2gray(I);
FIgray = fft2(Igray);
imshow(abs(FIgray),[]);
//Applying fftshift() on the image.
imshow(fftshift(abs(FIgray)), []);
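**For reference, a 128x128 test image with an 8-pixel-diameter white circle like this one could also be generated directly in Scilab instead of being drawn externally. This is only a minimal sketch (the variable names are illustrative, not from the original code):
// Sketch: build a 128x128 array that is 1 inside a circle of radius 4 pixels.
nx = 128; ny = 128;
[X, Y] = meshgrid(1:nx, 1:ny);
r = sqrt((X - nx/2).^2 + (Y - ny/2).^2);
circ = zeros(ny, nx);
circ(find(r <= 4)) = 1;                    // diameter of 8 pixels
imshow(fftshift(abs(fft2(circ))), []);     // should reproduce the shifted FT shown above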
**The FT observed is consistent with the analytical FT of a circle.
//Applying fft2() again on the fft2 output FIgray.
imshow(abs(fft2(FIgray)),[]);
**Applying the 2D FFT on the image twice outputs what looks like the same image.
**At first I tried the exercise with a larger white circle (diameter: 64 pixels), but after applying fft2 on it the first time, it gave an intensity image that was barely distinguishable.
**Applying the same process to a 128x128 pixel image of a small letter "A":
**Even though the twice-transformed circle looked like the original, this last image shows that applying fft2 twice actually outputs the original image inverted (flipped about its center).
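**A quick numerical check of this (a sketch with a hypothetical 3x3 test matrix, not part of the original code) confirms the flipping:
A = [1 2 3; 4 5 6; 7 8 9];
B = real(fft2(fft2(A))) / (3*3);   // forward transform applied twice, rescaled
// B keeps A(1,1) in place and reverses the order of the remaining rows and
// columns, i.e. the array comes back inverted through the origin, not copied.
This is why the circle, which is symmetric about its center, looks unchanged, while images of letters come out inverted.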
B. Simulation of an imaging device
Object image. "Aperture" image.
//Both images are opened in Scilab in grayscale; the 2D FFT of the VIP image and the fftshift of the aperture image are taken. The inverse transform of their element-wise product then gives the convolved image.
Code:
r = imread('C:\MyDocuments\AP186\act6circle.jpg');   // "aperture" (circle) image
a = imread('C:\MyDocuments\AP186\act6VIP.jpg');      // object ("VIP") image
rgray = im2gray(r);
agray = im2gray(a);
Fr = fftshift(rgray);   // shift the aperture itself so it lines up with the unshifted FFT
Fa = fft2(agray);       // FFT of the object
FRA = Fr.*Fa;           // element-wise product in the frequency domain
IRA = fft2(FRA);        // transform back (a second forward FFT, hence the flipped result)
FImage = abs(IRA);
imshow(FImage, []);
"Imaged" VIP:**As expected from the results of the last exercise, the resulting image shows the "VIP" flipped vertically and clearly distinguishable, albeit a bit blurry and more gray than white.
**The 8-pixel-diameter white circle image from the first part was used as the "aperture" image, and it yielded the following convolved image, where there is no obvious trace at all of the original object.
**For comparison, a 128-pixel-diameter white circle was used as the "aperture" image, and it yielded the following image, where the "VIP" is clearer and the contrast is even better than when the 64-pixel-diameter white circle was used.
The smallest circle yielded a convolved image that showed more circle than "VIP"; the medium circle yielded an image where the "VIP" could be read but the contrast needed improvement; and the largest aperture yielded the "best" image, where the "VIP" was not only clear but the contrast was obviously better, closer to the original/"real" object. These results coincide with the idea of the circles as the aperture of a digital camera. As the manual says, "A finite lens radius means the lens can only gather a limited number of rays reflected off an object therefore reconstruction of the object is never perfect." It follows that a smaller aperture/circle lets through less "information" than would be transmitted through a larger aperture/circle.
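**To reproduce the comparison for other aperture sizes, the steps above can be wrapped in a small helper function. This is a hypothetical sketch (simulate_aperture and radius_px are illustrative names, not part of the original code), assuming a centered circular aperture is generated on a grid the same size as the object image:
// Hypothetical helper: repeat the simulation for a synthetic circular
// aperture of a given radius (in pixels), centered in the image.
function FImage = simulate_aperture(agray, radius_px)
    [ny, nx] = size(agray);
    [X, Y] = meshgrid(1:nx, 1:ny);
    aperture = zeros(ny, nx);
    aperture(find((X - nx/2).^2 + (Y - ny/2).^2 <= radius_px^2)) = 1;
    FRA = fftshift(aperture) .* fft2(agray);   // aperture acts as the transfer function
    FImage = abs(fft2(FRA));                   // second forward FFT, as in the code above
endfunction
Calling it with radius_px equal to 4, 32, and 64 would correspond to the 8-, 64-, and 128-pixel-diameter circles compared above.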
C. Template Matching using correlation
The original image. The template image.
//Both images were opened in grayscale; the FT of the text image was multiplied element by element with the conjugate of the FT of the "A" template (conj() was used to get the complex conjugate), and the inverse FFT of the product was then computed.
Code:
r = imread('act6spain.jpg');   // text image
a = imread('act6A.jpg');       // template image ("A")
rgray = im2gray(r);
agray = im2gray(a);
Fr = fft2(agray);              // FT of the template
Fa = fft2(rgray);              // FT of the text image
FRA = Fa.*conj(Fr);            // correlation in the frequency domain
IRA = fft2(FRA);               // transform back (second forward FFT)
shift = fftshift(IRA);
FImage = abs(shift);
imshow(FImage, []);
Output:
**The distinct small white dots in the image are the peaks of the final inverse FT; they indicate where the A's would be if the original image were flipped: "the rAin in spAin stAys mAinly in the plAin". The template matching method "finds" and indicates the presence of the template image in the original image.
**The same method was used with the words "IN" and "THE", which yielded the following images, with white dots indicating where the words are found in the sentence.
the raIN IN spaIN stays maINly IN the plaIN
THE rain in spain stays mainly in THE plain
**Alternatively, if you look for something that isn't there, like "NOM", you get a blank image.
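**The positions of the matches can also be read off programmatically by thresholding the correlation peaks. A minimal sketch (the 0.9 factor is an arbitrary illustrative choice, not from the original code):
peaks = FImage > 0.9 * max(FImage(:));   // keep only the brightest peaks
[row, col] = find(peaks);                // pixel coordinates of the detected matches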
D. Edge detection using the convolution integral
//A 3x3 edge pattern whose elements sum to zero is defined in Scilab. This is then convolved with the VIP image of part B using imcorrcoef().
Code:
im = imread('act6VIP.jpg');
img = im2gray(im);
p = [-1 -1 -1; 2 2 2; -1 -1 -1];   // horizontal edge pattern; elements sum to zero
c = imcorrcoef(img, p);
imshow(c);
This results in the image:
**The horizontal orientation of the pattern is seen in the resultant image, as well as the edges of the VIP image.
**Other directional patterns, like the vertical pattern [-1 2 -1; -1 2 -1; -1 2 -1], yield much the same result but with vertical lines. A spot pattern [-1 -1 -1; -1 8 -1; -1 -1 -1] yields the image that best portrays the original VIP image and its edges; because the spots are not confined to one direction, the formation of the image edges is easier.
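**A short loop makes it easy to compare the three zero-sum patterns on the same image. A sketch reusing img from the code above (the variable names are illustrative):
hpat = [-1 -1 -1; 2 2 2; -1 -1 -1];     // horizontal
vpat = [-1 2 -1; -1 2 -1; -1 2 -1];     // vertical
spat = [-1 -1 -1; -1 8 -1; -1 -1 -1];   // spot
patterns = list(hpat, vpat, spat);
for k = 1:3
    c = imcorrcoef(img, patterns(k));   // same call as above, different pattern
    imshow(c);
end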
**I give myself a score of 10 for this activity because I was able to complete all the requirements by myself, with no difficulties in writing the code or obtaining the desired images, within the time allotted for the subject.