Revealing Stereo And 3D

Autofocus
I don't know whether this technique is a failure or a success, but it is nevertheless worth a mention. It will be updated as and when I discover new things, so even if the present technique is a failure, there may be a breakthrough soon, so keep reading!

This is an autofocus technique that uses neither passive methods, such as contrast or phase-difference detection, nor active methods, such as a laser or an IR beam, to measure the distance of the object. Instead, it modulates/modifies the light ray itself in such a way that the modification can be detected at the sensor and used to focus accurately. What kind of modification will be used is not completely known at the time of writing this document, but a few possible ways are described.

Assumption 1: Light is reflected spherically in all directions from a point.

            Put a dot on the wall near you and try looking at it from various angles. You can now divide the space around you into two categories: the space from which you can see the dot, and the space from which you cannot. If you try looking at various points on different objects in a similar way, you will soon find that light is reflected from a point spherically in all directions, i.e. given a point source in space, you will be able to see that source from any point around it. Focus and out-of-focus are actually a result of this, as is explained in detail in [1].

Assumption 2: At a particular point light converges from all the points that can be seen from it.

            Stand in front of a mirror and select a reference point on it; do not mark anything, just remember it. Move around the mirror such that the reference point is visible from all these positions. The image you see through this reference point (an image in a mirror is always formed behind it) will be different from each position. By applying a simple rule, one can find out what image will be seen through this reference point from any particular place. Here's how it is done. Draw an imaginary normal at the reference point. Draw a line to it from the position from which you are viewing and note the angle this line makes with the normal. Now extend a line from the reference point at the same angle on the symmetrically opposite side of the normal. Whichever object this line meets is the image you see through the reference point. This means that:

  1. Through a single point on the mirror, people standing at different places can see different images.
  2. The image of a particular point can be seen through any point on the mirror by appropriately adjusting our viewing position.

Combining these two statements, one can conclude that Assumption 2 above is valid. A mirror was taken just as an example to keep the explanation simple; the assumption holds good for any surface.
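
To make the rule concrete, here is a small Python sketch. It is only an illustration under my own naming (seen_through_mirror is nothing standard): it reflects the line of sight about the normal at the reference point, which is exactly the symmetric-angle construction described above.

import numpy as np

def seen_through_mirror(viewer, ref_point, normal):
    """Return the direction in which to look, from ref_point on the
    mirror, for the object whose image appears there to this viewer."""
    normal = normal / np.linalg.norm(normal)
    # Line of sight from the viewer to the reference point.
    d = ref_point - viewer
    d = d / np.linalg.norm(d)
    # Mirror reflection: flip the component of d along the normal.
    return d - 2.0 * np.dot(d, normal) * normal

# Example: mirror in the plane x = 0, normal pointing toward the viewer.
viewer = np.array([2.0, 1.0])
ref    = np.array([0.0, 0.0])
print(seen_through_mirror(viewer, ref, np.array([1.0, 0.0])))
# Moving the viewer changes the reflected direction, hence the image
# seen through the same reference point changes: statement 1 above.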

Modifications required in the lens:

            Suppose I draw a circle around the axis of the lens (the line passing through the center of the lens; light travelling along the axis does not bend), such that the circle remains visible even at minimum aperture (if the lens has a variable aperture, as in cameras and in our eye). Let the light rays passing through this circle be modified in some way (cosine modulated or polarized, say), so that they can be detected by some sensor. Consider a point ‘P’ lying on the axis of the lens ‘L’ at some distance d from it. ‘C’ is the circumference through which light undergoes modification.

lens01.jpg

Since light is modified only at the circumference ‘C’ and only the modified light is detected, we can neglect the lens as a whole and consider only this circumference from now on. Note that this restriction does not filter anything out of the surrounding light; it only reduces its intensity. Since at the sensor we test not for intensity but only for the modification, and in the present case the modification exists for all rays passing through ‘C’, there is as yet nothing we can do to detect a particular point or to focus. So here comes the next assumption.

Assumption 3: Only the rays that cut the axis or lie on the axis are modified.

            What does this mean?

lens02.jpg

        Consider five particles A, B, C, D and E. As explained earlier, light is reflected in all directions from these particles; to keep the explanation simple, only one ray per particle will be considered here. The theory works even without this simplification, as will be seen later. B and E are on the axis, and the ray from D cuts the axis when extended, as shown by the dotted line. A is on the right side of the lens and C on the left, and their rays intersect the circumference on the right and left sides respectively. Note that the ray from C does not cut the axis, even though it appears to in the diagram. According to the third assumption, only the rays from points B, D and E undergo modification; the rest remain unchanged.
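
As a minimal numerical check of this classification (a Python sketch under my own conventions: the lens axis is taken as the z-axis and the lens plane as z = 0), a ray cuts the axis exactly when the line through the particle and a point on ‘C’ is coplanar with, and not parallel to, the axis:

import numpy as np

def ray_cuts_axis(particle, circ_point, tol=1e-9):
    """True if the line through `particle` and a point on the modulating
    circumference C meets the lens axis (the z-axis). Two lines meet
    iff they are coplanar and not parallel; coplanarity is a vanishing
    scalar triple product. (The degenerate ray lying along the axis
    itself is not handled here.)"""
    d_ray  = np.asarray(circ_point, float) - np.asarray(particle, float)
    d_axis = np.array([0.0, 0.0, 1.0])        # direction of the axis
    triple = np.dot(np.asarray(circ_point, float),
                    np.cross(d_ray, d_axis))  # coplanarity test
    parallel = np.linalg.norm(np.cross(d_ray, d_axis)) < tol
    return abs(triple) < tol and not parallel

R = 1.0                                       # radius of C, lens plane z = 0
print(ray_cuts_axis([0.0, 0.0, 5.0], [R, 0.0, 0.0]))   # on the axis, like B: True
print(ray_cuts_axis([0.5, 0.0, 5.0], [-R, 0.0, 0.0]))  # cuts the axis, like D: True
print(ray_cuts_axis([0.7, 0.4, 5.0], [R, 0.0, 0.0]))   # misses it, like A: False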

            Let’s now drop the one-ray-per-particle simplification made at the start of the previous paragraph. Even though particle E reflects light in all directions, the rays that interest us are those that pass through the circumference. Let’s consider point E alone.

lens03.jpg

A sensor ‘S’ is placed on the other side of the lens, positioned such that particle E does not form a focused image on it. What we therefore get is a circle (the rays are continuous around the circumference; only a few are shown here) whose size depends on the position of the sensor. If the sensor is placed at the point where the rays from E converge completely, we get a point image (a circle of radius zero). If it is placed either beyond or ahead of this point, we get a circle whose radius is proportional to the offset distance. Here the circle is just the circumference, not a filled disc. Now you know why the entire lens was not taken into account: since we detect only whether the rays are modified or not, if the entire lens were used, a circle inside a circle would not be detectable, and that case will arise later. To focus on a particle lying on the axis, we just need to detect a circle and converge it to a point. As discussed in [1], we might not get a complete circle every time, so a circular arc is what we need to look for.
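
For an ideal thin lens this circle is easy to quantify. The following Python sketch (the thin-lens model and the names f, d, s, c are my assumptions, not part of any proposed hardware) gives the radius of the modulated ring on the sensor:

def ring_radius(f, d, s, c):
    """Radius of the modulated ring on the sensor for an axial point.

    f : focal length of the lens
    d : distance of the axial particle E from the lens
    s : lens-to-sensor distance
    c : radius of the modulating circumference C
    All distances in the same unit; thin-lens model assumed."""
    v = 1.0 / (1.0 / f - 1.0 / d)        # image distance, 1/v = 1/f - 1/d
    return c * abs(s - v) / v            # similar triangles about v

f, c = 50.0, 10.0                        # e.g. a 50 mm lens, 10 mm circle
d = 1000.0                               # particle 1 m away
v = 1.0 / (1.0 / f - 1.0 / d)            # ~52.63 mm
for s in (v - 5, v, v + 5):
    print(round(ring_radius(f, d, s, c), 3))
# 0 exactly at s = v (focused: the ring collapses to a point), and the
# radius grows in proportion to the offset |s - v| on either side.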

Now let’s consider some other particle that is not on the axis; let it be particle A. We can analyze this in two cases.

Case 1: The particle lies inside the cylindrical pipe ‘P’ that passes through the circumference on the lens. In this case only two rays from the particle can pass through the axis, giving the illusion that a ray is actually coming from Af, the far point on the axis, or from An, the near point. This forms two image points on the sensor, since the other rays are not modified. Since there are innumerably many such particles, a random pattern is formed on the sensor. ‘Af’ and ‘An’ are now virtual particles.

lens04.jpg

Case 2: The particle lies outside the cylindrical pipe that passes through the circumference on the lens. As can now be seen, there is only one ray per particle that can pass through the axis of the lens. In either case the rays fail to form a circular patch, and only point images are formed.

lens05.jpg
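
Both cases can be checked with a little coordinate geometry. The Python sketch below works in the meridional plane (my simplification): the particle sits at distance z_p from the lens with radial offset r, and the circumference ‘C’ appears as the two rim points at heights +R and -R:

def axis_crossings(z_p, r, R):
    """Where rays from a particle at distance z_p and radial offset r
    cross the lens axis, for a modulating circumference of radius R.
    Returns the crossing distances in front of the lens (An, Af)."""
    crossings = []
    # Ray through the lower rim (0, -R): always crosses between the
    # lens and the particle, at z = z_p*R/(R + r)  -> the near point An.
    crossings.append(z_p * R / (R + r))
    # Ray through the upper rim (0, +R): crosses beyond the particle,
    # at z = z_p*R/(R - r), but only if the particle lies inside the
    # cylindrical pipe (r < R)                     -> the far point Af.
    if r < R:
        crossings.append(z_p * R / (R - r))
    return crossings

R = 10.0
print(axis_crossings(1000.0, 4.0, R))   # inside the pipe: two rays, An ~714.3 and Af ~1666.7
print(axis_crossings(1000.0, 25.0, R))  # outside the pipe: a single ray, An ~285.7
print(axis_crossings(1000.0, 0.0, R))   # on the axis: An = Af = z_p, a whole circle of rays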

From the above deductions it can clearly be seen that every particle in the surroundings has at least one ray that passes through the axis of the lens. For a particle lying inside the cylindrical pipe ‘P’ there are two such rays, and for a particle lying on the axis there are infinitely many, in the form of a circle. This shows that, at the lens end, it seems as though there are virtual particles continuously along the axis of the lens. This is as shown below.

lens06.jpg

As can be seen from the above diagram, light rays from real particles B to J create a virtual particle at ‘a’. Particles B to J are placed arbitrarily; the actual particles can lie anywhere along the continuous rays shown. In the surrounding space we can always find rays that form virtual particles at each and every place on the axis of the lens. Because of such virtual particles the sensor becomes completely populated, and detecting circles becomes impossible. So what is the way out? One option is to scrap the technique; the other is to sample the axis all along its length, starting from the point of minimum focus, and find the places where real particles are present. To achieve this it is important to distinguish a real particle from a virtual one. For a real particle the reflected or scattered rays originate at the particle itself, while for a virtual particle they originate at other particles. But how do I detect this?

            If detecting this is the only way out, it seems to me like an impossible feat. But all of a sudden, what if I assume that the virtual particles cannot be created because waves are particles and particles are waves, quantum theory, bla, bla, bla… If there are no virtual particles on the axis, we have finally done it! Thanks to quantum theory, which works on probabilities rather than exactness. Hold on, I still see a problem! While distinguishing between real and virtual particles I have committed a blunder. As you can see in the diagram below, for the far virtual particle ‘Af’, the light ray starts at the particle itself!

lens07.jpg

        Back to square one! BUT, if we assume that the particle we are trying to focus on is the first point on the axis, and all the far virtual particles lie behind it, there seems to be no problem at this point in time.

            There is one more thing worth discussing. It’s not a problem, but it is worth a mention. Suppose we are looking at a line along the axis, and the aperture is big enough to collect light rays from at least some initial length of the line; we would then get concentric circles on the sensor. The point we have to focus on corresponds to the innermost circle, that is, the first point on the line from the lens side. See the figure below.

lens08.jpg

        In reality, points ‘A’ and ‘B’ are not discrete but continuous along the length of the line, so only the inner circle is detectable on the sensor; the pattern is like a large plate with a circular hole at the center. What we need to focus on is point ‘A’, and this is solved by converging the inner circle to a point at the center. But practically, both the near and the far virtual points are a reality, and so the assumption made in the name of quantum physics is definitely wrong. There is nothing related to quantum physics here; I just picked that name because it sounds fancy to me. But quantum physics is definitely going to make an entry at a later point in time. So once again we are back to the same problem: how do we distinguish real points from virtual ones? Let me think for a while…
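
A quick numeric illustration of the concentric circles (the same thin-lens helper as in the earlier sketch; the specific numbers, and the assumption that the sensor sits beyond every image plane as the figure suggests, are mine):

def ring_radius(f, d, s, c):             # thin-lens helper, as earlier
    v = 1.0 / (1.0 / f - 1.0 / d)        # image distance of the point
    return c * abs(s - v) / v            # radius of its modulated ring

f, c = 50.0, 10.0
s = 65.0                                 # sensor beyond every image plane
for d in (300.0, 400.0, 600.0, 1000.0):  # samples along the line, nearest first
    print(d, round(ring_radius(f, d, s, c), 3))
# The nearest point (d = 300, like 'A') gives the smallest ring (0.833
# here), so it is found by converging the innermost circle to a point.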

            All of a sudden, I get this idea! What if we had two such light-modifying circles on the lens? I know that constructing even one such circle has itself not been verified yet, but let’s assume it can be done, because for something to become practical it must first exist in theory. Further, let there be a mechanism to switch between the two. Now, the position of a virtual particle changes when we switch from one light-modifying circle to the other, while a real particle stays undisturbed, as shown by the two diagrams below.

lens09.jpg

lens10.jpg

The light rays passing through the smaller aperture are not shown in either figure, and the image they form on the sensor is likewise omitted. The basic concept is shown in the two diagrams below.

lens11.jpg

lens12.jpg
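
Using the same meridional-plane model as before, the switching test can be sketched numerically. The circle radii R1 and R2 and all the numbers here are illustrative assumptions of mine:

def near_virtual_point(z_p, r, R):
    """Axis crossing An of the rim ray from a particle at distance z_p
    and radial offset r, for a modulating circle of radius R."""
    return z_p * R / (R + r)

R1, R2 = 10.0, 6.0                        # the two switchable circles
for z_p, r in ((1000.0, 0.0), (1000.0, 4.0)):
    a1 = near_virtual_point(z_p, r, R1)
    a2 = near_virtual_point(z_p, r, R2)
    print(round(a1, 1), round(a2, 1))
# 1000.0 1000.0 -> a real particle on the axis is undisturbed
# 714.3  600.0  -> the virtual particle jumps when the circles switch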

        So finally, after all this research, explanation, experimentation and assumption-making, we are back to square one! In the above case we knew where the real particles were; in a practical system, that is exactly what we are supposed to find. Therefore there is no way to find out whether a particle is real or virtual with two, or even more, light-modifying circles: if the location of the particle is not known, we have no information with which to distinguish the real particles from the virtual ones. The virtual particles behave completely like the real ones, except for the fact that they do not have a ray passing through the optical axis of the lens! So how can this be made use of to distinguish between the two?

            I think I have got it! Our aim here is to collect the single ray in space that is unique to the particle under consideration. So instead of a circle through which light gets modified, let’s now have a point: a point that allows only a single ray to pass through it, or that modifies only the single ray arriving at it at a particular angle. Here’s what I mean.

lens13.jpg

sc – center of the sensor.

lc – center of the lens.

ma – distance between the modulating point and the center of the lens.

sl – distance between the center of the lens and the center of the sensor.

theta – angle between the optical axis and the line joining sc to the modulating point.

Knowing the values defined above, we can always tilt the entire lens-and-sensor assembly so that the ray that currently passes along the optical axis passes through the modulating point instead. In the diagram, instead of tilting the lens-and-sensor assembly, let’s relatively tilt the ray itself. If theta is the angle …
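
From the definitions above, the tilt in question follows from elementary trigonometry, assuming (my reading of the diagram) that the modulating point lies in the lens plane at radial distance ma and the sensor center is on the axis at distance sl:

import math

# ma, sl as defined above; the numeric values are illustrative only.
ma = 5.0                                  # modulating point to lens center
sl = 52.6                                 # lens center to sensor center
theta = math.atan2(ma, sl)                # angle of the sc-to-point line
print(round(math.degrees(theta), 2))      # ~5.43 degrees
# Tilting the whole lens-and-sensor assembly by theta would make the
# ray that previously ran along the optical axis pass through the
# modulating point instead, as described above.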
 
    Back to square one for the third time. There is a saying: “The more times you meet failure, the closer you are to success.” Am I really getting closer to success? I will first explain why the above concept also fails, even though it feels like heaven at the start.

lens14.jpg

        When the lens is focused at point ‘A’, the image of point ‘A’ is formed at the center of the sensor (provided ‘A’ is on the optical axis). Suppose ‘A’ is not a real particle, and ‘B’ is such that a ray from ‘B’ passes through ‘A’; then we can imagine a virtual particle at ‘A’. If this is true, we should get the image of particle ‘B’ at the center of the sensor, and it actually WORKS! The reason is that if the lens is focused at ‘B’, the image is formed at the point shown, i.e. below the center of the sensor, at the same vertical distance ‘d’ at which particle ‘B’ lies below the optical axis. Observe that as the focus changes from ‘A’ to ‘B’, the image formed at the center of the sensor changes from ‘A’ to ‘B’; in fact, it moves continuously from ‘A’ to ‘B’ along the ray connecting the two points. This means that it is the ray that is important, and not the particle. If the lens is focused at ‘A’, there may not be any real particle at ‘A’ to form a spot at the center of the sensor; a particle may lie anywhere on the line joining ‘A’ and ‘B’, extended infinitely on both sides. So with a single ray it is impossible to find the exact position of the particle. And, as seen earlier, even if many rays from a single point are considered, it is impossible to find out whether the particle present there is real or virtual.
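
This ambiguity can be reproduced with a one-ray thin-lens trace. In the Python sketch below (an idealized model with my own names), a ray from a real particle at ‘A’ and a ray from a particle ‘B’ chosen so that it passes through ‘A’ cross the axis at exactly the same point behind the lens, so the sensor cannot tell them apart:

def axis_crossing_after_lens(z_src, y_src, h, f):
    """Trace one ray from a source at distance z_src in front of a thin
    lens (focal length f) and height y_src, hitting the lens at height
    h; return where the refracted ray crosses the axis behind it."""
    u = (h - y_src) / z_src               # ray slope before the lens
    u_after = u - h / f                   # thin-lens refraction
    return -h / u_after                   # solve h + u_after*z = 0

f, z_A, h = 50.0, 300.0, 8.0
# Ray 1: emitted by a real particle sitting at A on the axis.
print(round(axis_crossing_after_lens(z_A, 0.0, h, f), 2))      # 60.0
# Ray 2: emitted by a farther particle B placed on the same line,
# so that its ray passes through A (a virtual particle at A).
z_B = 500.0
y_B = -h * (z_B - z_A) / z_A              # keeps B on the ray through A
print(round(axis_crossing_after_lens(z_B, y_B, h, f), 2))      # 60.0
# Both rays cross the axis at the image distance of A; the spot at the
# sensor center identifies the ray, not the particle that emitted it.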
 
 