Monday, May 28, 2012

Week 9 - Calculating the Intersection of the Plane with the Normal Ray.

Implemented the calculations to find where on the plane the normal ray intersects it. Now I need to test the feed against these calculations and make adjustments based on the physical attributes of the device and the tracked target. Hoping to get that part done before Wednesday.
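For reference, a minimal sketch of the kind of calculation involved, assuming the pose estimator gives a rotation matrix and a translation for the tracked object and that the capture surface is described by a point on it plus a normal. The names, axis convention, and numbers below are my own illustration, not the project's actual code:

import numpy as np

def intersect_normal_ray_with_plane(R, t, plane_point, plane_normal):
    """Cast the tracked object's normal ray and intersect it with a plane.

    R            : 3x3 rotation matrix from the pose estimator
    t            : 3-vector position of the tracked object (ray origin)
    plane_point  : any point lying on the capture surface
    plane_normal : normal vector of the capture surface
    Returns the 3D intersection point, or None if the ray is parallel to the plane.
    """
    origin = np.asarray(t, dtype=float)
    # Assumed convention: the object's local -z axis points back toward the capture surface.
    direction = R @ np.array([0.0, 0.0, -1.0])
    n = np.asarray(plane_normal, dtype=float)
    denom = direction.dot(n)
    if abs(denom) < 1e-9:                      # ray parallel to the plane
        return None
    s = (np.asarray(plane_point, dtype=float) - origin).dot(n) / denom
    return origin + s * direction

# Illustrative values only: object 24 units in front of a plane through the origin.
print(intersect_normal_ray_with_plane(np.eye(3), [2.0, 1.0, 24.0],
                                      plane_point=[0, 0, 0], plane_normal=[0, 0, 1]))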



Update: Minor bug in which the points are flipped from their expected positions. Otherwise working as expected.
Update 2: Upon further inspection, it wasn't that the points were flipped; the matrix I am using is rotated about 180 degrees from what I was expecting. I will probably need to improve how I am acquiring the points of interest before delving deeper into the problem, as my method of acquiring them has been fickle so far...

Wednesday, May 23, 2012

Week 8 - Determining Surface Intersection.

Now that we know which way the object we are tracking is facing, and we are able to capture where it is on a 2D plane, we need to determine where it will intersect our capture surface.  To do this, we need to make two assumptions.

One: We know the measurements of the object we are tracking and are able to convert them into pixel coordinates.

Two: We assume the surface we are to intersect is on the same plane as the source of capture.

My method is then to project a ray from the tracked object onto the surface to determine the intersection location.
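For what it's worth, a minimal sketch of how assumption one gets used: under a pinhole camera model, the object's known physical size and its measured size in pixels give its depth, which is what places the ray's origin in 3D before doing the same intersection as in the Week 9 entry above. The focal length and sizes here are made-up illustration values:

def depth_from_known_size(focal_length_px, real_size, pixel_size):
    # Pinhole model: pixel_size / focal_length == real_size / depth.
    return focal_length_px * real_size / pixel_size

# Illustrative: a 6" wide target appearing 80 px wide with a ~700 px focal length.
print(depth_from_known_size(700.0, 6.0, 80.0))   # -> 52.5 (inches from the camera)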

---------

Another possibility I can start with is to assume I know the source object's dimensions, or that these dimensions can be configured. I can then determine the distance of the tracked object by casting two rays at two known points on it, and using the angle between them with a little bit of trigonometry to work out the distance to the source. From there, the methodology should be the same.

Something like this...:
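As a minimal sketch of that trigonometry (the marker separation, focal length, and pixel positions are assumed illustration values, not measurements from the actual setup):

import math

def angle_between_rays(x1_px, x2_px, focal_length_px):
    # Angle between the viewing rays through two image points on the same row (pinhole model).
    return abs(math.atan(x2_px / focal_length_px) - math.atan(x1_px / focal_length_px))

def distance_from_two_rays(real_separation, angle):
    # Distance to the midpoint of the two tracked points, assuming they sit
    # roughly centred in front of the camera and perpendicular to the view axis.
    return (real_separation / 2.0) / math.tan(angle / 2.0)

# Illustrative: two markers 6" apart, seen at -40 px and +40 px with f ~ 700 px.
angle = angle_between_rays(-40.0, 40.0, 700.0)
print(distance_from_two_rays(6.0, angle))   # -> ~52.5 (inches)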

Sunday, May 13, 2012

Successful calculation of rotation by feeding non-coplanar points into the pose estimator

So I have a functional pose estimator now, and can begin either improving its performance or moving on to the next part of my project, which would involve getting the physical dimensions of the work environment relative to the user.
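For anyone curious what this step looks like in code, here is a minimal sketch of recovering a pose from non-coplanar 2D-3D correspondences. I'm using OpenCV's cv2.solvePnP (EPnP flag) here as a readily available stand-in, not the POSIT routine itself, and the model points, camera intrinsics, and pose are all made-up illustration values:

import numpy as np
import cv2

# Four non-coplanar model points (illustration values, arbitrary units).
model_points = np.array([[0.0, 0.0, 0.0],
                         [5.0, 0.0, 0.0],
                         [0.0, 5.0, 0.0],
                         [0.0, 0.0, 5.0]], dtype=np.float64)

# Assumed pinhole intrinsics: ~700 px focal length, principal point at image centre.
camera_matrix = np.array([[700.0,   0.0, 320.0],
                          [  0.0, 700.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

# Synthesise image points from a known pose so the example is self-consistent.
true_rvec = np.array([0.0, 0.3, 0.0])          # ~17 degrees of yaw
true_tvec = np.array([0.0, 0.0, 40.0])         # 40 units in front of the camera
image_points, _ = cv2.projectPoints(model_points, true_rvec, true_tvec,
                                    camera_matrix, dist_coeffs)

# Recover the pose from the correspondences.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
rotation_matrix, _ = cv2.Rodrigues(rvec)       # yaw/pitch can be read off this matrix
print(ok, rvec.ravel(), tvec.ravel())          # should land near the synthesised pose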

Link to a demonstration of it: 
http://youtu.be/5BeG51WrvNw

As well as some images:
(Yaw Test)


(Pitch Test) 



Not sure if I should factor in roll... The face isn't detected at that orientation currently...

Sunday, May 6, 2012

Transitioning to a (Non-Coplanar) POSIT Approach & beginning analysis of the particular system.

Building a non-coplanar POSIT model for testing purposes.
Dimensions are: RG: (0", 0", 5.75"), RB: (6", 0", 0"), RV: (0", 6", 1.25")
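As a tiny sketch of how I'd encode that model for the estimator, assuming a fourth reference marker at the origin (the origin marker and the variable name are my assumptions; only RG, RB, and RV are specified above):

import numpy as np

# Non-coplanar model points in inches, relative to an assumed reference marker at the origin.
MODEL_POINTS_INCHES = np.array([
    [0.0, 0.0, 0.0],    # reference marker (assumed origin)
    [0.0, 0.0, 5.75],   # RG
    [6.0, 0.0, 0.0],    # RB
    [0.0, 6.0, 1.25],   # RV
])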


After doing some tests, the colored thumbtacks (ping pong balls removed) currently prove too weak to be detected properly through the thresholding algorithm.
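For context, the thresholding I'm referring to is roughly the following (OpenCV inRange on an HSV frame); the colour bounds are placeholder values, not the thresholds actually in use:

import numpy as np
import cv2

def find_colored_marker(frame_bgr, hsv_lower, hsv_upper):
    # Threshold a colour range and return the centroid of the largest blob, if any.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lower, dtype=np.uint8),
                       np.array(hsv_upper, dtype=np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                      # marker too weak to separate from the background
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# Placeholder bounds for a strongly saturated green marker:
# centre = find_colored_marker(frame, (45, 100, 100), (75, 255, 255))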

While working on building a better model (perhaps painting ping-pong balls with the respective colors), I was looking into possible alternative approaches to the coplanar algorithm, stemming from discussing the project with a few students, and considering how to use perspective (comparing relative edges against each other; parallel edges that are farther away will appear smaller) to aid estimation.
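On the perspective idea: under a pinhole model, two parallel edges of equal real length project with apparent lengths inversely proportional to their depths, so the ratio of the apparent lengths alone gives a relative depth. A minimal sketch with made-up numbers:

def relative_depth_from_edge_lengths(near_edge_px, far_edge_px):
    # Ratio far_depth / near_depth for two equal-length parallel edges (pinhole model).
    return near_edge_px / far_edge_px

# Illustrative: the nearer edge appears 120 px long, the farther one 100 px.
print(relative_depth_from_edge_lengths(120.0, 100.0))   # -> 1.2, i.e. ~20% farther away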