Using ArbiTrack to track arbitrary "objects" the user selected


#1

Hi,

Suppose you have a view in your application that the user can position over an object. E.g. the camera view has an overlay, a simple rectangle, which the user can place over an arbitrary image (an image of a certain cat, for instance). Then, when the user presses a button, the rectangle should always stay over the cat, or rather track it. At the start the rectangle does not need to match the perspective of the image; the idea is that the user adjusts it by moving the camera/iPhone.

The tracking is markerless because the tracked object is arbitrary, whatever the user puts the rectangle over.

What I’m looking for is a method that would return how the frame has transformed after each process-frame call to the tracker, so I can apply that transformation to my rectangle. I would like to avoid the AR API since I’m only interested in the tracker; I don’t have a place to position an initial node and I’m not rendering any objects.

Is that possible with the KudanCV API?


#2

You wouldn’t need to update the rectangle yourself; the ArbiTracker would do that for you. All you’d have to do is supply the ArbiTracker with the position and orientation of your rectangle so it knows where the centre of its tracking world starts.
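
In outline it would look something like this (off the top of my head, so treat the processFrame() arguments and the pose accessor names as approximate and check them against the KudanCV headers):

    // Rough outline only -- see the KudanCV headers for exact signatures.
    KudanArbiTracker *arbiTracker = new KudanArbiTracker();

    // Start the tracker at whatever pose your rectangle currently has;
    // this defines the centre of the tracking world:
    arbiTracker->start(KudanVector3(0, 0, 200), KudanQuaternion(1, 0, 0, 0));

    // Then, on every camera frame, feed the frame to the tracker and read
    // the updated pose back to place your rectangle (names approximate):
    //     arbiTracker->processFrame( ...camera frame data... );
    //     KudanVector3 position = arbiTracker->getPosition();
    //     KudanQuaternion orientation = arbiTracker->getOrientation();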

Having said that, you’d still need some form of rendering, because you want to render a rectangle. If you don’t want to use KudanAR, you should be prepared to implement that yourself using either Metal or OpenGL, or to use a third-party rendering package.


#3

Hey @LukeKudan, thanks for your fast reply!

Briefly, I just want to point out that CAShapeLayer rendering on iOS is sufficient for my use case, the way it is done in your demo iOS app on GitHub ( https://github.com/kudan-eu/KudanCV-iOS-Demo ) by drawing a simple rectangle.

Here’s what I’m still trying to get my head around. I’m going through the aforementioned code because it does something similar to what I want to accomplish with the ArbiTracker. The difference I’m trying to figure out is that my rectangle is positioned in UIView screen coordinate space, i.e. the way you would typically draw it on screen, while the rectangle in the demo is positioned freely in world space (method arbitrackAction: in ViewController.mm):

    KudanVector3 startPosition(0,0,200); // in front of the camera
    KudanQuaternion startOrientation(1,0,0,0); // without rotation
    arbiTracker->start(startPosition, startOrientation);

I’m hoping that is what that code is doing.

The problem I can’t get my head around: if my rectangle is positioned square on the screen the whole time, and at some point the user presses the button to start tracking, how do I figure out the rectangle’s position in world coordinates to supply to the tracker?

That’s why I asked if there was a method that would just give me a homography, an affine transform, or something similar describing how the frame changed, which I could then apply to my rectangle.

I was experimenting with OpenCV 2D features, using them to calculate a homography between two consecutive frames, but that method was too slow for my use. That’s when I did some more research and stumbled upon your software.
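
For reference, the kind of per-frame pipeline I was experimenting with looks roughly like the sketch below (ORB features and brute-force matching here are just stand-ins for whatever detector/matcher one would use; the point is that detecting and matching on every frame is what made it too slow for me):

    // Sketch of the consecutive-frame homography approach (OpenCV, C++).
    #include <opencv2/opencv.hpp>

    // Warps the rectangle's corners from the previous frame into the current
    // one using a homography estimated from matched features.
    void trackRectangle(const cv::Mat &prevFrame, const cv::Mat &currFrame,
                        std::vector<cv::Point2f> &rectCorners)
    {
        cv::Ptr<cv::ORB> orb = cv::ORB::create();
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        orb->detectAndCompute(prevFrame, cv::noArray(), kp1, desc1);
        orb->detectAndCompute(currFrame, cv::noArray(), kp2, desc2);

        cv::BFMatcher matcher(cv::NORM_HAMMING);
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);

        std::vector<cv::Point2f> pts1, pts2;
        for (const cv::DMatch &m : matches) {
            pts1.push_back(kp1[m.queryIdx].pt);
            pts2.push_back(kp2[m.trainIdx].pt);
        }
        if (pts1.size() < 4)
            return; // not enough matches to estimate a homography

        // Homography between the two consecutive frames.
        cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC);
        if (H.empty())
            return;

        // Apply it to the rectangle's corners to keep it over the object.
        std::vector<cv::Point2f> warped;
        cv::perspectiveTransform(rectCorners, warped, H);
        rectCorners = warped;
    }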

Cheers and thanks


#4

In that case, you should probably look into getting the floor position from the ArbiTracker. That should be the easiest way to achieve what you’re asking for.


#5

Hi again and thank you for your reply,

I took a good look at the documentation ( https://wiki.kudan.eu/apidocs/KudanCVDocs/db/da3/class_kudan_arbi_tracker.html ) and did not find anything that would accomplish what you suggested. Besides, I’m still not sure how that would solve my use case: even if I know the floor position, how can I estimate the distance between my target and the camera so I can transform the view?

Here ( http://imgur.com/a/RaHoT ) I have uploaded a demo of what I’m trying to accomplish. The user can position the red rectangle view over any object they would like to track (in the demo it is some random text), and that view is in iOS UIView coordinate space. When the user presses the start button, the tracker I’m experimenting with here uses OpenCV feature matching and homography estimation, which I then use to transform the rectangle and thereby track the object.

The camera can be in any orientation and the red rectangle is fixed in screen coordinates, i.e. it is always where the camera faces. I’m trying to understand how I can figure out the position of the red rectangle to give to the ArbiTracker such that it would be drawn the same as in UIView coordinate space. The orientation is obviously square with the camera.
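
To make the question concrete, this is roughly the kind of mapping I imagine I need, assuming a simple pinhole model: back-project the rectangle’s centre (u, v) in pixels to camera space at some chosen depth Z. The fx, fy, cx, cy would have to come from the camera intrinsics, and 200 is just the depth the demo happens to use:

    // Hypothetical pinhole back-projection of the rectangle centre (u, v)
    // at an assumed depth Z, to get a camera-space position for start().
    // fx, fy = focal lengths in pixels; cx, cy = principal point in pixels.
    float X = (u - cx) * Z / fx;
    float Y = (v - cy) * Z / fy;

    // e.g. start tracking square to the camera at the demo's depth of 200:
    // arbiTracker->start(KudanVector3(X, Y, Z), KudanQuaternion(1, 0, 0, 0));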

I hope you can help me, as I’m looking to integrate your software into my product.