We are evaluating an integration of KudanCV with our tools and have a few questions regarding the camera view on Android.
As suggested in your GitHub sample project, we have switched the camera renderer to OpenGL, which gives a nice boost to FPS. We are now at the stage where we need to pass data into KudanCV for recognition. We are currently using the following approach:
- We render the camera feed into an FBO using a custom shader that computes the luma data required by Kudan.
- We read the data back into a byte buffer via JNI, using glReadPixels to read only the GL_RED channel (single channel) from the FBO.
- We pass the data to imageTracker->processFrame(pixels, width, height, 1, 0, false).
There are no GL errors, and we can confirm that the read-back from the FBO contains correct grayscale data (verified by converting it to a Bitmap and rendering it on screen).
That said, we have a few concerns:
- The performance of glReadPixels is poor. We are looking at PBOs now but doubt they will give much of an improvement. Is there a method in KudanCV that can read the required data directly from OpenGL itself?
- processFrame never detects our markers. We have tried a few variations, including setting the flip flag from false to true, to no avail.
We were wondering if you could provide some pointers on this. Our iOS implementation went very smoothly, but Android is proving a little more challenging. How do you handle the camera view in your Kudan SDK?
Thank you for your time!