Is the demo not working properly, or am I using it wrong?

0 votes
asked Feb 22, 2016 by v.houbraken (140 points)
edited Feb 22, 2016 by v.houbraken
We really need a good 3D recognition tool since Metaio dropped Unity/Android support.

I installed the tracking demos, altered nothing, and started testing.

Device: Samsung S5

Unity: 5.3.0f4

So I started with the planar scene. First of all, the documentation is scattered throughout the project, and it took me a while to find the proper settings to even get the project running and showing the camera.

Then I got it started. The loading time is terribly long; I heard it can be fixed by caching. But if so, why is this not enabled by default? (Or why is the cache not pre-computed in the editor?)

Then there is the frame rate, which is very low, and the camera responds slowly. I know the tracking needs some processing time, so I expected some of that. What I do not accept is a camera feed that isn't v-synced at all: it stutters and lags across the screen, doesn't use my camera's built-in autofocus, and runs well under 20 fps. Even if this is just what the app sees compared to what I could render through Unity myself, a feed this bad would ruin any attempt at scanning the image before it even begins.

The moment a tracking target is initialized or lost, the video lags/freezes for about a second. This happens every single time.

The camera image is distorted: round objects appear stretched in portrait view and squashed in landscape view. It makes no sense to render fullscreen if the camera image is a skewed view of reality. (The fact that it isn't the other way around makes this even more confusing!)

Then we finally get to scan something, but it took well over 30 seconds to get there, so my screen turned off to save power. Yes, I can disable that in Unity, but I generally don't like to, since it is the user's choice, not the developer's. So I turn my screen back on, and the app has just disappeared, as if it was never running at all. How is that supposed to work in an app built around this feature? I check something in the AR view, put the device down to adjust whatever I am scanning, pick the device back up, and I have to start the app all over again and navigate back to my AR view. Now what?

And then we finally have the first planar target scanned (on paper, because the tracker gets confused by a computer screen: it is too small and autofocus isn't working). And it scans it skewed. It doesn't put a nice green box over the paper's outline; it just shows me a box skewed by about 10 degrees that I can't get projected right even if I wanted to.

Better yet, if I scan just a part of the paper, I suddenly get a completely different orientation with some grey background dots, telling me that parts of my white desk are apparently "recognizable points" being tracked. What is going on here? Is the image intended to be trackable in multiple ways?

Long story short: as a first experience, this was horrible. I know there are a ton of configuration options, but frankly I can't be bothered to try them all out just to get your demo working properly. I tried reading into it and saw the different "feature detectors", which apparently take parameters, but nobody bothered to document which parameters each one takes, what their defaults are, or how they affect the scans. So I dropped it there.

Can my experience be salvaged? Am I missing something in my process that caused most of these issues, and can they be fixed? I hope so. Please make this tool a viable solution for us by providing answers.

1 Answer

0 votes
answered Feb 22, 2016 by Alex (6,680 points)
Hi Vince,

you're absolutely right about the documentation. We are collecting feedback from our users and are going to improve it, including by adding some video tutorials. Until then, the intended way to use the documentation is to first have a look at the README in the Scripts folder, and then at the one in the Examples folder. We also provide an example for each kind of tracker, which should give you a starting point to customize for your needs (every example is kept as simple as possible, so it should be easy to understand). Finally, the documentation available online relates more to the configuration of the tracker itself; there you will find the description of the tracking parameters and the process involved in creating targets.


We will add a pre-computed cache in the next update (later this week, or the beginning of next week). This will at least make the default examples faster to load, but if you modify tracking parameters you will need to re-generate the cache (this is the reason we have not shipped the cache so far). Also, regarding Android, we know about a bug that makes the loading of tracking data quite slow on some devices; we are working on that too, and it should be fixed in the next update.


Regarding the frame rate: according to our tests, we did not notice a huge difference compared with other SDKs (frankly speaking, in some circumstances others' performance was worse than ours), but it is undoubtedly true that you may experience the issues you describe. Unfortunately, given the enormous variation among Android devices (especially in terms of hardware, cameras, and OpenGL ES implementations), it is not easy to find a solution that works for everybody, but we keep doing our best to fix problems and improve performance, especially when asked to. One quick note: we released an update a few days ago that fixed several camera-related problems on Android; may I ask exactly when you downloaded the plugin?


The squashed/stretched image issue is not normal at all. To my knowledge we had a bug related to that in the past, but it was fixed; I will double-check that this is really so.


Whether the screen turns off is something the developer can/should decide (using something like: Screen.sleepTimeout = SleepTimeout.NeverSleep;). What is wrong is that the app restarts when it goes to the background and the user brings it back; it should not. Even though the plugin supports the latest version of Unity, it could be that something changed under the hood that is preventing the plugin from working as expected. We will investigate this issue further and fix the problem.
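
For reference, here is a minimal sketch of how that could be scoped to the AR scene only (the class name is just illustrative; Screen.sleepTimeout and SleepTimeout are standard Unity APIs):

using UnityEngine;

// Attach to any object in the AR scene so the screen stays awake
// only while that scene is active.
public class KeepScreenOn : MonoBehaviour
{
    void OnEnable()
    {
        // Prevent the device from dimming/sleeping during scanning.
        Screen.sleepTimeout = SleepTimeout.NeverSleep;
    }

    void OnDisable()
    {
        // Restore the user's own system setting afterwards.
        Screen.sleepTimeout = SleepTimeout.SystemSetting;
    }
}

This way the user's power-saving preference is respected everywhere except while scanning, which addresses your concern about overriding a user choice.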


Regarding the tests with the Planar Tracker example: were you using the default target image? For what it's worth, we had no issues when we tried both the Object Tracker and the Planar Tracker against a computer screen instead of printed images; unless your monitor shows only very low-contrast areas, or you get so close to the screen that you can see individual pixels, you should not have problems. On the other hand, if you are using printed images, be absolutely sure that when you send them to the printer you do not re-scale/fit the image to the sheet size. It often happens that the image gets stretched slightly, and this can fool the tracker into providing a wrong pose (hence a wrong green reference rectangle); in other words, double-check that the aspect ratio is not modified, as in the sketch below. Also, when you scan just a portion of the target image, you may get wrong results depending on the number of features found; to reduce these false positives you can adjust the configuration file.
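
As a quick illustration of that aspect-ratio check (the pixel and millimetre values below are made-up examples; measure your own source image and printout):

using UnityEngine;

public static class TargetPrintCheck
{
    // Returns true if the printed sheet keeps the source image's aspect
    // ratio within the given tolerance (0.01 = 1%).
    public static bool AspectRatioPreserved(
        float sourceWidthPx, float sourceHeightPx,
        float printedWidthMm, float printedHeightMm,
        float tolerance = 0.01f)
    {
        float sourceRatio = sourceWidthPx / sourceHeightPx;
        float printedRatio = printedWidthMm / printedHeightMm;
        return Mathf.Abs(sourceRatio - printedRatio) / sourceRatio <= tolerance;
    }
}

// Example: a 1920x1080 source printed at 277x156 mm keeps the ratio
// (1.778 vs. 1.776), while a "fit to page" 277x190 mm print does not.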


Regarding all the parameters that can be customized, specifically the feature descriptors and detectors: we decided to provide maximum flexibility, so that an experienced developer can really customize the tracker using established computer vision techniques (no other SDK gives you the flexibility to experiment with all the available descriptors/detectors; this is an advanced feature that requires some experience with computer vision, and specifically with OpenCV). In this regard, the documentation to keep in mind is OpenCV's own. That said, it should not be strictly necessary to go to that level of detail unless you want to; usually a developer only needs to adjust a few numeric parameters, such as the number of features used for initialization and/or during tracking, the search resolution, and so on. A sketch of what those detector parameters look like in OpenCV follows.
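
For instance, here is a minimal sketch of creating an OpenCV feature detector and tuning its main numeric parameter, written with the OpenCvSharp .NET bindings purely for illustration (the plugin drives these values through its own configuration file; the binding choice and the file name target.png are assumptions, not part of our API):

using OpenCvSharp; // community .NET bindings for OpenCV

public static class DetectorDemo
{
    public static void Main()
    {
        // Feature detectors work on grayscale images.
        using (Mat img = Cv2.ImRead("target.png", ImreadModes.Grayscale))
        using (Mat descriptors = new Mat())
        // nFeatures caps how many keypoints ORB keeps: more features can make
        // initialization more robust at the cost of detection time.
        using (ORB orb = ORB.Create(nFeatures: 500))
        {
            KeyPoint[] keypoints;
            orb.DetectAndCompute(img, null, out keypoints, descriptors);
            System.Console.WriteLine("Detected " + keypoints.Length + " keypoints");
        }
    }
}

The same pattern applies to the other OpenCV detectors/descriptors (BRISK, AKAZE, and so on): each exposes a small set of numeric parameters, and OpenCV's documentation describes what they do.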


That said, a review of the documentation is surely something we will do, and we will also try to provide some presets for the most common situations (especially planar tracking scenarios) if that helps. Meanwhile, do not hesitate to ask about anything you are unsure of or need more information on.


Best regards.

Alex