The following FAQs relate to the Dragonfly engine, and the answers apply to the Dragonfly Demo Apps (Android, iOS and Web) and the Dragonfly Java Application.

Requirements

Does Accuware sell monocular or stereoscopic cameras?

No, we do not sell cameras: our technology works with any monocular camera on the market that meets some fairly common specifications. As for stereoscopic cameras, you can build your own stereo camera starting from 2 single monocular cameras. All the information about the camera specifications can be found on this page.

Is an internet connection required to use the Dragonfly engine?

No, an internet connection is not strictly required after the calibration process. In fact, it is possible to run the Dragonfly engine directly on a machine/PC with the specifications described on this page.

Does the Dragonfly engine make use of (can it be fed with) information coming from other sensors (IMU or INS)?

No. Currently, we only provide a location based on the camera input, and we have no plans to rely on other external sensors, given the highly accurate results already achieved with the camera alone.

Features

Does the Dragonfly engine provide way-finding or routing functionalities (e.g. how to get from point A to point B)?

No, the Dragonfly engine provides the location of the camera, but no routing or way-finding functionalities. Think of it like GPS: we provide the coordinates; on top of that you can develop the navigation and way-finding using one of the many providers available on the market.

Does the Dragonfly engine provide the orientation (yaw, pitch and roll)?

Yes, we do. You can find more info on this page.

Are there other floor plan formats that can be used that don’t depend on building schematics or GPS?

Yes. If you are planning to integrate the maps built with Micello into your application, then you have to:

  1. create a floor plan with Micello.
  2. ask your Micello account manager to “enable the PNG Image Files and GeoJSON Files for your Micello account”. This is required in order to import the Micello maps into the Accuware dashboard; otherwise an error will be returned during the import process! The status of the Micello products active for your account can be checked on this page.
  3. import the floor plan image built with Micello and available in your Micello account into the Accuware dashboard, following the steps described in this support page.

Integration

Do you provide any ROS integration for the Dragonfly engine?

At present, we do not provide any ROS integration for the Dragonfly engine. In the current state of the technology, if you would like to integrate our location engine into your architecture, we provide a C++ SDK and a full REST API which allows you to control the Dragonfly Java App running either on a local machine or on a remote server. However, Accuware also provides IT consulting services, so we can deliver a ROS integration upon request: contact us for more information using this form.
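
For illustration only, a client polling the Dragonfly Java App over HTTP could look like the sketch below (Java 11+). The host, port and path used here (http://localhost:8080/api/v1/position) are placeholder assumptions, not the actual Dragonfly REST API routes; please refer to the REST API documentation for the real endpoints and payloads.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DragonflyRestExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint: replace host, port and path with the ones
            // documented for the Dragonfly Java App REST API.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/api/v1/position"))
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // The response body (e.g. JSON with coordinates) can then be fed
            // into your own navigation or robotics stack.
            System.out.println(response.body());
        }
    }

The same kind of call can of course be made from any language or tool able to issue HTTP requests.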

How does the Dragonfly engine fit into a typical UAV navigation architecture?

Depending on the computing unit available on-board, we recommend either:

  • local video processing (better latency), or
  • remote processing, if a low-latency, low-loss network is available to transmit the video from the UAV to the remote server.

Is it possible to install Dragonfly in a Docker container?

Dragonfly is made of a Java App (with native library support) and a Web UI. So, in theory, it is indeed possible to install it in an Ubuntu Docker container. However, when we ran some tests in our lab, we encountered an issue on some systems where the Docker container was not able to connect to an external USB camera. So, feel free to test this configuration and let us know the result.
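
If you want to experiment with this, a minimal (untested) starting point is to pass the host's video device to the container when starting it. The device path, port and base image below are assumptions, not an official Dragonfly configuration:

    # Illustrative sketch only: device path, port and image are assumptions.
    # --device exposes the host's USB camera to the container;
    # -p publishes the port on which the Dragonfly Web UI would be served.
    docker run -it \
      --device=/dev/video0 \
      -p 8080:8080 \
      ubuntu:20.04 \
      /bin/bash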

Positioning process

Why is the location LOST during the navigation session?

The “Lost” status can happen for different reasons:

  1. The environment is “too plain” and it is impossible to map reference points. Think of a white wall.
  2. The camera performs a pure rotation movement on one of its axes. This is a mathematical limit. It can be avoided by using a stereo camera, or by moving in a way that avoids “pure rotations” (pure rotation: think of a drone rotating on itself).
  3. The field of view is limited: this is the typical reason why it happens on smartphones using the Dragonfly Demo App for iOS and Android, and that is why we suggest using a wide-angle camera in production with the Dragonfly Java App (160-170° for a single camera, up to 120° for each camera of a stereo setup). Unfortunately smartphones have a limited field of view, and this limits the ability to map an environment fluently.

When you get “lost”, you should go back to a previously known location to recover the position.

How should I properly perform the Positioning (Navigation and Mapping) of a big environment?

What we would suggest is to:

  1. Ensure you close a loop around the considered perimeter.
  2. Then, map some of the main aisles while regularly coming back to known places.

If this is done, the positioning is going to be accurate and the drift will be corrected by loop-closing on a regular basis.

Can the Dragonfly engine detect the altitude of the camera from the ground?

Absolutely! If the landmarks are well placed and the calibration of the markers is done properly, the altitude will be provided accurately, so you can know if the device is, for example, 15 cm from the ground. The closer you are to the object, the better you know the relative camera distance to this object.

How should I perform the mapping of an aisle in which there will be a drone flying at different altitudes?

If you know in advance the trajectory the drone is supposed to take during its regular usage, then you should simply perform the Positioning process by following this exact trajectory, with the drone flying slowly and looking exactly in the direction(s) it is going to look during its regular usage. So, for example, if you know in advance that the drone will fly at 2 different altitudes (e.g. 2 meters and 5 meters), you will have to perform the positioning process twice:

  • once with the drone flying along the trajectory at 2 meters.
  • once with the drone flying along the trajectory at 5 meters.

How far away can objects be detected and become part of the map?

There is no maximum distance as long as the objects can be seen in the image. However, the further away the objects are, the less accurate the triangulation is (because the pixels move less from one frame to the next). We would say, safely, that for objects further than 30 meters the mapping could be an issue, but honestly it is pretty rare that, in indoor cases, there is not a single object (and thus feature) visible at less than 30 meters.
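
For the curious, this behaviour matches the classic triangulation error relation from multi-view geometry (a generic first-order approximation, not a figure from the Dragonfly documentation):

    \delta Z \approx \frac{Z^2}{f \, B} \, \delta d

where Z is the distance to the object, f the focal length in pixels, B the baseline between the two viewpoints (the two cameras of a stereo rig, or two successive positions of a monocular camera) and \delta d the pixel matching error. The error grows with the square of the distance, which is why far objects produce less accurate map points.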

Accuracy

Can the Dragonfly engine provide an average radius of accuracy better than ±10 cm?

The accuracy of a computer vision system depends not only on the system itself, but also on the surroundings of the camera. With a proper camera calibration and accurate visual (or virtual) markers in the venue, the accuracy is about 10 centimeters in a standard environment (objects at about 10 cm from the camera). To achieve better accuracy, the system would have to run at a higher resolution, but the additional processing power required would be so large that, at present, we are not willing to consider this option.

What is the accuracy provided by the Dragonfly engine in an un-mapped area during the Positioning process?

In an un-mapped area (while the Dragonfly Web UI shows NAVIGATION) there is a drift which will accumulate over time. It is difficult to provide an accurate estimate of the accuracy in this situation, because it really depends on the venue features, on the motion of the camera and on the quality of the camera calibration. We can say that the drift is high enough in monocular mode that we do NOT recommend relying on the location provided by the Dragonfly engine in an un-mapped area after a minute of navigation.

Why is there a drift between the real location and the one estimated by the Dragonfly engine?

The drift you are encountering could be due to various factors:

  • A bad camera calibration.
  • A challenging environment where the scale of the map is hard to keep consistent (e.g. a building with a lot of white walls). This is described in one of the FAQs below.
  • A long monocular navigation path along which drift has accumulated. The drift can be corrected by performing a loop closing, which we strongly recommend in monocular mode. So basically, you should navigate inside the building, close a couple of loops, and save the map. This map will then be used as a basis for navigating your device and other devices.

How robust is Dragonfly to changes in a previously mapped environment?

The Dragonfly engine is capable of improving the accuracy of the computed locations when used continuously in the same environment. This happens as long as the features of the environment in front of the camera do not change by more than 30%. If what is presented in front of the camera changes by more than 30% from what has been seen previously, there can be 2 situations:

  1. the camera reached this previously known place (which has changed by more than 30%) from another place which was properly identified. In this case, no problem: the map will be properly updated.
  2. the camera suddenly sees this previously known place (which has changed by more than 30%) and does not have a previous history to recover its path (how it got there). In this case, the Dragonfly engine will not be able to recover its position until the camera sees a place that it can clearly identify. The Dragonfly engine won’t be able to compute any location in the meantime.

How do the lighting conditions affect the accuracy of the Dragonfly engine?

The lighting conditions affect the system performance. If the shapes are clearly visible to the camera, and if the contrast is good, the Dragonfly algorithm works properly. If there is a strong backlight making the rest of the scene look dark, then the position won’t be available. The algorithm is particularly sensitive to backlights.

What are the known environmental conditions where the localization algorithm’s performance is challenged?

Un-textured environments (uni-color walls), environments with backlights, and environments where the texture is mostly the same wherever you are (subway tunnels, for instance).

How does the Dragonfly algorithm behave when used inside a corridor or aisle?

The fact that the area is narrow will make the system pretty accurate. We would say that it is possible to reach an average radius of accuracy of ~10 cm.

Does the Dragonfly engine provide a score of the reliability or quality of the locations estimated?

We do not provide such a “score” yet, but this is indeed something we should consider doing.

Is there a drift (over time) of the locations estimated if the camera is fixed and looking at the same position?

You can expect a noisy position (about 5 to 10 cm, depending on where you look), but there won’t be a drift! The average position is perfectly stable in this situation.
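
If your application needs a single steady value, a simple client-side moving average over the last few location samples is usually enough to remove this noise. The sketch below is an illustrative assumption written in plain Java; it is not part of the Dragonfly SDK or REST API:

    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Illustrative sketch: smooths noisy (x, y, z) samples with a moving average. */
    public class PositionSmoother {
        private final int windowSize;
        private final Deque<double[]> window = new ArrayDeque<>();

        public PositionSmoother(int windowSize) {
            this.windowSize = windowSize;
        }

        /** Adds a new position sample and returns the averaged position. */
        public double[] addSample(double x, double y, double z) {
            window.addLast(new double[] {x, y, z});
            if (window.size() > windowSize) {
                window.removeFirst();
            }
            double[] avg = new double[3];
            for (double[] p : window) {
                avg[0] += p[0];
                avg[1] += p[1];
                avg[2] += p[2];
            }
            int n = window.size();
            avg[0] /= n;
            avg[1] /= n;
            avg[2] /= n;
            return avg;
        }
    }

The larger the window, the steadier the output, at the cost of responsiveness when the camera actually moves.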

What happens in the eventuality of complete camera occlusion?

No position will be provided until the camera re-identifies a known place. Usually, for a 1-second occlusion, the system will be able to recover immediately after.

How would the system differentiate between two aisles with no inventory in them?

If there is absolutely no difference between two aisles, then the system will indeed have trouble re-localizing itself. It has basically the same limitations as a human being.

How accurate is the algorithm’s localization on the Z axis?

The Z axis has the same accuracy as the other axes: about 10 cm, usually.

Calibration

The RTSP stream of my Raspberry Pi is not accessible from the Dragonfly Demo Web App for PC. What can I do?

You can perform the calibration by making your RPi a WebRTC server. In this case, the connection is established in a P2P way. Please look at the step-by-step instructions here.