The following list of FAQs is related to the Dragonfly engine, and the answers apply to the Dragonfly Demo Apps (Android, iOS and Web) and to the Dragonfly Java Application.

Requirements
Features
Integration
Positioning (mapping and localization)
Accuracy
Calibration

Requirements

Does Accuware sell monocular or stereoscopic cameras?

No, we do not sell cameras. Dragonfly works with any monocular or stereoscopic camera on the market that meets the specifications described on this page.

Is an internet connection required to use Dragonfly?

No, an internet connection is not strictly required after the camera calibration process. The Dragonfly engine can run on any embedded device/machine/PC that meets the specifications described on this page.

Does the Dragonfly engine make use of the information coming from other sensors (IMU or INS)?

No, currently we only provide a location based on the camera input, and we have no plans to rely on other external sensors given the highly accurate results already provided by the camera input.

Features

Does the Dragonfly engine provide way-finding or routing functionalities (e.g. how to get from point A to point B)?

No, the Dragonfly engine provides the location of the camera, but it does NOT provide routing or way-finding information. We provide the WGS84 coordinates (latitude and longitude) or metric coordinates (distance in meters from a point of origin). On top of that, it is possible to develop navigation and way-finding using one of the many providers available on the market.
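As an illustration only (the position fields and the routing interface below are hypothetical placeholders, not part of the Dragonfly API or of any specific provider), a client application could forward each position produced by Dragonfly to whatever routing layer it uses:

  # Hypothetical sketch: Dragonfly supplies coordinates, a third-party
  # routing provider computes the route. Field and method names below are
  # illustrative placeholders only.
  from dataclasses import dataclass

  @dataclass
  class DragonflyPosition:
      lat: float    # WGS84 latitude
      lng: float    # WGS84 longitude
      level: int    # floor level

  def on_position(pos: DragonflyPosition, router) -> None:
      # 'router' stands for any third-party way-finding SDK of your choice:
      # Dragonfly only supplies the coordinates, the route is computed elsewhere.
      route = router.route_from(pos.lat, pos.lng, pos.level)
      print("Next instruction:", route)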

Does the Dragonfly engine provide the orientation (yaw, pitch and roll)?

Yes, it does. You can find more information about this topic on this page.
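If your application needs the orientation as a rotation matrix, a minimal client-side sketch is shown below. This is generic math using the Z-Y-X convention, not Dragonfly-specific code; verify the angle convention actually used by the engine output before applying it.

  # Build a rotation matrix from yaw, pitch and roll (Z-Y-X convention).
  # Generic math only; check the convention documented for Dragonfly.
  import math

  def rotation_matrix(yaw: float, pitch: float, roll: float):
      cy, sy = math.cos(yaw), math.sin(yaw)
      cp, sp = math.cos(pitch), math.sin(pitch)
      cr, sr = math.cos(roll), math.sin(roll)
      # R = Rz(yaw) * Ry(pitch) * Rx(roll)
      return [
          [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
          [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
          [-sp,     cp * sr,                cp * cr],
      ]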

Are there other floor plan formats that can be used that don’t depend on building schematics?

Yes, you can use, for example, the floor plans provided by Micello. If you are planning to integrate the maps built with Micello into your final application, then you have to:

  1. create a floor plan with Micello.
  2. ask your Micello account manager to “enable the PNG Image Files and GeoJSON Files for your Micello account”. This is required in order to import the Micello maps into the Accuware dashboard; otherwise an error will be returned during the import process! The status of the Micello products active for your account can be checked from this page.
  3. import into the Accuware dashboard the floor plan image built with Micello and available in your Micello account, following the steps described in this support page.

How fast can a device that is making use of the Dragonfly engine go?

We have run several tests and have been able to get excellent results with devices moving at up to 10 km/h.

Integration

Do you provide any ROS integration for the Dragonfly engine?

At present, we do not provide any ROS integration for the Dragonfly engine. In the current state of the technology, if you’d like to integrate our location engine into your architecture, we provide a C++ SDK and a full REST API that allows you to control the Dragonfly Java App running either on a local machine or on a remote server. However, Accuware also provides IT consulting services, so we can deliver a ROS integration upon request: contact us for more information at this link.
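As a rough illustration only (the base URL and the endpoint path below are placeholders, not the documented Dragonfly REST API), polling a locally running Dragonfly Java App from your own stack could look like this:

  # Hypothetical sketch of talking to a locally running Dragonfly Java App
  # over REST. Replace the base URL and endpoint with the ones documented
  # in the official API reference; the names used here are placeholders.
  import requests

  BASE_URL = "http://localhost:8080/api"  # assumed local instance

  def get_status() -> dict:
      response = requests.get(f"{BASE_URL}/status", timeout=5)
      response.raise_for_status()
      return response.json()

  if __name__ == "__main__":
      print(get_status())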

How does the Dragonfly engine fit into a typical UAV navigation architecture?

Depending on the computing unit available on-board, we recommend either:

  • local video processing (better latency).
  • or remote processing if a low-latency and low-loss network is available to transmit the video from the UAV to the remote server.

Is it possible to install Dragonfly in a Docker container?

Yes. You can find the instructions on this page.

Positioning process (mapping and localization)

How big can the mapped area be?

We have customers who have mapped up to 40,000 sqm. Despite this, we recommend limiting the area covered by a single map to 15,000 sqm. In any case, the real constraints are the RAM and CPU available on the computing unit running the Dragonfly Java App (we are working on reducing these constraints). To provide some additional numbers (a rough scaling sketch follows the list below), the mapping at slow speed of a 15,000 sqm warehouse:

  • generates about 160K map points inside a map file with a final size of about 600 MB.
  • takes about 2 hours to complete.
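Assuming the figures above scale roughly linearly with the mapped area (a simplification: the real limits depend on the venue and on the RAM and CPU of the computing unit), a back-of-the-envelope estimate for other areas could be computed like this:

  # Rough, linear back-of-the-envelope estimate based on the reference
  # figures above (15,000 sqm -> ~160K map points, ~600 MB, ~2 hours).
  # Real numbers depend heavily on the venue, the camera and the hardware.
  REF_AREA_SQM = 15_000
  REF_POINTS = 160_000
  REF_SIZE_MB = 600
  REF_HOURS = 2

  def estimate(area_sqm: float) -> dict:
      ratio = area_sqm / REF_AREA_SQM
      return {
          "map_points": int(REF_POINTS * ratio),
          "map_size_mb": round(REF_SIZE_MB * ratio),
          "mapping_hours": round(REF_HOURS * ratio, 1),
      }

  print(estimate(5_000))  # e.g. a 5,000 sqm venue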

Why is the location LOST during the navigation session?

The “Lost” status can happen for different reasons:

  1. The environment is “too plain” and it is impossible to detect enough features (reference points). Think about an area with many white walls, all identical to each other.
  2. The monocular camera performs a pure YAW rotation (like a drone rotating on itself) or a pure PITCH rotation. This is a mathematical limit and it can be overcome by:
    • using a stereo camera.
    • or, with a monocular camera, by doing rotations in conjunction with translations (like a turning car).
  3. The field of view is limited (like on smartphones using the Dragonfly Demo App for iOS and Android). Unfortunately smartphones have a limited field of view, and this limits the ability to map an environment fluently. This is the reason why we suggest using a wide-angle camera in production with the Dragonfly Java App (with a FOV of 160-170° on monocular cameras and with a FOV of up to 120° per camera on stereo cameras).

When the location is lost, you should go back to a previously known location.

How should I properly perform the Positioning (Navigation and Mapping) of a big environment?

What we suggest is to:

  1. Make sure to close at least one loop around the considered perimeter.
  2. Then, map the internal area while regularly coming back to known places to close additional loops.

If this is done, the positioning will be accurate and the drift will be corrected by the loop closing performed automatically by the Dragonfly engine on a regular basis.

How should I properly perform the Positioning (Navigation and Mapping) inside an area made of physically separated sub-areas?

If you don’t need to map the area in between the sub-areas, then our recommendation is to create multiple maps, one for each sub-area, and to automatically load the map corresponding to the current sub-area using this API call. You can use, for example, the GPS info to get the macro location needed to load the correct map, as in the sketch below. This approach has the advantage of dealing with multiple small maps instead of a big one, which is better for performance.
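A minimal sketch of this approach is shown below; the map registry, the coordinates and the endpoint path are hypothetical placeholders, and the actual “load map” call must be taken from the documented API:

  # Hypothetical sketch: pick the map of the closest sub-area from a coarse
  # GPS fix and ask a locally running Dragonfly instance to load it.
  # Map identifiers, coordinates and the endpoint path are placeholders.
  import math
  import requests

  SUB_AREA_MAPS = {
      "sub_area_A": (45.4642, 9.1900),  # map id -> (lat, lng) of the sub-area
      "sub_area_B": (45.4700, 9.2000),
  }

  def closest_map(lat: float, lng: float) -> str:
      return min(
          SUB_AREA_MAPS,
          key=lambda m: math.hypot(lat - SUB_AREA_MAPS[m][0],
                                   lng - SUB_AREA_MAPS[m][1]),
      )

  def load_map_for(lat: float, lng: float) -> None:
      map_id = closest_map(lat, lng)
      # Placeholder endpoint; use the documented "load map" API call instead.
      requests.post(f"http://localhost:8080/api/maps/{map_id}/load", timeout=5)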

Can the Dragonfly engine detect the altitude of the camera from the ground?

Absolutely! If the visual markers are well placed and the calibration of the visual markers (or virtual markers) is done properly, the altitude will be accurately provided, so you can know if the device is, for example, 15 cm from the ground. The closer you are to an object, the better you know the relative camera distance to this object.

How should I perform the mapping of an aisle in which there will be a drone flying at different altitudes?

If you know in advance the trajectory the drone is supposed to take during its regular usage, then you should simply perform the Positioning process by following this exact trajectory, with the drone flying slowly and looking exactly in the direction(s) it is going to look during its regular usage. So, for example, if you know in advance that the drone will fly at 2 different altitudes (e.g. 2 meters and 5 meters) you will have to perform the positioning process twice:

  • once with the drone flying along the trajectory at 2 meters.
  • once with the drone flying along the trajectory at 5 meters.

How far away can objects be detected and become part of the map?

There is no hard distance limit as long as the objects can be seen in the image. However, the further away the objects are, the less accurate the triangulation is (because the pixels move less from one frame to the next). We would say, safely, that for objects further than 30 meters the mapping could be an issue, but honestly it is pretty rare that, in indoor cases, there is not a single object (and thus feature) visible at less than 30 meters.
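To make the intuition explicit, the standard two-view triangulation approximation (a textbook relation, not a Dragonfly-specific formula) gives a depth uncertainty that grows with the square of the distance:

  \sigma_Z \approx \frac{Z^2}{f \, b} \, \sigma_{px}

where Z is the distance to the object, f the focal length in pixels, b the baseline (the camera translation between the two views, or the stereo baseline) and \sigma_{px} the pixel measurement noise. Doubling the distance roughly quadruples the depth error, which is why far-away objects produce less accurate map points.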

Is there a proper time to switch from Positioning (mapping and navigation) to Navigation only?

Normally, in a small venue there is no need to enable the Navigation mode. But if you’d like to do so, it is good to switch when you have the perception that Dragonfly has already mapped the whole venue and that you are able to navigate from nearly any position without getting lost.

Accuracy

Can the Dragonfly engine provide an average radius of accuracy better than ±10 cm?

The accuracy of a computer vision system depends not only on the system itself, but also on the surroundings of the camera. With a proper camera calibration and accurate visual (or virtual) markers in the venue, the accuracy is about 10 centimeters in a standard environment (objects at about 10 cm from the camera). To achieve better accuracy, the system would have to run at a higher resolution, but the additional processing power required would be so high that, at present, we are not willing to consider this option.

What is the accuracy provided by the Dragonfly engine in an un-mapped area during the Positioning process?

In an un-mapped area (while the Dragonfly Web UI shows NAVIGATION) there is a drift which will accumulate over time. It is difficult to provide an accurate estimate of the accuracy in this situation because it really depends on the features of the venue, on the motion of the camera and on the quality of the camera calibration. We can say that the drift is high enough in monocular mode that we do NOT recommend relying on the location provided by the Dragonfly engine in an un-mapped area after a minute of navigation. More info can be found on this page.

Why is there an angle between the absolute horizontal plane of the real-world and the horizontal plane computed by Dragonfly?

Without world references, if your camera is not held perfectly horizontal during the MAP INIT stage, there will be an angle between the absolute horizontal plane of the real world and the computed horizontal plane of your device shown in the plot, because the Dragonfly engine has no way of knowing exactly where the real-world horizon is. The Dragonfly engine therefore assumes that the floor lies on the same horizontal axis as the horizontal axis of the camera during the MAP INIT phase.
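As a simple geometric note (not a feature of the engine), if the camera is pitched by an angle \theta with respect to the true horizon during MAP INIT, the whole map frame ends up tilted by that same \theta. When \theta is known, for example from a level on the camera mount, the reported coordinates can be compensated on the client side by applying the inverse rotation:

  p_{\text{world}} \approx R_x(-\theta)\, p_{\text{map}}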

Why is there a drift between the real location and the one estimated by the Dragonfly engine?

The drift you are encountering could be due to various factors:

  • A bad camera calibration.
  • A challenging environment where the scale of the map is hard to keep consistent (e.g. a building with a lot of white walls). This is described in one of the FAQs below.
  • A long monocular navigation path along which the drift accumulates. The drift can be corrected by performing a loop closing, which we strongly recommend in monocular mode. So basically, you should navigate inside the building, close a couple of loops, and save the map. This map will then be used as the basis for navigating your device and other devices.

More info can be found on this page.

How robust is Dragonfly to changes in a previously mapped environment?

The Dragonfly engine is capable of improving the accuracy of the computed locations when used continuously in the same environment. This happens as long as the features of the environment in front of the camera do not change by more than 30%. If what is presented in front of the camera changes by more than 30% from what has been seen previously, there can be 2 situations:

  1. the camera reached this previously known place (which has changed by more than 30%) from another place which was properly identified. In this case there is no problem: the map will be properly updated.
  2. the camera suddenly sees this previously known place (which has changed by more than 30%) and does not have a previous history to recover its path (how it got there). In this case, the Dragonfly engine is not able to recover its position until the camera sees a place that it can clearly identify. The Dragonfly engine won’t be able to compute any location in the meantime.

How do the lighting conditions affect the accuracy of the Dragonfly engine?

The lighting conditions affect the system performance. If the shapes are clearly visible to the camera, and if the contrast is good, the Dragonfly algorithm can work properly. If there is a strong backlight making the rest of the scene look dark, then the position won’t be available. The algorithm is particularly sensitive to backlight.

What are the known environmental conditions where the localization algorithm’s performance is challenged?

Un-textured environments (uni-color walls), environments with backlight, and environments where the texture is mostly the same wherever you are (subway tunnels, for instance).

How does the Dragonfly algorithm behave when used inside a corridor or aisle?

The fact that the area is narrow will make the system pretty accurate. We would say that it is possible to reach an average radius of accuracy of ~10 cm.

Does the Dragonfly engine provide a score of the reliability or quality of the locations estimated?

We do not provide such a “score” yet, but this is indeed something we should consider doing.

Is there a drift (over time) of the locations estimated if the camera is fixed and looking at the same position?

You can expect a noisy position (about 5 to 10 cm, depending on where you look) but there won’t be a drift! The average position is perfectly stable in this situation.
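Since the noise is zero-mean in this situation, a client that only needs the position of a static camera can simply average the last few samples (a generic smoothing sketch applied on the client side, not something built into Dragonfly):

  # Generic smoothing for a static camera: average the last N position
  # samples to suppress the ~5-10 cm measurement noise.
  from collections import deque

  class PositionSmoother:
      def __init__(self, window: int = 50):
          self.samples = deque(maxlen=window)

      def update(self, x: float, y: float, z: float):
          # Returns the running average over the most recent samples.
          self.samples.append((x, y, z))
          n = len(self.samples)
          return tuple(sum(axis) / n for axis in zip(*self.samples))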

What happens in the eventuality of complete camera occlusion?

No position will be provided until the camera re-identifies a known place. Usually, for an occlusion of about 1 second, the system will be able to recover immediately afterwards.

How would the system differentiate between two aisles with no inventory in them?

If there is absolutely no difference between the two aisles, then the system will indeed have trouble re-localizing itself. It basically has the same limitations as a human being.

How accurate is the algorithm’s localization on the Z axis?

The Z axis has the same accuracy as the other axes: usually about 10 cm.

Calibration

The RTSP stream of my Raspberry Pi is not accessible from the Dragonfly Demo Web App for PC. What can I do?

You can perform the calibration by making your Raspberry Pi a WebRTC server. In this case the connection is established in a P2P (peer-to-peer) way. Please look at the step-by-step instructions here.