Running general quantum algorithms on quantum computers is hard, especially in the early stage of development that quantum computers are in today. Many resources are required to transform a general problem so that it can be run on a quantum computer, for instance to satisfy the topology constraints of the quantum hardware. Furthermore, quantum computers need to operate at temperatures close to absolute zero, and hence resources are required to keep the quantum hardware at that temperature. Therefore, simulating small instances of a quantum algorithm is often preferred over running it on actual quantum hardware. This is both cheaper and gives debugging capabilities that are unavailable on actual quantum hardware, such as the evaluation of the full quantum state, both at intermediate points in the algorithm and at its end. By simulating small instances, a quantum algorithm can be checked for errors and debugged before it is implemented and run on actual quantum hardware for larger instances. There are multiple initiatives to create quantum simulators, and while they look alike, there are differences among them. In this work we compare seven commonly used quantum simulators offered by various parties by implementing the Shor code, an error-correcting technique. The Shor code can detect and correct all single-qubit errors in a quantum circuit; for most multi-qubit errors, correct detection and correction is not possible. We compare the seven quantum simulators on different aspects, such as how easy it is to implement the Shor code, their capabilities regarding translation to actual quantum hardware, and the possibilities for simulating noise. We also discuss aspects such as topology restrictions and the programming interface.
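The error-correction principle the abstract refers to can be illustrated with the three-qubit bit-flip code, the building block that the Shor code concatenates with its phase-flip counterpart into nine qubits. Below is a minimal NumPy sketch of a statevector simulation; the function names and structure are illustrative and do not correspond to any of the seven simulators compared:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])  # Pauli-X (bit flip)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators (qubit 0 = MSB)."""
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def encode(a, b):
    """Encode logical a|0> + b|1> as a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = a
    state[0b111] = b
    return state

def apply_x(state, qubit):
    """Apply a bit-flip error to one of the three physical qubits."""
    return kron_all([X if i == qubit else I for i in range(3)]) @ state

def syndrome(state):
    """Parity checks Z0Z1 and Z1Z2, read off the basis states with support.
    Both branches of the superposition give the same parities, so the
    syndrome reveals the error location without collapsing the logical state."""
    i = next(k for k, amp in enumerate(state) if abs(amp) > 1e-12)
    b = (i >> 2 & 1, i >> 1 & 1, i & 1)
    return (b[0] ^ b[1], b[1] ^ b[2])

def correct(state):
    """Map the syndrome to the faulty qubit and undo the flip."""
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(state)]
    return apply_x(state, flip) if flip is not None else state
```

Because the simulator exposes the full statevector, the syndrome and the recovered state can be inspected directly, which is exactly the kind of debugging capability that is unavailable on real hardware.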
Algorithms for the detection and tracking of (moving) objects can be combined into a system that automatically extracts relevant events from a large amount of video data. Such a system (data pipeline) can be particularly useful in video surveillance applications, notably to support analysts in retrieving information from hours of video while working under strict time constraints. Such data pipelines entail all sorts of uncertainties, however, which can lead to erroneous detections being presented to the analyst. In this paper we present a novel method to attribute a confidence of correct detection to the output of a computer vision data pipeline. The method relies on a data-driven approach: a machine learning-based classifier is built to separate correct from erroneous detections. It is trained on features extracted from the pipeline; the features relate both to raw data properties, such as image quality, and to video content properties, such as detection characteristics. The results are validated using two full motion video datasets from airborne platforms: the first of the same type (same context) as the training set, the second of a different type (new context). We conclude that the output of this classifier can be used to build a confidence of correct detection, separating the true positives from the false positives. This confidence can furthermore be used to prioritize the detections in order of reliability. The study concludes by identifying additional measures needed to improve the robustness of the method.
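The confidence-attribution idea can be sketched as follows. The two features below (an image-quality score and a detection score) are synthetic stand-ins for the pipeline features, and a plain logistic regression plays the role of the machine-learning classifier; the actual features, datasets, and classifier used in the paper are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for pipeline features (hypothetical): each detection
# gets an image-quality score and a detection score. Correct detections
# (true positives) tend to score higher than erroneous ones (false positives).
n = 400
tp = rng.normal(loc=[0.7, 0.8], scale=0.15, size=(n, 2))
fp = rng.normal(loc=[0.4, 0.3], scale=0.15, size=(n, 2))
X = np.vstack([tp, fp])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = correct, 0 = erroneous

# Logistic regression trained by gradient descent; the model's output
# sigmoid(w.x + b) serves directly as a confidence of correct detection.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

confidence = 1 / (1 + np.exp(-(X @ w + b)))

# Prioritize detections for the analyst in order of reliability.
ranking = np.argsort(-confidence)
accuracy = np.mean((confidence > 0.5) == y)
```

Ranking by the classifier's probability output, rather than thresholding it, lets an analyst under time pressure review the most reliable detections first.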