The widespread availability of Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) has been a significant contributor to the realisation of autonomous platforms like robots or drones. This is certainly the case for applications in `ideal' or `prepared' environments such as aerial photography, mapping and precision farming. In these environments, the GNSS antenna has an unobstructed view of the sky and consequently enough satellite measurements to compute an optimal estimate of the platform position. Unfortunately, in some circumstances, such as urban or low altitude operations, the GNSS receiver antenna is prone to lose line-of-sight with satellites, making GNSS unable to deliver high quality position information.

In addition, the increasing deployment of autonomous platforms for safety or liability critical applications is driving efforts towards more stringent performance requirements for positioning, specifically accuracy and the advanced autonomy needed to accomplish difficult missions in harsh environments. To address this demand, GNSS constellations are being deployed or modernised and designed to be available globally or regionally; these include the Russian Federation's GLObal NAvigation Satellite System (GLONASS), the European Satellite Navigation System (GALILEO), China's COMPASS/BeiDou, India's Regional Navigation Satellite System (IRNSS) and Japan's Quasi-Zenith Satellite System (QZSS). Unfortunately, the design specification for these systems to be complementary and/or compatible leaves these multi-GNSS capabilities vulnerable to similar points of failure. Furthermore, growing requirements for technology sovereignty are creating a new definition of autonomous positioning systems, in which independent sensors and systems are considered mandatory.

Improved localisation performance can be achieved by combining measurements from different sensors. Autonomous vehicles use integrated localisation systems to take advantage of the complementary attributes of two or more sensors, yielding solutions that are more accurate and reliable. For example, inertial navigation systems (INS) are able to deliver the position, velocity and attitude of a platform with high short-term accuracy and relatively low noise, but tend to drift over time. In contrast, GNSS provides real-time, three-dimensional position and velocity information with only random errors that do not grow unbounded; a minimal formulation illustrating this complementarity is sketched below.

Robust location determination of teams of mobile platforms, based around the integration of sensors and measurements, is a heavily contested research space. In the case of drones, over the past decade, much of this effort has concentrated on adapting platform hardware and sensor technology, with little impact on the fundamental localisation problem. In fact, GPS is still promoted as the primary capability for current generation, commercially available drones operating in outdoor environments. In indoor spaces, infrastructure-tethered approaches are used to localise drones, and these therefore do not constitute a ubiquitous capability. In parallel with hardware developments, algorithmic research has demonstrated successful approaches based on simulation studies or led to the prevalence of solutions that appear simple and efficient, but which lack the mathematical rigour to balance performance and cost for increasingly complex operational scenarios.
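As a minimal illustration of the GNSS/INS complementarity noted above, and not the specific formulation developed later in this paper, consider a loosely coupled error-state filter in which the INS mechanisation propagates position, velocity and attitude while a Kalman filter estimates the slowly growing INS errors from the GNSS/INS position difference:
\[
\delta x_k = F_{k-1}\,\delta x_{k-1} + w_{k-1}, \qquad
z_k = p_k^{\mathrm{GNSS}} - p_k^{\mathrm{INS}} = H\,\delta x_k + v_k ,
\]
where the error state $\delta x_k$ collects the INS position, velocity and attitude errors, $F_{k-1}$ is the linearised error dynamics, $H$ selects the position error states, and $w_k$, $v_k$ are the process and measurement noise; all symbols here are generic placeholders rather than the notation adopted in the sequel. Between GNSS updates the covariance of $\delta x_k$ grows, which is the INS drift; each GNSS measurement $z_k$ bounds it again, which is precisely the complementary behaviour exploited by integrated localisation systems.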
It is therefore evident that the challenge to deliver a truly resilient and autonomous location capability in GPS/RF contested environments remains outstanding. Teams of drones tasked with surveillance and other missions need to be able to accurately and securely determine their locations and maintain communications with each other in a timely manner, frequently in environments that are RF contested or which, for a number of reasons, demand a truly autonomous on-board positioning capability. Acknowledging location as key to the successful execution of these missions, the challenge remains to deliver this capability under the constraints of highly dynamic platforms, scalable spatial extents with temporally varying numbers of mobile nodes, heterogeneous sensors with multiple failure modes, interference, and so on.

Our thesis is that, whilst individual nodes in a team or network of drones can be fitted with sensors that enable them to determine their individual locations, by sharing measurements and other relevant information amongst all the nodes, the so-called cooperative positioning (CP) or cooperative localisation (CL) approach, we can optimise over these and other constraints (a generic formulation is sketched below). CP based on accurately characterising sensor and signal performance is key and unique to how we propose to achieve a fully autonomous positioning solution. With this approach, we can achieve a better balance between performance (positioning metrics and operational efficiency) and cost (financial and computational).

In this paper we report on the following: the mathematical models for cooperative positioning and sensor fusion in prepared and unprepared environments; descriptions of the error models determined for the sensors existing on, and retrofitted to, a team of drones; and full descriptions of the performance metrics of the sensors, platforms, test and experimental configurations, analyses and outcomes.
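To make the CP idea concrete, the following is a generic sketch only, not the models developed in the sections that follow, and its symbols ($h_i$, $R_i$, $\sigma_{ij}$) are illustrative placeholders. Shared information can be viewed as inter-node measurements that couple the individual state estimates: with $x_i$ the state of node $i$, $z_i$ its on-board (e.g. GNSS/INS) measurements with model $h_i(\cdot)$, and $r_{ij}$ a measured range between nodes $i$ and $j$, a joint estimate over a team of $N$ nodes is
\[
\hat{X} = \arg\min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} \lVert z_i - h_i(x_i) \rVert^{2}_{R_i^{-1}}
\; + \sum_{(i,j)} \frac{\bigl( r_{ij} - \lVert p_i - p_j \rVert \bigr)^{2}}{\sigma_{ij}^{2}} ,
\]
where $\lVert e \rVert^{2}_{W} = e^{\top} W e$, $p_i$ is the position component of $x_i$, $R_i$ is the covariance of node $i$'s own sensors and $\sigma_{ij}^{2}$ is the variance of the inter-node measurement. The second sum is the information a node cannot exploit on its own, and the weights show why accurately characterising sensor and signal performance matters: mis-specified $R_i$ or $\sigma_{ij}$ can make the cooperative solution worse rather than better, degrading the balance between performance and cost.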