Master Thesis

Development of the MRVR Robotic Dummy User: An Omnidirectional Robot Emulating Human Movements in an Interactive VR Environment

carried out for the purpose of obtaining the academic degree Master of Science, submitted to the Technische Universität Wien, Faculty of Mechanical and Industrial Engineering, under the direction of

Univ.-Prof. Mag. rer. nat. Dr. techn. Hannes Kaufmann (Institute of Visual Computing and Human-Centered Technology, Research Group: Virtual and Augmented Reality),
Univ.-Ass. Dipl.-Ing. Soroosh Mortezapoor (Institute of Visual Computing and Human-Centered Technology, Research Group: Virtual and Augmented Reality),
and Projektass. (FWF) Mohammad Ghazanfari, MSc (Institute of Visual Computing and Human-Centered Technology, Research Group: Virtual and Augmented Reality)

by Mr. Siddesh Bramarambika Shankar, BSc
Matriculation number: 12329513
Sonnenallee 105, Haus 2/2B, 1220 Wien

Wien, 26.06.2024
Siddesh Shankar

I have taken note that I have been granted permission to print my work under the title "Development of the MRVR Robotic Dummy User: An Omnidirectional Robot Emulating Human Movements in an Interactive VR Environment" only with the approval of the examination board. I further declare under oath that I have independently completed my thesis in accordance with the recognized principles for scientific treatises and have named all the resources used, in particular the literature on which it is based. I further declare that I have not yet submitted this thesis topic in any form as an examination paper, either at home or abroad (to an assessor for assessment), and that this work corresponds to the work assessed by the assessor.

Wien, 26.06.2024
Siddesh Shankar

Abstract

The emergence of collaborative robotic systems has led to increasingly close interactions between robots and human operators, raising concerns about safety, especially within systems like interactive virtual reality (VR) environments where physical boundaries are less perceptible.
The ability of collaborative systems to interact closely and safely with humans is a critical factor in their advancement. Therefore, establishing effective development methodologies that ensure safety and enable reliable testing is essential for their successful deployment. This study aims to develop a platform for the safe and reliable testing of collaborative mobile robotic systems that must dynamically adjust their position based on their proximity to human collaborators to ensure safe interaction. Specifically, the study focuses on developing a robotic system capable of reliably replicating complex human trajectories using holonomic mobile robots, thereby providing a safe and controlled platform for repeated testing. The study first explores a foundational approach to achieving the desired trajectory emulation using the Robot Operating System (ROS) and its navigation stack. This method's practical limitations, particularly in accurately replicating time-sensitive trajectories with the holonomic robot and in providing easy configuration for diverse motion patterns, are discussed. Learning from these limitations, the study progresses to a refined trajectory emulation framework composed of custom ROS-compatible modules designed to closely replicate human trajectories with high fidelity. Comprehensive evaluations conducted in both simulated and real-world environments demonstrate the effectiveness and robustness of the proposed framework. In simulation, the system achieved a mean positional error on the order of 1 × 10⁻⁵ m, with latency consistently maintained below 100 ms. In real-world tests, the framework maintained a mean positional error on the order of 1 × 10⁻³ m, also with latency below 100 ms. These results validate the system's performance while also highlighting areas for potential improvement in real-world deployment conditions.
The current study provides a solid foundation for the development of a safe and reliable testing platform, facilitating the advancement of intelligent and collaborative robotic systems.

Contents

1. Introduction 4
1.1. Problem Statement 4
1.2. Expected Outcome 6
1.2.1. Research Question 6
1.3. Methods and Approach 6
1.4. Structure of Written Document 7
2. Theoretical Foundation 10
2.1. Introduction of Robots in Virtual Reality 10
2.2. Mobile Robotics 11
2.2.1. Mathematical Foundation 12
2.2.2. Overview of Wheel Types 13
2.2.3. Localization in Mobile Robotics 14
2.2.4. Theoretical Framework for Robotic Motion 14
2.3. Robot Operating System (ROS) 16
3. Literature Review 19
3.1. Systematic Literature Analysis 19
3.2. Localization Systems 20
3.3. Tracking and Emulation Algorithms 21
4. Methodology 25
4.1. Research Approach 25
4.2. Data Collection Methods 26
4.3. Evaluation and Testing 27
4.3.1. Experimental Setup and Protocols 27
5. Implementation 31
5.1. Requirement Elicitation 31
5.2.
System Selection and Design Rationale 32
5.2.1. Selection of the Robotic Platform 32
5.2.2. Selection of Hardware Components 33
5.2.3. Selection of the Software Ecosystem for Robot Control 35
5.2.4. Selection of the Localization System 36
5.2.5. Selection of Navigation Framework for Precise Trajectory Emulation 37
5.3. Implementation and System Setup 38
5.3.1. Design and Structural Enhancements on the Robot 39
5.3.2. ROS Development Environment Setup 40
5.3.3. Implementation of Localization System 41
5.3.4. Implementation of the Navigation Stack 42
5.3.5. Implementation of the Preliminary Trajectory Emulation Framework 44
5.4. Analysis of Preliminary Trajectory Emulation Framework's Performance Discrepancies 46
5.4.1. Analysis of Emulation Algorithm's Performance Discrepancies 46
5.4.2. Analysis of Localization Performance Discrepancies 46
5.5. Implementation of Refined System 47
5.5.1. Implementation of Qualisys (mocap) based Localization System 48
5.5.2. Development and Implementation of 'Dummypath_planner' 49
5.5.3. Development and Implementation of 'Dummy_local_planner' 50
6. Results 54
6.1. Theoretical Experimentation and Results 54
6.1.1. Preliminary Emulation Framework using Simulated User Trajectory 54
6.1.2. Dummy_Trajectory_Emulation Framework using Simulated User Trajectory 57
6.1.3.
Dummy_Trajectory_Emulation Framework using Human VR User Trajectory 59
6.2. Practical Experimentation and Results 61
6.2.1. Human VR User Trajectory 62
7. Discussion 65
8. Conclusion 70
List of Figures 72
List of Tables 74
Acronyms 75
Appendix 77
A. Appendix 77
A.1. TebLocalPlannerROS Configuration 77
A.2. GlobalPlanner Configuration 78
A.3. Statistical Evaluation of the Dummy_Emulation_Framework's Performance in Simulation 81
A.4. Statistical Evaluation of the Dummy_Emulation_Framework's Performance on the Physical Robot 84
Bibliography 87

Chapter 1
Introduction

1.1 Problem Statement

In the field of interactive Virtual Reality (VR), various sensory stimuli are utilized to enhance immersion and interactivity with the system, including visual, auditory, olfactory, and haptic feedback. Among these, visual and auditory feedback are well developed and have received substantial research attention. In contrast, stimuli associated with touch and smell remain challenging to implement effectively. To address this gap, numerous studies have explored methods for providing haptic feedback in VR environments. Notable early efforts include the PHANToM arm in 1994 [1], the Impulse Engine in 1995 [2], and tactile feedback interfaces such as the 'Touch Master' in 1993 and the 'CyberTouch' glove in 1995. While these systems provide valuable haptic solutions, they are often inadequate in scenarios where the user is allowed to move freely in the environment, particularly for delivering force feedback to the user.
Providing effective haptic feedback in dynamic VR environments continues to present significant challenges. In response, a dedicated field of research has emerged: Encountered-Type Haptic Devices (ETHD), introduced by Hirota and Hirose [3] and further developed by Yokokohji et al. [4]. ETHD systems typically employ a robot to position a haptic interface at a desired location, enabling users to interact with it as they encounter it. Recent studies have explored various robotic implementations to advance this concept. One notable development in this area is the CoboDeck project [5], undertaken by the Virtual and Augmented Reality group at TU Vienna. The CoboDeck is an immersive VR haptic system offering free walking support and utilizing a collaborative mobile robot to deliver encountered-type haptic feedback. This research aims to advance collaborative mobile manipulator technologies, bridging the gap between the virtual and physical worlds.

The CoboDeck project represents a significant advancement in the field of ETHD. However, it remains in the proof-of-concept stage, and further development is required to demonstrate its safe and effective integration into an actual VR environment. A critical step in this process is ensuring that the mobile manipulator system can operate safely alongside VR users. To evaluate the functionality of CoboDeck's collaborative mobile manipulator, VR user interaction with the mobile manipulator must be thoroughly studied and adapted. However, this testing process is resource-intensive and poses potential risks of injury to the VR user interacting with the system. To mitigate these risks and ensure thorough evaluation, alternative methods for configuring the mobile manipulator system are necessary. One proposed solution involves the use of a dummy user, i.e., a mobile robot emulating the trajectory of an actual human user.
This approach allows the CoboDeck system to be tested and configured for safe operation without involving actual human users during early-stage evaluations [6]. Similar studies have explored the use of mobile robots to replicate human locomotion for research purposes [7]. In the proposed solution, a holonomic mobile robot equipped with a manikin, which serves as the dummy user, replicates the movements of a VR user, enabling comprehensive testing of the interactions between the mobile manipulator and the dummy user in the VR environment. This method facilitates an in-depth evaluation and configuration of the CoboDeck system's functionality, prioritizing safety while reducing the complexity and risks associated with direct human interaction during the testing phase.

Developing a robotic system capable of effectively emulating a dummy VR user requires addressing several critical challenges and conducting thorough research to inform the selection of appropriate hardware, software, and methodologies. While several trajectory emulation methods have been explored in prior studies [8], [9], their implementation on physical robots, particularly in complex, obstacle-dense environments, remains limited. Bridging this gap is essential for creating a robotic system that can reliably and accurately emulate the desired behavior in real-world scenarios. A critical aspect of this process is the selection and integration of the various systems essential for the robot's operation. This includes identifying a robust localization method, which is vital for achieving accurate trajectory emulation and maintaining precise positional accuracy in complex, obstacle-rich environments. Equally important is the incorporation of appropriate sensors to establish an effective navigation stack capable of real-time adaptability to dynamic and changing surroundings.
A frequently overlooked challenge in the field is the absence of standardized methodologies for implementing trajectory emulation or tracking techniques. While prior studies have proposed effective tracking algorithms, they often lack a cohesive, modular, and reproducible framework. Moreover, accurately replicating complex trajectories, such as those representing human movement, poses significant challenges, necessitating careful consideration of the robotic system's accuracy, efficiency, and stability to ensure precise trajectory adherence. To address this, the present research adopts a ROS-based approach, offering a structured and standardized methodology for developing trajectory emulation solutions using widely supported tools within the robotics community.

1.2 Expected Outcome

This research aims to systematically investigate key areas such as localization, sensing, and trajectory emulation to develop a comprehensive robotic solution that advances capabilities in dynamic and interactive VR environments. The objective is to replicate VR users' trajectories with adaptable control methodologies and dependable hardware. Building on these goals, the anticipated outcome of this research is a robotic trajectory emulation system that operates on standardized middleware and performs effectively in a physical robotic setup, accurately emulating a VR user and their movements. By delving into advancements in localization strategies, sensor integration, and trajectory emulation methodologies, this study seeks to deliver a solution that prioritizes speed, precision, and adaptability. This work represents a foundational step toward realizing a robotic system capable of accurately and reliably replicating VR user trajectories, contributing to the advancement of robotics in immersive and interactive VR scenarios.
The results of this research will be analyzed through a structured experimental framework designed to evaluate the robotic system's performance in terms of positional and temporal accuracy. Improvements achieved through the implementation of newly developed algorithms will be documented and discussed. The evaluation will begin with simulation-based visual assessments to refine the selected methodologies for sensing, navigation, and trajectory emulation. This phase will allow for iterative adjustments before transitioning to physical implementation. Following the simulation phase, the robotic system will undergo a series of controlled trials on a physical testbed. During these trials, the system will be tasked with following predefined trajectories that emulate a VR dummy user's movements generated in Unity, as well as trajectories recorded from actual VR users. Key performance metrics, including deviation from the desired trajectory and time taken to reach specific waypoints, will be recorded and analyzed. Furthermore, different localization systems will be evaluated based on the observed performance, and the shortcomings and advantages of each method will be analyzed and discussed.

1.2.1 Research Question

• How can a trajectory emulation algorithm be systematically developed within the Robot Operating System (ROS) framework to accurately replicate user-generated motion using a holonomic robot, while preserving critical spatial and temporal characteristics of the original trajectory?

1.3 Methods and Approach

The approach adopted in this thesis integrates the system development methodology proposed by Nunamaker et al. [10] with the research framework outlined by Kostas et al. [11]. The process begins with a comprehensive literature review to evaluate existing tools and methodologies for developing a robotic system capable of emulating complex trajectories.
Based on this review, the most suitable tools are analyzed, and new algorithms tailored to the system's requirements are developed, configured, and implemented in a simulated environment. This simulated implementation undergoes iterative refinement to enhance system performance. The refined system is then visually analyzed in the simulated environment to evaluate its accuracy and effectiveness before transitioning to physical implementation on the robot. Any limitations or challenges encountered during this phase are documented, and appropriate solutions are either implemented or proposed, prompting further exploration and development as needed to address these issues. The final implementation is tested against a set of predefined trajectories to ensure it aligns with the intended objectives. The outcomes of this research are thoroughly documented, analyzed, and discussed, with detailed recommendations provided for future advancements and potential areas of improvement.

The methodology adopted in this thesis for developing the robotic system to emulate VR users' trajectories is broadly divided into five key steps, each explained in detail in Chapter 4:

1. Tools Assessment: A comprehensive literature review is conducted to identify and evaluate the most effective trajectory emulation algorithms, as well as the hardware and software tools required for the development of the robotic system.

2. Simulation and Optimization: The tools selected from the literature review are implemented within a simulated environment and optimized to meet the system's requirements. The simulated performance is visually compared against a set of predefined sample trajectories to ensure alignment with the desired outcomes.

3. Development and Implementation of Custom Tools: To address any limitations identified in the selected tools or algorithms, new custom tools are developed and fine-tuned to better meet the system's specific requirements.

4.
Physical System Integration: The refined tools are configured and integrated into the physical robotic system. The robot's performance is then tested and recorded under real-world conditions.

5. Evaluation and Documentation: The final system is evaluated by comparing its performance against the expected trajectory to verify accuracy. The findings are thoroughly analyzed and documented, with detailed discussions provided to inform future improvements and further development.

1.4 Structure of Written Document

The written thesis is structured to comprehensively address the research question and to support the explanation of the steps taken in the development of the robotic system. Below is an overview of the main sections covered in each chapter:

1 Introduction: This chapter introduces the thesis, outlining the problem statement, expected outcomes, and methodological approach, along with an overview of the thesis structure.

2 Theoretical Foundation: This chapter explores the theoretical background of the research areas addressed, providing a foundation for understanding the concepts and principles underpinning the study.

3 Literature Review: This chapter provides insights into current technological advancements, developments, and gaps in the field, establishing the foundation for the research's novelty and relevance.

4 Methodology: The methodology chapter details the systematic approach taken in developing the robotic system. This section provides a comprehensive explanation to ensure the study's reliability and validity.

5 Implementation: This chapter documents the selection, setup, and development of the system's hardware and software components.

6 Results: This chapter presents the findings of the study, including data and insights obtained through the research methods used.

7 Discussion: The discussion chapter interprets the results, examining their implications, relevance to the research question, and alignment with existing literature.
8 Conclusion: The conclusion summarizes the main findings and contributions of the research, highlighting its significance. It also suggests directions for future research, building on the outcomes of this study.

Each chapter builds on the previous one to provide a cohesive exploration and documentation of the study's objectives, methods, results, and contributions.

Chapter 2
Theoretical Foundation

2.1 Introduction of Robots in Virtual Reality

VR is a technology that uses computer modeling and simulation to create an artificial three-dimensional (3D) environment, enabling users to interact with it through immersive sensory experiences. VR applications simulate reality by immersing users in computer-generated environments that respond to interactive devices such as goggles, headsets, gloves, and body suits, which transmit and receive sensory information. The evolution of VR technology can be traced back to the early 19th century with innovations like the stereoscope, invented by Sir Charles Wheatstone in 1838. This concept laid the foundation for immersive viewing experiences and was later expanded in the early 20th century through stereoscopic theater. The 1960s saw significant advancements in VR technology with Morton Heilig's Sensorama [12], which simulated environments using multi-sensory inputs, including sight, sound, vibration, and smell, propelling VR toward fully immersive engagement. In 1968, Ivan Sutherland introduced the first Head-Mounted Display (HMD), known as the 'Sword of Damocles' [13], marking a pivotal moment in VR history by incorporating head tracking for visual immersion. The 1990s marked a commercial push for VR with systems like the Virtuality Group's arcade machines, which featured HMDs, gloves, and joystick controls. However, limited processing power and high costs during this period hindered widespread adoption.
The 2010s marked the resurgence of VR technology, fueled by advancements in affordable high-performance computing. The release of the 'Oculus Rift' [14], alongside innovations by companies such as HTC, Sony, and Valve, revolutionized VR with high-resolution graphics, precise motion tracking, and increased accessibility. These developments transformed VR into a mainstream technology with applications across gaming, training, and research, driving its widespread adoption. With these diverse applications, the integration of robotics with VR offered transformative opportunities for designing, testing, and enhancing both systems through synergistic approaches. Since the 1990s, researchers have explored this combination, with significant studies utilizing VR as a simulated training environment for robots. For example, Kirsch et al. [15] employed VR to create dynamic, controlled environments where robots could autonomously learn, respond to virtual stimuli, and optimize task-based decision-making processes. This VR-based methodology facilitates complex robot training while minimizing the risks and constraints associated with physical experimentation, thereby accelerating the development of adaptive behaviors and advanced control mechanisms. Increasingly, VR applications in robotics have expanded to include areas such as robotic cell design [16], remote control of mobile robots and tele-operation, and enhancing Human-Robot Interaction (HRI) by mirroring real-world environments within VR [17]. Additionally, VR has been shown to improve collaborative task performance in HRI by providing realistic visual cues and stereoscopic displays [18]. In the domain of Mixed Reality (MR), systems like the one proposed by Xie et al. [19] integrate real and virtual robots with humans within virtual environments.
These systems enable the tracking and control of real robots in physical spaces while allowing users to navigate larger virtual spaces, further bridging the gap between virtual and physical environments. The integration of robotic systems in VR has been shown to enhance the delivery of sensory feedback to users. The current study aims to facilitate the development of one such system, specifically the CoboDeck project (Figure 2.1), by providing a stable and reliable testing platform to support system refinements.

Figure 2.1.: CoboDeck: (a) User interacting with a virtual wall (b) Mobile cobot presents prop to provide haptic feedback.

2.2 Mobile Robotics

A mobile robot is a machine designed to move and perform specific tasks within its environment. Classified as a subfield of robotics engineering, mobile robots differ from fixed robots, which are typically stationary and consist of a jointed manipulator and end-effector. Unlike their fixed counterparts, mobile robots possess the capability to navigate freely within their surroundings. Mobile robots can operate in various modes depending on their level of autonomy. Autonomous Mobile Robots (AMRs) are fully autonomous systems capable of navigating unstructured environments without the need for physical or electromechanical guidance [20]. Alternatively, some mobile robots rely on guidance systems to follow predefined routes, making them better suited for structured and controlled environments. These distinctions highlight the versatility and adaptability of mobile robots across diverse applications. This section provides the theoretical foundation for the field of robotics.

2.2.1 Mathematical Foundation

a. Frame of Reference: Frames of reference are foundational to robotic systems, as they establish the basis for representing the position and orientation of rigid bodies in space.
Typically defined by an origin and a set of orthogonal axes within a three-dimensional Cartesian coordinate system (x, y, z), these frames enable robots to interpret and manage spatial information relative to their own structure or to other objects within their environment.

Figure 2.2.: Reference frames of different rigid bodies

b. Homogeneous Transformation Matrices: Homogeneous transformation matrices are a fundamental mathematical tool in mobile robotics, enabling the seamless representation of rotations and translations within a unified framework. Represented as 4x4 matrices, they integrate a 3x3 rotation matrix with a 3x1 translation vector. These matrices facilitate the transformation of coordinates between different frames, thereby simplifying complex calculations of relative position and orientation between frames of reference. This matrix-based approach offers a robust framework for manipulating objects in three-dimensional space and provides a scalable solution for managing complex robotic configurations. The transformation matrix presented in equation (2.2.1) illustrates the transformation from the station frame to the goal frame, as depicted in Figure 2.2:

\[
{}^{m}_{b}T =
\begin{bmatrix}
{}^{m}_{b}R & {}^{m}t_{b,\mathrm{org}} \\
\vec{0}^{\,T} & 1
\end{bmatrix}
=
\begin{bmatrix}
{}^{m}_{b}R_{xx} & {}^{m}_{b}R_{xy} & {}^{m}_{b}R_{xz} & {}^{m}t_{b,x} \\
{}^{m}_{b}R_{yx} & {}^{m}_{b}R_{yy} & {}^{m}_{b}R_{yz} & {}^{m}t_{b,y} \\
{}^{m}_{b}R_{zx} & {}^{m}_{b}R_{zy} & {}^{m}_{b}R_{zz} & {}^{m}t_{b,z} \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{2.2.1}
\]

c. Forward and Inverse Kinematics: Forward kinematics refers to the process of determining a robot's overall position and orientation in its environment based on the known states of its joints or wheels. For instance, in the case of a differential drive robot, forward kinematics utilizes the rotational information of each wheel to compute the robot's pose, including its position and heading, within the global reference frame.
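The differential-drive forward kinematics just described, combined with the homogeneous pose representation of equation (2.2.1), can be sketched numerically. The wheel radius, track width, and speeds below are illustrative values, not parameters of the thesis robot:

```python
import numpy as np

def pose_matrix(x, y, theta):
    """Planar pose as a 4x4 homogeneous transform (rotation about z)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c, -s, 0.0, x],
        [s,  c, 0.0, y],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

def diff_drive_step(x, y, theta, wl, wr, r, b, dt):
    """Forward kinematics of a differential drive robot: integrate the pose
    from wheel angular velocities wl, wr (rad/s), wheel radius r (m), and
    track width b (m) over a small time step dt (s)."""
    v = r * (wr + wl) / 2.0          # linear velocity of the robot center
    w = r * (wr - wl) / b            # angular velocity about z
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += w * dt
    return x, y, theta

# Driving straight: both wheels at 2 rad/s with r = 0.1 m -> 0.2 m/s forward.
x, y, th = 0.0, 0.0, 0.0
for _ in range(50):                  # 50 steps of 0.1 s = 5 s
    x, y, th = diff_drive_step(x, y, th, 2.0, 2.0, r=0.1, b=0.4, dt=0.1)
T = pose_matrix(x, y, th)            # the pose in the form of equation (2.2.1)
```

After the loop, the robot has advanced roughly 1 m along its heading, and `T` expresses that pose as a homogeneous transform that can be composed with other frames by matrix multiplication.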
Inverse kinematics, on the other hand, involves calculating the specific joint configurations or wheel movements necessary for the robot to achieve a desired position and orientation within the environment. For example, if a mobile robot is tasked with reaching a target point while maintaining a specific heading, inverse kinematics determines the required wheel speeds or joint angles to guide the robot to the specified pose.

2.2.2 Overview of Wheel Types

In mobile robotics, the choice of wheel type significantly impacts the robot's maneuverability and control. Common types of wheels include standard wheels (a), caster wheels (b), Swedish (Mecanum) wheels (c), and ball/spherical wheels (d). Each type of wheel imposes different rolling and sliding constraints on the robot, resulting in different kinematic behavior. Figure 2.3 illustrates the various wheel options.

Figure 2.3.: Overview of wheel types in mobile robotics

Standard wheels, for example, allow movement in the forward and backward directions but prevent lateral sliding, providing stability but limited maneuverability. Caster wheels can swivel, allowing for smoother turns, but are harder to control precisely. Swedish or Mecanum wheels have rollers angled along the circumference, allowing movement in any direction by adjusting the speed and direction of each wheel, thus providing omnidirectional motion with fewer sliding constraints. These are the wheels employed on the robot in the current study.

2.2.3 Localization in Mobile Robotics

Localization is the process by which a mobile robot determines its position and orientation within a given environment. It is a fundamental component of autonomous navigation, as accurate localization enables the robot to understand where it is relative to its surroundings and, ultimately, reach its target destinations effectively.
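The omnidirectional behavior of the four-Mecanum-wheel platform described above can be sketched with its standard inverse kinematics: each wheel's angular velocity is a signed combination of the commanded forward, lateral, and rotational motion. The wheel radius and geometry below are illustrative placeholders, not the thesis robot's parameters:

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.2, ly=0.2):
    """Inverse kinematics of a four-Mecanum-wheel platform (X configuration):
    map a desired body twist (vx forward m/s, vy lateral m/s, wz rad/s)
    to wheel angular velocities (rad/s). r is the wheel radius; lx, ly are
    half the wheelbase and half the track width."""
    k = lx + ly
    return (
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    )
```

A purely lateral command, e.g. `mecanum_wheel_speeds(0.0, 0.5, 0.0)`, yields diagonal wheel pairs spinning in opposite directions, which is exactly the sideways motion that standard wheels cannot produce.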
In mobile robotics, localization typically involves comparing sensor data from the robot, such as LiDAR scans, camera images, or GPS coordinates, with a known map or model of the environment. To enhance accuracy and mitigate uncertainty, sensor fusion techniques integrate data from multiple sensors. Common methods for estimating the robot's pose include probabilistic localization, particle filters, Kalman filters, graph-based SLAM, and visual odometry, which address sensor noise and adapt to changes in the environment. The particle filter, or Adaptive Monte Carlo Localization (AMCL), and Kalman filters are among the most widely used methods for estimating a robot's pose.

2.2.4 Theoretical Framework for Robotic Motion

In robotics, path and trajectory are fundamental concepts that govern a robot's ability to navigate efficiently within its environment. A path refers to the route a robot follows, specifying positions in space without accounting for the time required to execute the motion. In contrast, a trajectory incorporates temporal aspects, detailing not only the robot's path but also its velocity, acceleration, and jerk at each moment in time. These distinctions are critical for enabling smooth and precise motion, particularly in high-speed applications where minimizing actuator stress and mechanical vibrations is essential. Therefore, effective planning, whether for a path or a trajectory, is essential for the success of mobile robotics.

a. Path Planning: Path planning is the process of determining an optimal or feasible route for a robot to travel from a starting point to a target destination. This process involves generating a collision-free path by considering static obstacles in the environment while optimizing specific criteria, such as minimizing travel time, distance, or energy consumption. The outcome of path planning is typically a sequence of waypoints or a continuous path that guides the robot's movement.
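The waypoint-generation process just described can be illustrated with a classic grid search. Below is a minimal A* sketch over a 4-connected occupancy grid; the grid encoding (0 free, 1 blocked) and unit move costs are assumptions for the example, not the representation used in the thesis:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid (0 = free, 1 = blocked),
    using a Manhattan-distance heuristic. Returns a list of (row, col)
    cells from start to goal, or None if the goal is unreachable."""
    tie = itertools.count()          # tiebreaker so the heap never compares parents
    open_set = [(abs(start[0] - goal[0]) + abs(start[1] - goal[1]),
                 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, cell, parent = heapq.heappop(open_set)
        if cell in came_from:        # already expanded via a cheaper route
            continue
        came_from[cell] = parent
        if cell == goal:             # reconstruct the path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    h = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(open_set, (ng + h, ng, next(tie), nxt, cell))
    return None
```

On a small map with a wall across the middle row, the planner routes around the obstacle and returns the shortest sequence of free cells, which a downstream trajectory planner would then time-parameterize.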
However, traditional path planning methods often lack adaptability to dynamic changes in the environment. Prominent path planning algorithms can be broadly categorized into graph-based methods and sampling-based methods. Graph-based algorithms, such as A* and Dijkstra's algorithm, operate on grid or graph representations of the environment and are renowned for their efficiency and optimality in structured spaces. In contrast, sampling-based algorithms, including Rapidly-exploring Random Trees (RRT) and Probabilistic Roadmaps (PRM), are well-suited for high-dimensional or complex environments, where they construct feasible paths by randomly sampling potential routes. Additionally, hybrid approaches and optimization-based planners, such as Model Predictive Control (MPC), have been developed to address the limitations of traditional methods. These approaches combine the strengths of graph-based and sampling-based algorithms, enabling robust and adaptable path planning in dynamic and uncertain environments. Such advancements ensure more reliable navigation for robots operating in real-world scenarios. b. Trajectory planning: Trajectory planning builds upon the generated geometric path by incorporating time-based information, thereby defining critical aspects such as velocity, acceleration, and jerk throughout the motion. Trajectory planning involves considerable complexity, as the time constraints for reaching the via-points significantly impact both the kinematic and dynamic properties of the motion. For instance, the inertial forces and torques acting on the robot are directly influenced by the accelerations along the trajectory. Additionally, mechanical vibrations in the robot's structure are primarily determined by the jerk (the derivative of acceleration).
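The relationship between a timed position sequence and its velocity, acceleration, and jerk profiles can be made explicit with successive finite differences. This is a simplified one-dimensional sketch, not the planner used in this work; the sample values are invented for illustration:

```python
def finite_differences(times, values):
    """Approximate the first derivative of a sampled signal by forward
    differences; applying it repeatedly yields velocity, acceleration,
    and jerk profiles from a timed position sequence."""
    return [(v1 - v0) / (t1 - t0)
            for (t0, v0), (t1, v1) in zip(zip(times, values),
                                          zip(times[1:], values[1:]))]

# Positions sampled every 0.1 s along one axis (quadratic profile,
# i.e. constant acceleration and therefore zero jerk).
t = [0.0, 0.1, 0.2, 0.3, 0.4]
x = [0.0, 0.01, 0.04, 0.09, 0.16]
v = finite_differences(t, x)        # velocity profile
a = finite_differences(t[1:], v)    # acceleration profile
j = finite_differences(t[2:], a)    # jerk profile
```

A trajectory planner effectively runs this reasoning in reverse: it chooses the timing of the via-points so that the resulting acceleration and jerk profiles stay within the robot's limits.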
Therefore, careful consideration of trajectory planning is essential to ensure not only smooth and efficient motion but also the long-term durability and performance of robotic systems. c. Trajectory emulation: Trajectory emulation is a fundamental concept in robotic motion, concerned with the robot's ability to adhere to a pre-planned trajectory so that it stays on course toward its target. This involves achieving robotic motion that adheres to the poses, velocities, and accelerations defined by the reference trajectory. To achieve this, control mechanisms, often employing feedback and feedforward strategies, are utilized to correct any deviations between the robot's actual motion and the desired trajectory. In mobile robotics, trajectory emulation must account for the robot's kinematic and dynamic constraints, such as its velocity and acceleration limits. Advanced trajectory emulation methods address these challenges by integrating robust control algorithms, including Proportional-Integral-Derivative (PID) control, MPC, and Sliding Mode Control (SMC). These algorithms calculate the control inputs required to minimize errors in position and orientation while ensuring smooth and stable motion. The effectiveness of trajectory emulation is further influenced by the quality of localization and sensor data, which provide real-time information about the robot's pose. Overall, trajectory emulation serves as a critical component in enabling mobile robots to perform complex tasks while ensuring reliability and efficiency in diverse applications. d. Navigation: Navigation is another fundamental concept in mobile robotics, enabling the robot to autonomously reach a specified destination by following a planned path while effectively avoiding obstacles. This process involves several interrelated components, including localization, path planning, and obstacle avoidance.
Together, these elements allow the robot to perceive and interpret its environment, determine a safe and efficient route, and dynamically adapt its movements in response to changes within its surroundings. Several algorithms have been proposed for mobile robot navigation, some of the most prominent being the Vector Field Histogram (VFH), the Dynamic Window Approach (DWA), the Elastic Band, and the Timed-Elastic Band (TEB). The following section provides a brief overview of the Elastic Band algorithm, which is utilized in this project. d.1. Elastic Band Algorithm: The Elastic Band algorithm is a mobile robot navigation method that dynamically adjusts a robot's trajectory based on its immediate surroundings. Starting with a global path generated by a planner, the Elastic Band algorithm treats this path as a sequence of 'bands' or links that connect the robot's position to the goal. These bands are flexible and act like elastic links, allowing the path to stretch, contract, or shift in response to detected obstacles. As the robot moves along the path, the Elastic Band algorithm continuously reshapes the trajectory to avoid obstacles while preserving smoothness of movement. Figure 2.4 below provides a visual representation of the working principle of the Elastic Band algorithm. Figure 2.4.: Elastic Band algorithm illustration The TEB algorithm extends the Elastic Band approach by incorporating time as an additional optimization dimension. This temporal component allows TEB to optimize not only the path's spatial smoothness and obstacle clearance but also the timing of the robot's movements, controlling its speed along the path. 2.3 Robot Operating System (ROS) The Robot Operating System [21] is a widely recognized open-source middleware designed to facilitate the development and control of robotic systems.
ROS is widely adopted by the robotics community, offering standardized tools, libraries, and conventions to support the development and integration of complex robotic applications such as navigation, manipulation, and perception. ROS operates on a decentralized, peer-to-peer network of 'nodes' that communicate with each other through a publisher-subscriber messaging mechanism. Nodes are executable processes that perform specific functions, such as sensor data acquisition or motion planning. Communication between nodes occurs via message channels called topics, which can be published to or subscribed to by nodes providing or needing data. This flexible and modular communication structure supports the development of complex robotic behaviors, allowing each component of the robot to be independently developed, tested, and reused. Gazebo, a simulation environment that integrates with ROS, enables users to simulate robotic hardware and test algorithms in virtual environments before deploying them on physical robots. ROS also provides its own visualization tool, RViz, which allows for real-time monitoring and debugging of sensor data, robot states, and environment mapping, providing insights crucial for optimizing robotic operations. Open Robotics, the organization that maintains ROS, has introduced an updated version of ROS, namely ROS 2. While ROS 1 still remains widely used, its monolithic design presents limitations in terms of scalability, security, and real-time capabilities. These challenges are addressed in ROS 2. Although ROS 2 is an effective tool compared to ROS 1 [22], this project is still based on ROS 1 due to certain computational constraints.
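The publisher-subscriber pattern underlying ROS topics can be illustrated in a few lines of plain Python. This in-process stand-in is purely conceptual (real ROS nodes are separate processes communicating over a network), and the topic name and message fields are arbitrary examples:

```python
class TopicBus:
    """Minimal in-process model of ROS-style topic communication:
    nodes publish messages to named topics, and every callback
    subscribed to that topic receives each message."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)

# A motion-control "node" reacting to odometry updates from a sensor "node".
bus = TopicBus()
received = []
bus.subscribe("/odom", received.append)
bus.publish("/odom", {"x": 1.2, "y": 0.4, "theta": 0.1})
```

The key property this models is decoupling: the publisher never references its subscribers directly, which is what allows ROS components to be developed, tested, and replaced independently.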
Chapter 3 Literature Review This chapter presents an overview of the state of the art in mobile robotics, specifically the robotic subsystems and control architectures essential for the accurate and responsive emulation of complex trajectories, with a particular emphasis on their applicability within indoor environments. Central to this review is the evaluation of localization techniques that prioritize high precision and low latency, both of which are critical for effective indoor operation. The review also analyzes existing trajectory tracking and emulation methodologies, assessing their capability to enhance the precision, stability, and overall performance of robotic systems when emulating reference trajectories. The overarching aim is to establish a solid foundation for the development of robotic systems that can effectively address current challenges and opportunities for innovation in trajectory emulation. 3.1 Systematic Literature Analysis To ensure a thorough and methodologically sound foundation for the study, a structured literature review was conducted following clearly defined inclusion and exclusion criteria. The inclusion criteria for this literature review emphasize peer-reviewed journal articles, conference proceedings, and high-quality grey literature published within the past 15 to 20 years, ensuring alignment with contemporary research trends and the objectives of the current study. Selected works must explicitly address at least one of the following key areas: mobile or holonomic robotic systems, robotic system architecture and design, localization techniques, or trajectory tracking, trajectory emulation, and path tracking. Studies falling outside these domains, lacking peer review, or not directly relevant to the research aims are excluded. The literature search was conducted using major academic databases, including IEEE Xplore, ACM Digital Library, SpringerLink, Scopus, and Google Scholar.
Boolean search operators were employed to construct targeted queries, facilitating a precise and systematic identification of relevant sources. Articles were further assessed based on the relevance of their titles, abstracts, and full texts to ensure the inclusion of studies that met the predefined criteria. This rigorous, transparent, and comprehensive process aimed to identify high-quality and pertinent research, forming a robust foundation for addressing the study's research questions. 3.2 Localization Systems Localization plays a critical role in mobile robotics, as it enables the accurate estimation of a robot's position and orientation within its environment. Precise localization is fundamental for allowing robots to follow planned trajectories and perform tasks with high accuracy, especially in applications such as trajectory emulation, where low-latency positional updates are essential. Achieving the desired performance in such systems requires the integration of localization algorithms and techniques that are specifically suited to the application. This section presents a detailed overview of prominent localization methods in mobile robotics, with an emphasis on those most applicable to the performance demands and constraints of the current study. One of the simplest and most widely used localization methods is the probabilistic approach, which accounts for the inherent uncertainty in a robot's observations by explicitly representing uncertainty in its decision-making process. Probabilistic robotics has enabled the development of systems with unprecedented levels of autonomy and robustness. A prominent example of this approach is Monte Carlo Localization (MCL) [23], which models the probability density as a discrete distribution of weighted position hypotheses and employs a recursive Bayesian particle filter to refine position estimates.
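The weighted-hypothesis idea behind MCL can be sketched in one dimension: each particle is a position hypothesis that is propagated by the motion model, weighted by the measurement likelihood, and then resampled. The Gaussian noise model, the identity measurement model, and all parameter values below are illustrative assumptions:

```python
import math
import random

def mcl_step(particles, control, measurement, noise=0.05):
    """One predict-weight-resample cycle of a 1-D Monte Carlo
    localization filter with an identity measurement model."""
    # Predict: propagate each hypothesis through a noisy motion model.
    moved = [p + control + random.gauss(0.0, noise) for p in particles]
    # Weight: Gaussian likelihood of the measurement for each particle.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * noise ** 2))
               for p in moved]
    total = sum(weights)
    if total == 0.0:
        return moved                    # degenerate case: keep the prediction
    # Resample: draw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(3):                      # stationary robot observed at x = 3.0
    particles = mcl_step(particles, control=0.0, measurement=3.0)
estimate = sum(particles) / len(particles)
```

After a few updates the initially uniform particle cloud collapses around the true position, which is the mechanism that lets MCL recover a pose from a globally uncertain prior.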
Over the years, numerous advancements and extensions of the MCL method have been proposed to enhance its performance, such as Adaptive MCL [24], Fast-MCL [25], and Vision-Based MCL [26], among others. One of the main limitations of conventional localization systems is that when a mobile robot begins to localize itself in an unknown environment, it lacks prior knowledge of its surroundings. To perform tasks that rely on location information, the robot can utilize an environmental map. This map can incorporate landmarks or markers, enabling the robot to determine its relative position within the environment. A localization system of this kind, designed for indoor navigation, is detailed in [27] and [28]. This system leverages a robust design based on artificial visual landmark and marker recognition. The markers consist of unique patterns, typically black-and-white concentric circles or square patterns, which ensure consistent identification under varying lighting conditions. The recognized markers enable the robot to determine its position and alignment by analyzing the geometric relationships between the landmarks in the image and the robot's surroundings. Ceiling-mounted markers have proven particularly effective in reducing obstructions compared to other configurations, enhancing the system's reliability. However, further advancements are needed to improve camera versatility and optimize the placement of markers. Enhancing the speed and accuracy of marker recognition is also critical to enable the system's application in mobile robotics scenarios requiring real-time processing. Triangulation is another prominent approach to robot localization, relying on the recognition and interpretation of features or objects within the environment from multiple sources. By processing this data, the robot determines its position relative to its surroundings. Triangulation methods are typically categorized based on the type of
landmarks or beacons utilized, and they can be classified into four primary approaches: active beacons, artificial landmarks, natural landmarks, and environmental models. Active beacons are specific landmarks installed at known locations within the environment that emit signals such as ultrasonic, infrared, or RF waves [29]. These signals are detected by the robot, enabling it to determine its absolute position by analyzing the direction and distance of the received signals. Artificial landmarks are specially designed objects placed at predefined positions in the environment to serve as distinct reference points for localization [30]. Natural landmarks utilize inherent features of the environment [31], such as walls, edges, or other naturally occurring characteristics, that can be detected and used by the robot's sensors for localization. Environmental models, on the other hand, rely on prior knowledge of the environment, correlating sensor observations with pre-existing data, such as maps, to determine the robot's position. The choice of triangulation method depends on the operational context of the robot, environmental constraints, and the sensing technologies available. In general, triangulation-based localization methods provide low-latency position estimates while requiring minimal active onboard components, thereby reducing the likelihood of interference with the localization process. These characteristics make triangulation a suitable and effective solution for the requirements of the present study. 3.3 Tracking and Emulation Algorithms Path tracking is a crucial technique in robot motion control, which ensures that a robot accurately follows a predefined path.
An extension of this technique is trajectory tracking, which ensures that the agent follows the predefined trajectory while also meeting specific trajectory constraints such as poses, velocities, and accelerations, thereby guaranteeing that the agent adheres to both the desired path and its associated dynamic motion constraints. This time-sensitive synchronization is particularly important in scenarios like collaborative robotics, where multiple systems must operate seamlessly, or in industrial robots employed in logistics, where precise timing directly impacts reliability, efficiency, and productivity. Trajectory tracking in wheeled mobile robots has received considerable attention in the scientific community due to its broad applicability and practical relevance. Although the differential-drive configuration has been the most extensively studied among mobile robot kinematic models, other configurations, such as omnidirectional and tricycle robots, have also been explored to a limited extent. This review focuses specifically on trajectory tracking methods and, further, trajectory emulation algorithms designed for omnidirectional robots. One of the earliest studies on path tracking for omnidirectional robots was conducted by Betourné et al. [32], who introduced a dynamic model and an output feedback control law tailored to the unique characteristics of omnidirectional robots. Additionally, Sira-Ramirez et al. [33] developed an output linear feedback mechanism utilizing a Generalized Proportional-Integral (GPI) observer to estimate and mitigate unknown uncertainties effectively. Their work addressed practical challenges, including wheel slip and coordination imperfections, thus establishing a robust framework for enhancing tracking performance under real-world conditions.
Building on this foundational work, subsequent studies have investigated various strategies for path tracking and stabilization of omnidirectional robots, with particular attention to addressing dynamic effects, parameter variations, and uncertainties. These advancements have significantly contributed to the development of reliable and efficient tracking solutions for omnidirectional mobile robots in diverse application scenarios. Substantial research on path tracking for omnidirectional robots has focused on various control techniques, demonstrating significant improvements in accuracy and efficiency. For example, Cong et al. [34] proposed a method for path planning and following based on uniform cubic B-spline curves, which reduced computational complexity and provided smooth, continuous paths suitable for omnidirectional mobile robots. Their approach notably addresses both offline and online planning scenarios, ensuring flexibility and real-time adaptability in dynamic environments. Further, Kanjanawanishkul et al. [35] explored the application of MPC for path following in omnidirectional robots, highlighting its advantages in handling system constraints and optimizing path progression rates. Their study emphasized MPC's capability to manage high-speed movements safely, making it especially useful in highly dynamic situations such as robotic soccer competitions. Berntorp et al. [36] introduced a novel method employing convex optimization to achieve time-optimal path tracking with integrated obstacle avoidance for pseudo-omnidirectional mobile robots. This technique allowed for the real-time regeneration of optimal trajectories, demonstrating robustness against uncertainties and disturbances commonly encountered in practical scenarios. Their results also underscored the feasibility of implementing such advanced control methods in real-time applications. Furthermore, Vázquez et al.
[37] developed a computed-torque control strategy, adapting techniques traditionally used in robotic manipulators to omnidirectional robots. Their approach demonstrated promising results in path tracking performance by leveraging the dynamic model of the robot, ensuring both stability and precise error convergence. Through these studies, a comprehensive understanding of the research area is provided, facilitating the development of the current study by highlighting ongoing challenges related to real-time computation, path complexity, and system uncertainties. Moreover, considerable attention has also been given to trajectory tracking algorithms in recent research. Huang et al. [38] proposed an adaptive backstepping control method designed to enhance trajectory tracking performance, particularly under challenging operational conditions. Similarly, Xu et al. [39] developed a hybrid control approach that integrates robust neural networks with sliding mode control techniques to effectively manage uncertainties and significantly improve tracking accuracy. While these studies offer robust methods for trajectory tracking, they largely overlook higher-level considerations, such as issues caused by trajectory smoothing and the incorporation of standardized middleware, which could simplify implementation and integration. To address these limitations, further literature must be reviewed to explore solutions that consider these aspects comprehensively. A key challenge identified above is accurately following trajectories with complex paths. This limitation often arises from the constraints of existing algorithms, which frequently rely on trajectory smoothing techniques. These techniques simplify the original trajectory by creating a smoothed version that is easier to follow, as highlighted by Amarasiri et al. [40] and J. Wang et al. [41].
While trajectory smoothing is effective in certain scenarios, it can lead to deviations from the original path, compromising the fidelity of emulating the reference trajectory. Although this study will incorporate trajectory smoothing to some extent, it aims to address the limitations associated with current algorithms that rely heavily on smoothing. By focusing on minimizing the deviation from the reference trajectory, this research seeks to enhance the accuracy and precision of trajectory emulation, particularly for complex paths. In efforts to leverage standardized middleware, Besseghieur et al. [42] proposed a ROS-based framework for trajectory tracking in nonholonomic robots. This framework utilizes Lyapunov-based control laws for trajectory tracking in conjunction with AMCL for accurate robot positioning. Experimental results demonstrated the framework's reliability, with minor errors attributed to localization limitations. More recently, Santiago et al. [43] introduced a navigation system for the TurtleBot3 Burger, developed on ROS 2. This system integrates trajectory tracking using a proportional controller with obstacle avoidance based on the Artificial Potential Field (APF) algorithm, demonstrating an effective approach to autonomous navigation and showcasing the potential for middleware-based implementations in trajectory tracking. Collectively, these studies contribute to a deeper understanding of the research landscape, underscoring persistent challenges such as real-time computation demands, path complexity management, and the mitigation of system uncertainties, thereby informing and supporting the advancement of the current study. Chapter 4 Methodology This section presents the methodological framework adopted to achieve the core objectives and address the research questions of this study. It offers a detailed explanation of the research design, implementation strategies, data acquisition techniques, and evaluation procedures.
The rationale behind the selected methodology is thoroughly discussed, highlighting its suitability for the development and assessment of the proposed trajectory emulation system. 4.1 Research Approach The development process is structured into three primary phases: component selection, system implementation, and performance evaluation. Each phase is formulated to ensure a methodical progression toward accurate trajectory emulation, beginning with the careful selection of appropriate system components and culminating in the comprehensive validation of the developed system's performance under real-world conditions. The initial phase centers on the selection of appropriate hardware, software, and simulation platforms that are critical for the development of the robotic system. This selection is driven by key criteria such as computational efficiency and suitability for real-time processing. Essential components identified in this phase include the robotic platform, the computational platform, the localization system, and collision avoidance mechanisms, all of which are vital for achieving high-fidelity and safe trajectory emulation. Furthermore, a dedicated simulation environment is employed to enable preliminary testing and validation under controlled conditions. This simulation-based approach facilitates the systematic evaluation and optimization of system components prior to physical deployment, thereby minimizing the likelihood of performance issues or integration failures in real-world scenarios. The second phase is dedicated to the implementation and development of the robotic system and the associated algorithms required for trajectory emulation. It begins with the selection of an appropriate computing platform, the integration of localization and collision avoidance modules, and the configuration of a simulation environment that accurately reflects real-world operational constraints.
The initial implementation leverages existing tools within the Robot Operating System (ROS) framework to achieve trajectory emulation. The performance of these tools is systematically analyzed, and any shortcomings in terms of accuracy and stability are identified. To address these limitations, custom algorithms are tailored based on the findings from the initial implementation and the system's specific requirements. These refined methods are subsequently evaluated for their effectiveness in accurately reproducing complex, VR user-generated trajectories. This process continues iteratively until satisfactory results are achieved. Prior to real-world deployment, the enhanced algorithms undergo thorough validation in the simulated environment to ensure a robust and reliable transition to practical application. The final phase focuses on evaluating the performance of the implemented solution in a real-world scenario. A rigorous evaluation framework is established, employing both graphical and statistical analysis methods to assess the system's overall effectiveness in replicating the reference trajectory. This assessment ensures that the developed system is capable of reproducing trajectories with minimal deviation from the references. By following this structured methodology, the study establishes a reliable framework for developing and validating the trajectory emulation algorithm, ensuring that the robotic system can effectively transition from simulation-based development to real-world implementation. 4.2 Data Collection Methods Reference trajectory data collection: The reference trajectory data to be emulated in this study is collected from two primary sources: (1) simulated user movement in Unity and (2) real-world motion capture of a VR user. A synthetic trajectory was initially generated in Unity using a built-in trajectory generation algorithm.
The constraints applied to this algorithm were derived from a review of existing literature on human locomotion, ensuring that the resulting motion patterns exhibited realistic and biologically inspired characteristics. A simulated dummy robot was then programmed to follow the generated trajectories, during which its odometry data, comprising position, velocity, and orientation, was recorded at predefined time intervals. This odometry data was stored in a ROS bag file, serving as the primary reference source for the trajectory to be emulated. The resulting dataset provides a reliable reference for assessing the performance of the trajectory emulation algorithm within a controlled simulation environment. To validate the algorithm in a real-world scenario, a second reference trajectory dataset will be collected from actual VR users. The user is equipped with motion capture markers placed on various parts of the body, which are tracked and recorded using the Qualisys motion capture (mocap) system. These recorded trajectories will serve as the primary reference for the physical robot to emulate, enabling a rigorous assessment of the algorithm's ability to replicate realistic VR user movement patterns in a physical environment. Emulated trajectory data collection: To ensure accurate spatial and temporal measurement of the robot's emulated trajectory for comparison with the reference data, experiments involving the physical robot running the trajectory emulation framework will be conducted using the Qualisys motion capture localization system, which will be employed to precisely track and record the robot's movements. This setup enables a thorough evaluation of the robot's compliance with both the spatial and temporal characteristics of the reference trajectories, thereby facilitating an accurate assessment of the emulation algorithm's performance.
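Comparing a reference and an emulated trajectory waypoint-by-waypoint requires both recordings to be sampled on a common timeline. The sketch below linearly interpolates recorded (timestamp, x, y) samples onto a fixed rate; it is a simplified illustration, not the exact preprocessing applied to the ROS bag or Qualisys data:

```python
import bisect

def resample_trajectory(stamps, poses, dt):
    """Resample timestamped (x, y) poses onto a uniform timeline by
    linear interpolation, so two recordings can be compared
    waypoint-by-waypoint at identical times.

    stamps : strictly increasing timestamps [s]
    poses  : (x, y) tuples, one per timestamp
    dt     : desired output sample period [s]
    """
    out = []
    t = stamps[0]
    while t <= stamps[-1] + 1e-9:
        i = bisect.bisect_right(stamps, t)
        if i >= len(stamps):
            out.append(poses[-1])        # clamp at the final sample
        else:
            t0, t1 = stamps[i - 1], stamps[i]
            alpha = (t - t0) / (t1 - t0)
            (x0, y0), (x1, y1) = poses[i - 1], poses[i]
            out.append((x0 + alpha * (x1 - x0), y0 + alpha * (y1 - y0)))
        t += dt
    return out
```

Once both trajectories are resampled with the same dt, positional and temporal deviations can be computed index-by-index.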
4.3 Evaluation and Testing The evaluation of typical navigation systems is commonly performed by measuring and analyzing key performance indicators such as the length of the planned path, the time required for the robot to complete its trajectory, and, in some cases, qualitative metrics like collision risk and trajectory smoothness [44]. However, studies specifically focused on robotic systems designed to follow a predefined reference trajectory, such as the current study, are relatively scarce, resulting in a lack of standardized evaluation protocols for systematically assessing these control architectures [45]. To experimentally evaluate and validate the proposed approach, the evaluation framework outlined by the Performance Metrics for Intelligent Systems Workshop 2007 [46] is adopted as a reference. This ensures that the assessment methodology adheres to established standards, thereby enabling a rigorous and objective analysis of the system's effectiveness in achieving accurate trajectory emulation. 4.3.1 Experimental Setup and Protocols The initial evaluation of the performance of the developed emulation algorithm is performed in a simulated environment using ROS and Gazebo, where the recorded trajectories are used to theoretically assess the accuracy and effectiveness of the implemented trajectory emulation approach. Following this, the evaluation is extended to the physical robotic system, enabling a comprehensive analysis of its performance under real-world conditions. The combination of simulated and real-world evaluations ensures a thorough validation of the system's ability to accurately replicate user-generated reference trajectories. The protocols and methodologies employed for assessing both simulated and physical implementations are outlined in detail in the subsequent sections.
Evaluation of the Trajectory Emulation Algorithm: All the emulation algorithms, including the enhanced trajectory emulation algorithm, are evaluated to assess their effectiveness in accurately following the reference trajectory. Protocol: The trajectory toward the goal is assessed in both the spatial and temporal dimensions to ensure precise trajectory emulation. An optimal trajectory is defined as one in which the robot closely follows the reference trajectory with minimal deviations in position and orientation at each waypoint while also minimizing the time difference between consecutive waypoints. In accordance with the evaluation protocols outlined in [46], this study adopts methodologies similar to those discussed in that research to ensure a systematic assessment of the developed system. The Mean Distance to the Goal (Mgd) is the primary metric used for evaluating trajectory emulation accuracy; it quantifies the robot's ability to accurately follow the reference path. A crucial factor in determining the effectiveness of a trajectory tracking or emulation system is its capability to adhere to a path that leads to a predefined goal (i.e., waypoints). To evaluate the precision of trajectory emulation, the mean deviation between the robot's actual emulated path and the reference path along the trajectory is analyzed. The Mean Distance to the Goal (Mgd) is computed by integrating the squared proximity $l_n$ to the waypoints over the entire trajectory length and normalizing it by the total number of waypoints $N$, as shown in the following equations:

$l_n = \min_{\forall n} \sqrt{(x_n^r - x_n)^2 + (y_n^r - y_n)^2}$  (4.1)

$M_{gd} = \frac{\int_0^l l_n^2 \, ds}{N}$  (4.2)

where $(x_n^r, y_n^r)$ represents the reference trajectory coordinates, $(x_n, y_n)$ represents the robot's actual position at each waypoint $n$, and $l_n$ is the shortest distance from the robot's position to the reference trajectory.
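A discrete approximation of the Mgd metric can be computed directly from sampled trajectories; here the integral over the path is approximated by a sum over waypoints, which is an assumption about the discretization rather than the exact evaluation code used in this study:

```python
import math

def mean_distance_to_goal(reference, actual):
    """Discrete Mgd: for each actual waypoint, find the shortest
    distance l_n to any reference waypoint (Eq. 4.1), then average
    the squared distances over the N waypoints (Eq. 4.2 with the
    path integral replaced by a sum)."""
    squared = 0.0
    for (x, y) in actual:
        l_n = min(math.hypot(xr - x, yr - y) for (xr, yr) in reference)
        squared += l_n ** 2
    return squared / len(actual)
```

For example, an emulated trajectory offset from the reference by a constant 0.1 m yields an Mgd of 0.01 m², while a perfectly tracked trajectory yields zero.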
A lower Mgd value indicates that the robot closely follows the reference path, signifying high accuracy in trajectory emulation in terms of positional adherence. Conversely, a higher Mgd value suggests significant deviations from the reference path, which may result from suboptimal robot control or external disturbances affecting the robot's movement. By systematically analyzing the Mgd, the overall performance of the trajectory emulation algorithm can be effectively assessed.

To incorporate the robot's orientation (heading) into the evaluation of trajectory adherence, the methodology used for position tracking is extended to include angular alignment. The angular deviation between the robot's actual heading and the desired orientation along the reference path is measured to ensure that the robot not only follows the spatial path but also maintains proper alignment with the intended trajectory. To quantify this aspect, the Mean Heading Deviation (Mhd) is introduced as a complementary metric. This metric evaluates the accuracy of the robot's orientation along the reference trajectory by computing the squared angular deviation θ_n between the robot's actual heading and the reference heading at each waypoint, integrated across the entire trajectory and normalized by the total number of waypoints N. This allows for a comprehensive assessment of the robot's adherence to the trajectory in both the spatial and orientational dimensions.

\theta_n = \min_{\forall n} |\theta^r_n - \theta_n| \quad (4.3)

M_{hd} = \frac{\int_0^l \theta_n^2 \, ds}{N} \quad (4.4)

In research on trajectory emulation algorithm performance, statistical analysis is rarely utilized to assess temporal adherence. Instead, evaluations are commonly conducted using time-domain plots that compare tracking errors, providing a visual representation of the system's ability to maintain synchronization with a fixed-time constraint. In this study, a similar approach is employed, where plots are used to quantify temporal deviations across all waypoints.
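As a concrete illustration, discrete forms of these metrics can be computed from logged waypoint pairs. The sketch below is a simplified discrete approximation of Equations 4.1 to 4.4, assuming the reference and emulated trajectories have already been resampled to the same number of waypoints; the function names are illustrative and not part of the actual evaluation tooling:

```python
import math

def mean_distance_to_goal(reference, actual):
    """Discrete approximation of Mgd: the mean of the squared
    positional deviation l_n over N matched waypoints."""
    total = 0.0
    for (xr, yr), (x, y) in zip(reference, actual):
        l_n = math.hypot(xr - x, yr - y)  # deviation from the reference waypoint
        total += l_n ** 2
    return total / len(reference)

def mean_heading_deviation(ref_headings, act_headings):
    """Discrete approximation of Mhd: the mean of the squared angular
    deviation theta_n (radians) over N matched waypoints."""
    total = 0.0
    for tr, t in zip(ref_headings, act_headings):
        # wrap the heading difference into [0, pi] before squaring
        theta_n = abs(math.atan2(math.sin(tr - t), math.cos(tr - t)))
        total += theta_n ** 2
    return total / len(ref_headings)
```

A perfectly emulated trajectory yields zero for both metrics; a constant lateral offset of 0.1 m, for instance, yields an Mgd of 0.01 m².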
Additionally, a simplified statistical method is incorporated, comparing the total time required to complete the trajectory in the reference data against the recorded execution data. This dual approach offers a comprehensive assessment of the robot's synchronization with the reference trajectory, providing valuable insights into its ability to adhere to predefined constraints.

Chapter 5 Implementation

5.1 Requirement Elicitation

The implementation phase of this study focuses on the seamless integration of hardware and software components to enable accurate trajectory emulation. The process begins with the selection of a robotic platform capable of executing complex trajectories with high precision. A high-performance computational unit is integrated to handle the extensive processing demands associated with real-time motion control, trajectory emulation, and sensor data processing. The selection of this computing system is crucial, as it is responsible for executing ROS-based computations, processing data from localization and proximity sensors, and managing various computational tasks efficiently to ensure uninterrupted operation. A robust real-time localization system is to be implemented to provide precise position estimates, which are critical for smooth and responsive motion control, particularly for an omnidirectional mobile platform. To enhance safety and environmental awareness, obstacle detection sensors should also be incorporated to facilitate dynamic collision avoidance. These sensors continuously provide critical information about the robot's environment, allowing the system to adjust its trajectory in response to environmental changes. Additionally, a dedicated microcontroller is deployed to preprocess sensor data from the obstacle detection system and transmit it to the central computing unit via a serial communication interface, ensuring efficient data flow between system components.
Equally important to the implementation phase is the integration of advanced motion control algorithms. A global planner is employed to generate an optimal trajectory that closely follows the reference path, while a local planner is implemented, either adapted from existing solutions or specifically designed for this study, to refine the execution of the real-time trajectory. These planners operate in unison to enable fast, adaptive path planning, ensuring that the robot accurately follows the reference trajectory while maintaining stability and responsiveness.

5.2 System Selection and Design Rationale

Given the outlined system requirements, selecting the appropriate components is essential to ensure that each element meets the system's performance demands while operating accurately and reliably. Component selection is carried out with a strong emphasis on compatibility and efficiency, enabling seamless integration within the robotic system and ensuring optimal functionality.

5.2.1 Selection of the Robotic Platform

To identify a robotic platform capable of accurately traversing and emulating complex trajectories, two options were evaluated: the Scout Mini by AgileX Robotics [47] and the Mark 1 by Hexmove [48]. Both platforms feature omnidirectional mobility and robust designs, making them well-suited for precise trajectory emulation. Following a detailed evaluation, the AgileX Scout Mini, shown in Fig. 5.1, was selected. This choice was based on its superior performance, well-established brand, seamless integration with the ROS framework, and extensive documentation supported by an active user community. The comparative performance analysis of both robots, summarized in Table 5.1, was instrumental in determining the most suitable platform for this study.

Figure 5.1.: Scout Mini by AgileX
Table 5.1.: Comparison of Key Performance Parameters of the Robots

Parameter               | Scout Mini                                                   | Hexmove Mark 1
Payload Capacity        | 20 kg (Mecanum)                                              | 25 kg
Ground Clearance        | 115 mm                                                       | 132 mm
Maximum Speed           | 2.7 m/s (standard wheel) / 0.8 m/s (Mecanum wheel)           | 2 m/s
Turning Radius          | 0 m (in-situ rotation, required for the current application) | 415 mm
Maximum Slope           | <30° with load                                               | 40° (unloaded) / 15° (loaded)
Suspension              | Independent suspension with rocker arm                       | Double wishbone
Communication Interface | CAN, RS232 serial port                                       | CAN
Battery Capacity        | 24 V / 15 Ah                                                 | 24 V / 14.4 Ah
Charging Time           | 2 h (fast charging)                                          | 6 h

5.2.2 Selection of Hardware Components

Core Computational System: A key aspect of the component selection process is identifying a high-performance and reliable computing platform capable of efficiently managing and processing data from multiple sources while ensuring seamless communication with the robot via a CAN bus interface. This enables timely command transmission and cohesive operation across all system components. To meet these requirements, several computing platforms were evaluated, including the Raspberry Pi 5 [49], Intel NUC 11 Pro [50], and NVIDIA Jetson series computers [51]. After a thorough assessment, the NVIDIA Jetson Nano, specifically the Jetson Nano 2GB Developer Kit by Seeed Studio (reComputer J101) [52], was selected. This decision was based on its optimal balance of computational power, energy efficiency, cost-effectiveness, and suitability for embedded robotics applications. A comparative analysis of the evaluated computing platforms is summarized in Table 5.2. Additionally, Figure 5.2 provides a pin-out diagram of the Seeed Studio reComputer J101, offering a detailed overview of its connectivity and functionality.
Table 5.2.: Comparison of Computing Platforms for Robotics Applications

Platform           | Processor                | GPU                                                          | RAM                                           | Power Consumption
Raspberry Pi 4     | Quad-core Cortex-A72     | Broadcom VideoCore VI                                        | Up to 8 GB LPDDR4                             | 5 W
Intel NUC 11 Pro   | Intel Core i5/i7         | Intel Iris Xe Graphics                                       | Up to 64 GB DDR4                              | 15-30 W
NVIDIA Jetson Nano | Quad-core ARM Cortex-A57 | NVIDIA Maxwell GPU (128-core, ideal for parallel processing) | 2 GB/4 GB LPDDR4 (optimized for multitasking) | 5-10 W (energy efficient for robotics)

Figure 5.2.: Seeed Studio's reComputer J101 - Jetson Nano Developer Kit

Obstacle Avoidance System: As previously outlined, the VR user's trajectory patterns are generated either within a simulated environment, such as Unity, or derived from real-world trajectory data recorded from a human VR user. These trajectories are inherently designed to be free of obstacles. As a result, the primary function of the obstacle detection sensors in this system is to serve as a final safety layer, preventing collisions that may arise due to navigation errors or deviations from the predefined path. Additionally, these sensors contribute to the sensor layer required for the proper functioning of the ROS navigation stack. Given this limited yet essential role, the obstacle detection sensors do not require high levels of sophistication. Instead, a basic yet reliable sensor capable of detecting obstacles is sufficient. For this purpose, the HC-SR04 ultrasonic sensor was selected due to its simple design, dependable performance, and adequate range and accuracy for collision prevention within the system. This choice provides an added layer of safety without introducing unnecessary complexity or cost. To enable real-time data acquisition and seamless communication with the main computing platform, the Arduino Uno Rev3 [53] microcontroller was chosen to process data from the HC-SR04 sensor.
This combination ensures efficient and reliable operation while meeting the system's safety and functional requirements.

Figure 5.3.: Arduino Uno Rev3 (left) and HC-SR04 ultrasonic sensor (right)

This straightforward yet effective setup, integrating the HC-SR04 with the Arduino Uno Rev3 (Fig. 5.3), serves as a reliable safety mechanism within the system. It provides an additional layer of collision avoidance while also contributing to the sensor layer of the navigation stack. This design ensures simplicity and efficiency in data processing while maintaining the overall robustness of the system.

5.2.3 Selection of the Software Ecosystem for Robot Control

ROS [21] was chosen as the middleware to develop and control the robotic system due to its extensive community support, comprehensive open-source libraries, and a rich ecosystem of tools, algorithms, and frameworks tailored for robotic applications. These features not only facilitate an efficient development process but also provide robust troubleshooting capabilities, making ROS an ideal platform for developing complex robotic systems. In addition, its modular architecture enables flexible integration of essential components, including perception, navigation, and control modules, which are crucial to accurately emulate user-generated VR trajectories. Furthermore, the broad compatibility of ROS with various sensors, actuators, and middleware ensures seamless hardware integration, enhancing the overall adaptability and functionality of the system. ROS's real-time capabilities, coupled with robust support for simulation tools like Gazebo [?] and RViz, make it an excellent platform for the development, testing, and refinement of the algorithms required to build a responsive robot control system. These factors make ROS the optimal choice for this project.
Following this, an appropriate ROS version and distribution needed to be selected that aligns with the project requirements and the chosen hardware configuration. ROS1 was selected because the chosen computing platform, the Jetson Nano, runs Ubuntu 18.04 by default. Although ROS2 compatibility on the Jetson Nano is possible with distributions like ROS2 Crystal or ROS2 Galactic if the OS were upgraded to Ubuntu 20.04, these ROS2 distributions have reached their end of life and lack sufficient support, posing potential challenges for development and troubleshooting. Consequently, ROS1 Melodic was chosen, as it is fully compatible with Ubuntu 18.04, benefits from extensive community support, and provides numerous ready-to-use resources and repositories, including those specifically developed for the Scout Mini robot. This decision ensures access to a stable and well-supported development environment, facilitating efficient implementation and testing of the system. To enable effective development and simulation-based testing of algorithms, a Docker container was set up with Ubuntu 18.04 and ROS1 Melodic. This containerized environment provides a consistent and isolated platform for running ROS, simplifying the development process and enabling efficient testing and refinement of algorithms directly on the host system.

5.2.4 Selection of the Localization System

To achieve real-time, high-precision localization, an essential prerequisite for accurately tracking trajectories and replicating a VR user's locomotion patterns, a robust and reliable localization system is required. This system must continuously deliver precise positional data to ensure smooth and responsive navigation while minimizing latency. Conventional localization methods often fall short when applied to robotic platforms with complex kinematic models, such as omnidirectional robots, or when tracking agents executing intricate and highly dynamic trajectories in real time.
Therefore, a more advanced localization strategy is necessary to enhance accuracy and reliability, ensuring precise trajectory emulation under dynamic conditions. A marker-based localization approach [27], [28] was chosen as the optimal solution due to its simplicity, cost-effectiveness, and high accuracy. This method utilizes fiducial markers mounted on the ceiling to provide unobstructed views and stable reference points for localization. A tracking camera installed on the robot identifies these unique markers and maps them to their corresponding positions, enabling precise and uninterrupted self-localization within the environment. The overhead marker configuration enhances system robustness by minimizing interference from obstacles and ensuring a clear line of sight, thereby improving the reliability of the localization process. This setup is illustrated in Figure 5.4. Additionally, this system offers flexibility in marker placement, enabling adjustments in density and positioning to suit specific tracking requirements and optimize accuracy across diverse indoor environments. These features make marker-based localization a compact and effective solution for the demands of this application.

Figure 5.4.: Marker placement on the ceiling of the room

The system utilizes an Intel RealSense tracking camera [54], specifically designed for spatial tracking, providing six degrees of freedom (6DoF) to accurately capture both position and orientation data as the robot moves. To complement this, the solution integrates a dedicated computing platform, the Odroid N2+ [55], which processes the localization data received from the RealSense camera via a serial port connection. This configuration ensures efficient and real-time processing of spatial information, enhancing the system's localization capabilities.
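To make the principle concrete, the following is a minimal 2D sketch of how a robot pose could be recovered from a single detected marker with a known map pose. This is a planar simplification of the actual 6DoF pipeline running on the Odroid; the function names and frame conventions are illustrative assumptions, not the deployed implementation:

```python
import math

def compose(a, b):
    """Compose 2D poses (x, y, theta): world pose of frame B, given
    frame A's world pose `a` and B's pose `b` expressed in frame A."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse of a 2D pose, i.e. the same transform in the opposite direction."""
    x, y, t = p
    c, s = math.cos(t), math.sin(t)
    return (-(x * c + y * s), -(-x * s + y * c), -t)

def robot_pose_from_marker(marker_in_map, marker_in_camera, camera_in_robot):
    """Recover the robot's map pose from one observed ceiling marker:
    T_map_robot = T_map_marker * inv(T_camera_marker) * inv(T_robot_camera)."""
    camera_in_map = compose(marker_in_map, invert(marker_in_camera))
    return compose(camera_in_map, invert(camera_in_robot))
```

In the real system, the tracking camera fuses many such observations over time; this sketch only shows the geometric core of mapping one marker detection to a self-localization estimate.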
The complete localization setup has been previously tested and successfully deployed in various AR/VR applications, as well as on the CoboDeck RB-KAIROS holonomic robot [56], demonstrating its reliability and effectiveness in real-world scenarios.

5.2.5 Selection of Navigation Framework for Precise Trajectory Emulation

Following the selection of the localization system, an appropriate trajectory emulation algorithm must be set up to control the robot so that it efficiently and precisely replicates the reference trajectories. To achieve this, the 'move_base' framework, a built-in component of ROS for mobile robot navigation, is considered. This framework provides a comprehensive suite of algorithms and configurations designed for robust and adaptable navigation. It includes built-in plugins for global path planning, local path planning, and costmap management, enabling seamless integration within the ROS environment while facilitating accurate robot navigation.

For the selection of appropriate navigation plugins, careful consideration was given to both global and local path planning to ensure optimal trajectory emulation. For global path planning, the 'Global_Planner' plugin was identified as the most suitable choice due to its efficient path generation capabilities. By leveraging the heuristic advantages of the A* algorithm, 'Global_Planner' enables fast and reliable path computation, making it particularly well-suited for applications that require rapid, safe navigation decisions with minimal computational overhead. This is especially critical for the dynamic and time-sensitive demands of precise trajectory emulation. The selection of the local planner was guided by several key factors, with compatibility with the kinematic model of the holonomic robot being a primary consideration.
Ensuring that the selected local planner aligns with the robot's motion capabilities is essential for achieving accurate and efficient trajectory execution. Additionally, the local planner needed to incorporate time constraints into its trajectory planning, ensuring that time was a critical factor in its computations to enable accurate and timely path tracking. Furthermore, the planner required sufficient configurability to precisely adapt the robot's trajectory to the specific requirements of the project. Lastly, seamless compatibility with the selected ROS framework and availability within the ROS ecosystem were essential considerations. Based on these criteria, the TEB planner was identified as the optimal choice. The TEB planner offers extensive configurability, full compatibility with holonomic systems, and precise trajectory control, making it highly suitable for the project's objectives. In combination with the selected global planner, this navigation stack ensures a responsive and adaptive robot navigation system capable of meeting the stringent demands of precise trajectory emulation.

Within the navigation stack, while the global planner efficiently computes the shortest and most optimal path from a start pose to a target pose, additional configuration is necessary to incorporate intermediate waypoints along the reference trajectory. These waypoints enable the global planner to generate paths that ensure the robot accurately follows the intended trajectory. However, since the default global planner in ROS does not fully support this functionality, the 'follow_waypoints' library, an open-source ROS package, was integrated into the navigation system to address this requirement. The 'follow_waypoints' library enhances the global planner by enabling path planning through intermediate waypoints, ensuring precise adherence to the predefined trajectory.
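The sequential waypoint dispatch that the 'follow_waypoints' library performs can be illustrated with a small, ROS-free sketch. The real node drives move_base goals through actionlib inside a SMACH state machine; here the goal dispatch is abstracted into a plain callback, and all names are illustrative assumptions:

```python
class WaypointFollower:
    """Minimal sketch of a GET_PATH -> FOLLOW_PATH -> PATH_COMPLETE cycle.

    `send_goal` stands in for the actionlib call that would forward each
    waypoint pose to move_base in the real node (an assumption here).
    """

    def __init__(self, send_goal):
        self.send_goal = send_goal
        self.state = "GET_PATH"

    def run(self, waypoints):
        visited_states = ["GET_PATH"]   # GET_PATH: load the stored array of poses
        path = list(waypoints)
        self.state = "FOLLOW_PATH"
        visited_states.append(self.state)
        for pose in path:               # FOLLOW_PATH: dispatch each waypoint in order
            self.send_goal(pose)
        self.state = "PATH_COMPLETE"    # PATH_COMPLETE: cycle done, ready to restart
        visited_states.append(self.state)
        return visited_states
```

Running this over a list of waypoints dispatches every goal in order while passing through the three states exactly once per cycle, mirroring the library's documented operational flow.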
This library was selected for its structured, minimalist state machine design, which utilizes the SMACH framework to introduce complex behaviors through well-defined state transitions. Within the 'follow_waypoints' library, state transitions occur in a sequential manner: 'GET_PATH', 'FOLLOW_PATH', and 'PATH_COMPLETE', before cycling back to the initial state. This structured approach ensures an organized and reliable operational flow, making the library well-suited for optimal trajectory emulation within the ROS framework.

5.3 Implementation and System Setup

With the necessary components selected and the required data collected, the next phase is the implementation of the robotic system, integrating both hardware and software components to ensure seamless operation. The robotic platform, equipped with an Intel RealSense tracking camera and HC-SR04 ultrasonic sensors, was configured for real-time localization and obstacle detection. A Jetson Nano computing unit served as the central processing hub, managing ROS-based computations, including trajectory emulation, sensor data processing, and motion control. The navigation stack, incorporating the Global_Planner, TEB_local_planner, and the follow_waypoints library, was deployed to enable the robot to accurately follow predefined reference trajectories. All system components were configured to operate in synchronization, ensuring precise real-time trajectory emulation. Any necessary refinements and optimizations will be implemented iteratively to address the shortcomings of the selected solutions, continuously improving the system until it can reliably emulate collected reference trajectories in real time.

5.3.1 Design and Structural Enhancements on the Robot

To integrate and deploy the physical robotic dummy user within the VR environment, several design and structural modifications were made to align with the project's requirements.
Some of these modifications focused on visually replicating a dummy user on the holonomic robot, a critical aspect for configuring the RB-KAIROS mobile manipulator. This replication ensures that the dummy user serves as a recognizable representation of the actual VR user, enabling the RB-KAIROS robot to accurately detect the dummy and position its haptic surface accordingly. By achieving precise alignment with the dummy user, the system can be programmed to ensure effective haptic feedback delivery when it is deployed alongside an actual VR user. In addition, structural enhancements were introduced as precautionary measures to protect the physical system from potential damage during unexpected collisions with obstacles. An external skeletal frame was installed around the robot to safeguard its body in the event of impacts. To further mitigate collision effects, cushions were placed at the front of the robot, and impact energy was dissipated using metallic chains attached from the frame to the robot's frontal region. Additionally, the necessary electrical modifications were carried out to integrate the various electronic components and ensure seamless functioning. The structural and design modifications described above are illustrated in chronological order in Figure 5.5.

Figure 5.5.: Structural design modification phases: Phase 1 (left), Phase 2 (center), and Dummy VR User design (right)

5.3.2 ROS Development Environment Setup

The initial system setup involved the installation and configuration of ROS on the host machine, which functioned as the primary development environment. To maintain a consistent and isolated workspace, a Docker container was used on the host machine, with Ubuntu 18.04 as the base operating system. Within this containerized environment, ROS Melodic was installed, ensuring a stable and controlled setup for development and testing while facilitating reproducibility and dependency management.
To simulate and visualize the robot's behavior, Gazebo and ROS Visualization (RViz) were installed and utilized within the Docker container. To enable seamless visualization of the simulated robot operation in the ROS environment, a noVNC client was employed, allowing users to interact with the simulation and visualization setup directly through a web browser. Following this, the Scout Mini robot's ROS repository was installed within the Docker container, providing the necessary ROS packages for the basic configuration and control of the robot. Additionally, RViz was used to visualize the simulated laboratory environment, offering a detailed and accurate representation of the workspace. To further enhance the simulation capabilities and closely replicate real-world physics, Gazebo was selected, and a precise 3D model of the laboratory was developed based on a pre-generated map of the environment. This realistic and interactive simulation environment allowed for comprehensive testing and validation of the robot's trajectory emulation performance. The final simulation setup is depicted in Figure 5.6.

Figure 5.6.: Final simulation setup in RViz (left) and Gazebo (right)

The next step involved preparing the Jetson Nano, which is designated as the primary onboard computer for the robot. This process began by flashing the device with Ubuntu 18.04 to ensure compatibility with the ROS Melodic environment. To address the Jetson Nano's limited onboard storage, a 128 GB USB drive was configured to expand its capacity. The operating system was then transferred to this USB drive, providing a dedicated and isolated development space for the project. The packages and code developed on the host machine were organized and maintained using the Git version control tool. A GitLab repository was created and used to track changes, ensuring the development process remained efficient and well-structured.
ROS Melodic was installed on the Jetson Nano, and all ROS packages developed on the host machine were transferred to the device via the GitLab repository. This approach ensured that the onboard system accurately mirrored the development environment.

5.3.3 Implementation of the Localization System

The marker-based localization system has been extensively tested and successfully deployed in various AR/VR applications, including on CoboDeck's RB-KAIROS holonomic robot, demonstrating its reliability and effectiveness in real-world scenarios. The current implementation on the Scout Mini robot followed a structured approach to ensure a stable and efficient setup. To set up the system, an SD card from a pre-configured and operational Odroid dedicated to the RB-KAIROS robot was used to duplicate the operating system image. Subsequently, this image was flashed onto a new Odroid. Following this, the Intel RealSense tracking camera was connected to the Odroid via a serial port, allowing it to access the camera feed for processing. A custom IP address was assigned to the Odroid; this IP, along with the port number, was documented and used to enable communication with the Jetson Nano via an Ethernet connection. Once this connection was established, the Jetson Nano was able to access the localization data streamed from the Odroid and process it using ROS to ensure accurate, real-time localization of the robot. By offloading the localization data processing task to the Odroid, the overall efficiency and performance of the system were maintained. The final configuration of the robot and the complete connection layout are illustrated in Figure 5.7, depicting the integration of the marker-based localization system into the robot's operational framework.

Figure 5.7.: Tracking camera placement on the robot (left) and localization circuit (right)
This setup was simulated within the ROS environment, and the robot's Unified Robot Description Format (URDF) file was modified to incorporate the localization sensor (the Intel RealSense camera). This integration is essential to define the camera's frame relative to the robot's base frame, which in turn is essential to accurately determine the location of the robot with respect to the global map frame during operation. This coordinated setup between hardware and ROS ensures the real-time positioning essential for reliable localization.

5.3.4 Implementation of the Navigation Stack

An essential component of the ROS navigation stack is the sensor system responsible for obstacle detection, which provides critical real-time data to identify obstacles in the environment. These data are processed by costmaps within the ROS navigation stack, which play a vital role in enabling safe and efficient navigation. Costmaps achieve this by generating weighted regions on the map, distinguishing areas that are safe for navigation while avoiding obstacles. In this project, five ultrasonic sensors were strategically positioned on the robot to serve as the obstacle detection system. These sensors were electrically connected to an Arduino Uno through a prototyping shield, simplifying the wiring process and ensuring reliable connections. The Arduino was programmed using the Arduino IDE to process the range data collected from each sensor. The processed data were transmitted to the Jetson Nano via a serial port. The Jetson Nano, running the ROS navigation stack, utilized these data to dynamically update the costmaps, enabling the robot to safely navigate through its environment. To ensure accurate interpretation of the sensors' positions within the ROS framework, the robot's URDF file was updated. This update involved defining frames for each ultrasonic sensor, thus specifying their relative positions with respect to the robot's base frame.
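On the Jetson side, the serial stream from the Arduino must be parsed and converted before it can feed the costmaps. The helper below sketches that step; the line format "s1:23,s2:110" with readings in centimetres is an assumption for illustration (the actual firmware protocol may differ), and in the real system each parsed reading would be republished as a sensor_msgs/Range message for the navigation stack:

```python
# Physical limits of the HC-SR04 ultrasonic sensor (per its datasheet).
HCSR04_MIN_M = 0.02   # minimum measurable range: 2 cm
HCSR04_MAX_M = 4.00   # maximum measurable range: 4 m

def parse_range_line(line):
    """Parse one serial line such as 's1:23,s2:110' into a dict mapping
    sensor id -> range in metres, clamped to the sensor's physical limits."""
    ranges = {}
    for field in line.strip().split(","):
        sensor_id, _, value = field.partition(":")
        metres = float(value) / 100.0  # assumed firmware unit: centimetres
        ranges[sensor_id] = min(max(metres, HCSR04_MIN_M), HCSR04_MAX_M)
    return ranges
```

Clamping out-of-range readings keeps spurious echoes from injecting impossible obstacle distances into the costmap layers.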
This spatial information is necessary for the navigation stack to ensure that the sensor data are accurately interpreted in relation to the robot's orientation and movement. The complete proximity sensor system setup, including the updated URDF configuration, is illustrated in Figures 5.8 and 5.9. This configuration enhances the robot's ability to detect and respond to obstacles during navigation.

Figure 5.8.: Electrical connection setup for the Arduino and proximity sensors

Figure 5.9.: Proximity sensor placement (left) and robot URDF representation (right)

At this point, the hardware integration of the robotic system has been successfully completed. The layout depicting the connections between the various components is presented in Figure 5.10 below.

Figure 5.10.: Hardware setup

The configuration of the navigation stack involves setting up several elements that are essential for effective path planning and obstacle avoidance, namely the global planner, the local planner, and the costmaps. As discussed previously, the Global_Planner plugin is used for generating a high-level path from the robot's current position to its target location, focusing on an optimal and feasible route. The TEB Local Planner plugin is chosen as the local planner for dynamic and responsive navigation, which allows the robot to follow the global path while adapting to obstacles in real time, generating smooth, time-optimal trajectories that consider the robot's kinematic constraints. The configuration includes local, global, and common costmaps. The global costmap provides a broad overview for route planning, representing static obstacles and boundaries, while the local costmap focuses on the robot's immediate surroundings for real-time adjustments, particularly to avoid dynamic obstacles. Common parameters are shared across both costmaps to ensure consistent behavior and facilitate smooth integration between global and local planning.
All the above plugins are configured to ensure efficient and accurate navigation, achieving a balance between safe path planning and the fast, dynamic response necessary to emulate the trajectory of the VR dummy user. This setup enables the robot to navigate effectively while adapting to real-time changes, ensuring both precision and responsiveness in its movement. Following the configuration of the ROS navigation stack, the navigation system can safely guide the robot within its working environment. Initial navigation setup tests were conducted in the Gazebo simulation environment, with the results visualized using the ROS visualization tool, RViz.

5.3.5 Implementation of the Preliminary Trajectory Emulation Framework

To achieve precise emulation of complex trajectories, such as those generated by the simulated VR user in Unity and by human VR users, relying solely on the global planner does not guarantee that the robot will follow the reference trajectory. The global planner primarily ensures a safe path from the start position to the goal but lacks the capability to closely track the desired trajectory. To address this limitation, additional functionality is implemented through the 'follow_waypoints' library. This library complements the global planner by enabling path generation that closely aligns with the reference trajectory. The 'follow_waypoints' node facilitates trajectory tracking by guiding the robot through a predefined set of waypoints stored as an array of poses. This method effectively directs the global planner to generate paths that adhere closely to the trajectory defined by the array, ensuring alignment with the desired path. As discussed earlier, the reference trajectory generated by the dummy VR user in Unity was collected as a stream of 'PoseWithCovarianceStamped' messages, published by Unity and recorded in a rosbag file.
This dataset is then processed and filtered to reduce the volume of messages before being fed into the ‘follow_waypoints’ node. The node uses these filtered ‘PoseWithCovarianceStamped’ messages to define the array of waypoints for the robot to follow, thereby creating a path that closely matches the reference trajectory, although at a slightly lower resolution. The ‘follow_waypoints’ node is implemented as a state machine using the ‘smach’ Python library in ROS. The default state transitions follow the sequence ‘GET_PATH’, ‘FOLLOW_PATH’, and ‘PATH_COMPLETE’, repeating this cycle for each waypoint. This structured design ensures that the robot transitions smoothly between waypoints and traverses the entire trajectory in an organized and effective manner. To meet the specific requirements of this project, several modifications were made to the stock ‘follow_waypoints’ node. By default, the robot is required to reach each waypoint before planning for the next, which leads to frequent replanning by the global planner at each stop. To mitigate this issue, the ‘follow_waypoints’ code was enhanced to plan ahead for upcoming waypoints. This improvement reduced the need for continuous replanning, allowing the robot to follow a smoother, uninterrupted trajectory that aligns more closely with the demands of complex path tracking. The above implementation allowed verification of whether the robot accurately traced the reference trajectory. However, to ensure precise trajectory emulation, it is equally important to verify that the robot meets the time constraints of the trajectory. To address this constraint with the current setup, an additional reference robot was introduced in the simulation environment to closely replicate the dummy VR user's trajectory generated in Unity. This reference robot followed the trajectory extracted directly from the unprocessed rosbag file of the dummy VR user.
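The filtering step applied to the recorded pose stream before feeding it to ‘follow_waypoints’ can be sketched as follows. This is a minimal illustration rather than the actual preprocessing script; a simple minimum-spacing criterion is assumed here as the filtering rule.

```python
import math

def downsample_waypoints(poses, min_spacing=0.25):
    """Reduce a dense pose stream to sparser waypoints by keeping only
    poses at least `min_spacing` metres from the last kept waypoint.
    Poses are (x, y, theta) tuples; the threshold value is illustrative."""
    if not poses:
        return []
    kept = [poses[0]]
    for pose in poses[1:]:
        dx = pose[0] - kept[-1][0]
        dy = pose[1] - kept[-1][1]
        if math.hypot(dx, dy) >= min_spacing:
            kept.append(pose)
    return kept

# A 5 m straight line sampled every 5 cm is reduced to 25 cm spacing.
dense = [(0.05 * i, 0.0, 0.0) for i in range(100)]
print(len(downsample_waypoints(dense)))  # 20
```

The kept poses then form the waypoint array handed to the ‘follow_waypoints’ node, trading resolution for fewer planning stops.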
By replaying the exact ‘PoseWithCovarianceStamped’ messages recorded in Unity, the reference robot adhered strictly to the original trajectory, providing a visual representation of ideal robot performance. This setup was used exclusively for comparative analysis within the simulation environment on the host machine. Figure 5.11 below shows this setup in RViz. It enabled a direct visual comparison between the expected performance, represented by the reference robot, and the actual performance of the robot operating on the navigation stack. This approach facilitated a clear and comprehensive assessment of the implemented trajectory emulation framework's accuracy and responsiveness in replicating the reference trajectory, ensuring both spatial and temporal precision.

Figure 5.11.: Comparative simulation showing the reference robot (green marker) replicating the trajectory and the actual robot executing it

5.4 Analysis of Preliminary Trajectory Emulation Framework's Performance Discrepancies

Following the implementation of the preliminary trajectory emulation framework, its performance was initially assessed and configured through visual comparison, as previously described. Once the system was sufficiently configured to achieve an acceptable level of performance, it was further analyzed using the statistical and graphical evaluation methods outlined in Section 4.3. The results of this analysis are presented in Section 6.1.1. Based on the evaluation and real-world deployment, certain shortcomings were identified. The subsequent section discusses these observed discrepancies.

5.4.1 Analysis of the Emulation Algorithm's Performance Discrepancies

Following the configuration of the trajectory emulation setup as outlined above, an initial visual comparison was performed to evaluate the performance of the navigation stack by comparing the actual robot's emulation performance with the reference robot's performance.
The configuration of the whole emulation framework was iteratively optimized to achieve the best possible alignment with the reference path, both spatially and temporally. Although achieving a path closely resembling the reference trajectory required minimal effort, ensuring temporal accuracy proved more challenging. Fine-tuning the ‘Global_Planner’, ‘TEB_local_planner’, and ‘follow_waypoints’ proved to be a highly demanding process, as adapting the configuration to meet the requirements of every trajectory posed significant challenges. Despite extensive efforts, achieving fully precise trajectory emulation remained an unresolved issue. A more detailed analysis of these challenges is presented in Section 6.1.1, with further discussion provided in Chapter 7. These limitations underscored the need for a more advanced algorithm capable of accurately emulating the desired trajectory while maintaining tighter adherence to both temporal and positional accuracy. This realization prompted further exploration into alternative solutions for trajectory emulation, aiming to overcome the shortcomings observed in the current approach.

5.4.2 Analysis of Localization Performance Discrepancies

After evaluating and refining the emulation setup in simulation, the system was deployed on the physical robot using the best-performing configuration to assess its performance under real-world conditions. To achieve this, a distributed ROS setup was implemented between the robot's primary onboard computer and the host machine. While all ROS-based computations were performed on the robot's onboard computer (Jetson Nano), the host machine was used to visualize and collect the output from various nodes and topics, including RViz.
This setup allowed for real-time observation and analysis of the robot's performance as it attempted to replicate the intended trajectory. During the performance assessment, several discrepancies were identified while using the ‘follow_waypoints’ approach for trajectory emulation. One significant issue was a slight latency in receiving localization data from the marker-based tracking system. This delay caused the robot to overshoot certain waypoints where it was expected to come to a complete stop, necessitating readjustments to its position. These corrections often led to oscillatory movements before the robot could fully stabilize. Moreover, the rapid directional changes inherent in the reference trajectory introduced additional challenges for the localization system. Sudden shifts in direction resulted in abrupt movements by the robot, and the Intel RealSense camera's IMUs, being sensitive to rapid adjustments, occasionally introduced drift into the localization data. This drift led to temporary inaccuracies in position estimation until the AMCL module updated the robot's pose. The issue was most pronounced in complex segments of the trajectory. Additionally, the localization system was found to be computationally demanding: while it operated as intended, prolonged use led to overheating of the Jetson Nano, underscoring the need for a more computationally efficient solution. These challenges highlighted the limitations of the existing localization setup, emphasizing the need for a more robust and efficient approach. A system capable of effectively mitigating latency, minimizing oscillations, and reliably handling rapid directional changes without introducing drift is essential to enhance accuracy and stability when localizing agents under real-world conditions.

5.5 Implementation of the Refined System

This section presents the implementation of the refined system, detailing the improvements made to overcome the limitations identified in the initial setup.
The enhanced system incorporates a new localization approach based on Qualisys' motion capture (mocap) technology, which offers superior accuracy, reduced latency, and greater reliability compared to the previous method. Additionally, significant enhancements were introduced to the trajectory emulation system, with various alternative approaches evaluated to address performance shortcomings. Initially, an improvement to the TEB local planner was proposed to enhance temporal performance by leveraging the G2O optimization framework used within the TEB_local_planner plugin. Efforts were made to refine its functionality by developing a custom edge to optimize the robot's velocity while minimizing other redundant optimization parameters that could negatively impact its operation. While these modifications led to improvements in specific aspects of performance, they also introduced new challenges, including intermittent crashes within the navigation stack. To resolve these issues and ensure system stability, new global and local planners were developed, offering a more robust and reliable solution for trajectory emulation. This phase involved a series of targeted refinements designed to enhance trajectory adherence, reduce localization issues, and streamline real-time data processing. With these optimizations, the refined system was expected to achieve seamless trajectory emulation, robust localization, and efficient path planning, delivering a more reliable and capable solution for emulating complex, dynamic trajectories.

5.5.1 Implementation of the Qualisys (mocap) based Localization System

This section details the implementation of the Qualisys localization system for mobile robot tracking. Qualisys' motion capture (mocap) technology provides a highly accurate, real-time localization solution by utilizing an array of high-speed cameras to track reflective markers placed on the robot.
By triangulating the positions of these markers, the Qualisys system precisely tracks their spatial data, enabling accurate determination of the robot's location and orientation with minimal latency. This approach effectively addresses the limitations of the previous localization methods, such as drift in localization data, induced oscillations, and high computational load. The current implementation of the Qualisys mocap system used an existing setup in the laboratory, where the system had already been installed and calibrated for accurate tracking. Reflective markers were strategically placed on various parts of the robot to define rigid bodies within the Qualisys Track Manager (QTM) software. This configuration allowed the mocap system to precisely localize the robot in real time. The arrangement of markers on the robot, along with a view of the camera placements within the room as seen in the QTM software, is illustrated in Figure 5.12.

Figure 5.12.: Qualisys reflective marker placements on the robot (left) and mocap camera placements with robots (right)

To integrate this localization method into the existing ROS environment, the ROS implementation of the Qualisys mocap software was imported and configured within the current ROS setup. This integration enabled seamless communication between the QTM server and the ROS-based navigation system, providing high-frequency, accurate pose data for the robot. The performance of this refined localization system was then analyzed using a distributed ROS setup on the host machine, where all data from the robot was visualized and monitored.

5.5.2 Development and Implementation of ‘Dummypath_planner’

To accurately emulate complex reference trajectories retrieved from Unity or collected from a human VR user, it became evident that a dedicated global planner capable of returning the fixed, predefined reference path was necessary.
While using external libraries like ‘follow_waypoints’ in combination with the standard global planner is a potential solution, this approach is suboptimal. Standard global planners are primarily designed to compute safe paths from a start to a goal position, making them excessive and ineffective for precise trajectory-following tasks. This limitation necessitated the exploration of alternative approaches to address the specific requirements of trajectory emulation. Among the alternatives considered, the ‘move_base_flex’ library, developed by the Magazino robotics group, stood out as a promising candidate. As an enhanced version of the widely used ‘move_base’ library for mobile robot navigation, ‘move_base_flex’ offers extended functionality, including the ability to follow exact reference trajectories. This capability made it an attractive choice for improving trajectory emulation accuracy. However, a significant drawback of ‘move_base_flex’ is that it is compatible only with ROS Noetic. Since the current system setup on the Jetson Nano runs ROS Melodic on Ubuntu 18.04, this incompatibility made the implementation infeasible for the existing platform. As a result, a custom solution was developed in the form of a new global planner plugin for the ROS navigation stack. This planner was built using the ‘base_global_planner’ plugin as a foundation and was specifically designed to handle reference trajectory data. The primary objective was to generate a global plan directly from predefined trajectory data, replicating the reference trajectory with precision. The custom planner, named ‘dummypath_planner’, was implemented to read a sequence of poses from an external CSV file containing the pose data of the reference trajectory. These poses are defined relative to the standardized ‘scout_map’ frame, which represents the center of the map where the mobile robot operates, ensuring seamless integration into the global plan.
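The CSV-to-plan step can be sketched in Python for brevity (the actual plugin is written in C++ against the ROS navigation API and builds ‘geometry_msgs::PoseStamped’ messages); the three-column x, y, yaw layout assumed here is illustrative.

```python
import csv
import io

def make_plan_from_csv(csv_text, frame_id="scout_map"):
    """Parse reference-trajectory rows (x, y, yaw) into an ordered plan,
    mirroring the line-by-line reading done in the planner's makePlan.
    Each entry stands in for a PoseStamped in the given frame."""
    plan = []
    for row in csv.reader(io.StringIO(csv_text)):
        x, y, yaw = (float(value) for value in row)
        plan.append({"frame_id": frame_id, "x": x, "y": y, "yaw": yaw})
    return plan

sample = "0.0,0.0,0.0\n0.5,0.1,0.05\n1.0,0.2,0.10\n"
plan = make_plan_from_csv(sample)
print(len(plan), plan[-1]["x"])  # 3 1.0
```

Because the plan is read verbatim from the file, the "planning" step is deterministic: the same CSV always yields the same global plan, which is exactly the property needed for repeatable trajectory emulation.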
The main method in the ‘dummypath_planner’ code, ‘makePlan’, reads the CSV file line by line, parses the pose data, and constructs ‘geometry_msgs::PoseStamped’ messages. These messages are then appended to the plan vector, effectively creating a global navigation plan. This custom plugin enables the robot to accurately emulate complex trajectories by directly utilizing recorded reference trajectories as predefined paths. It allows for precise replication of intricate movement patterns, offering a tailored solution to meet the specific requirements of this study. An example of a global plan generated by the ‘dummypath_planner’ plugin is shown in Figure 5.13. The custom global planner demonstrates an effective approach to path generation using pre-recorded trajectories. Its simplicity, compatibility with ROS, and capability to visualize paths make it an ideal solution for applications where planning new paths is not the primary concern but accurate trajectory emulation is essential.

Figure 5.13.: Final global path generated by the Dummypath_planner plugin

5.5.3 Development and Implementation of ‘Dummy_local_planner’

Local planning is essential in robotic navigation, bridging global path planning and real-time control. Its primary role is to adapt the global path dynamically, generating safe, goal-directed velocity commands while avoiding collisions. Local planners can be holonomic, allowing motion in any direction, or non-holonomic, adhering to motion constraints such as those of wheeled robots. The presented planner is holonomic, enabling precise x, y, and angular motion about the Z axis for enhanced adaptability and maneuverability. After developing and implementing the ‘dummypath_planner’ global path planner, it was integrated with the TEB_local_planner, which remained the local planner for the entire robotic system.
While the TEB local planner exhibited highly accurate path-following capabilities, its temporal performance proved suboptimal. In certain scenarios, particularly when navigating paths requiring constricted maneuvers, the planner caused the robot to stall. These issues arose from the TEB_local_planner's tendency to adhere rigidly to the planned path, which created difficulties in efficiently completing trajectories. Furthermore, the extensive parameter configuration required to optimize the TEB_local_planner for the specific needs of the system added complexity and made fine-tuning cumbersome. These challenges highlighted the need for a more efficient and simplified local planner. To address this, a PID-based local planner was proposed, as suggested by the literature review, due to its simplicity and efficiency in processing a larger number of waypoints. Drawing inspiration from trajectory tracking algorithms reviewed in the literature, a custom local planner, named ‘dummy_local_planner’, was developed using the ‘base_local_planner’ as its foundation, so that the proposed trajectory emulation algorithm integrates seamlessly with the ROS framework and the navigation stack, utilizing core components like tf2 for real-time pose transformations and costmap_2d for environmental mapping. A brief overview of the complete code is given below in the form of Algorithm 1 for reference. The PID-based local planner for the holonomic robot was designed with three independent PID controllers. Two controllers managed error corrections relative to the x and y coordinates, while a third controller, a ProfiledPIDController, handled the robot's rotational dynamics. Since the rotational dynamics of a holonomic drivetrain are decoupled from the translational movements in the x and y directions, this setup allowed the planner to provide custom heading references during trajectory emulation, ensuring effective path adherence.
The errors calculated by these three PID controllers were used to generate command velocities (Twist messages), which were subsequently sent to the ‘move_base’ node. This configuration enabled the robot to replicate the reference trajectories with improved efficiency. To monitor and evaluate the robot's path adherence, an RViz-based marker visualization system was incorporated. This system visualized the robot's path based on navigation errors, offering valuable insights into the robot's trajectory emulation capability and overall performance, which can be used for further debugging. This integrated approach proved to be both efficient and effective for precise trajectory emulation. The proposed local planner offers significant advantages for the current application compared to standard ROS planners and traditional trajectory tracking algorithms. Popular planners like DWA and TEB are primarily designed to guide a robot along a global plan, and while they can be configured for trajectory emulation, this process requires extensive tuning and consideration of numerous factors for every new trajectory, making it time-intensive. Conversely, controller-based trajectory tracking algorithms are highly effective but are often challenging to integrate with standardized systems like ROS and other robotic software frameworks. The proposed planner bridges this gap by providing a streamlined approach to trajectory emulation within the ROS ecosystem. While DWA and TEB excel in dynamic obstacle avoidance, the proposed planner is particularly well-suited for controlled environments requiring precise and predictable trajectory emulation. This makes it ideal for the current application of indoor trajectory emulation, where precision and reliability are critical.
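A minimal sketch of the three-controller scheme is given below. The gains, time step, and the shared velocity clamp are illustrative stand-ins for the planner's actual tuning and limit handling, and the profiled behavior of the rotational controller is simplified to a plain PID.

```python
class PID:
    """Minimal PID controller; gains are illustrative, not the tuned
    values used on the robot."""
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def holonomic_cmd(pose, target, pid_x, pid_y, pid_w, dt, v_max=0.5):
    """Compute a Twist-like (vx, vy, wz) command from three independent
    PID loops on the x, y, and heading errors, clamped to a limit."""
    vx = pid_x.update(target[0] - pose[0], dt)
    vy = pid_y.update(target[1] - pose[1], dt)
    wz = pid_w.update(target[2] - pose[2], dt)
    clamp = lambda v: max(-v_max, min(v_max, v))
    return clamp(vx), clamp(vy), clamp(wz)

cmd = holonomic_cmd((0.0, 0.0, 0.0), (1.0, -0.5, 0.2),
                    PID(1.0), PID(1.0), PID(2.0), dt=0.05)
print(cmd)  # (0.5, -0.5, 0.4)
```

Because the drivetrain is holonomic, the three loops are decoupled: the heading controller can track an arbitrary heading reference while the x and y loops independently drive the translational error to zero.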
Algorithm 1 Dummy Local Planner Navigation Algorithm
1: Initialize parameters, PID gains, and ROS components
2: function initialize(name, tf, costmap_ros)
3:   Load costmap, transformation buffer, and pre-calculated velocities
4:   Start control timers
5:   Set initialized to true
6: end function
7: function setPlan(global_plan)
8:   Load global_plan and reset trajectory variables
9:   Set goal_reached to false
10: end function
11: function controlLoop
12:   Compute velocity commands
13:   Publish commands to cmd_vel
14: end function
15: function computeVelocityCommands(cmd_vel)
16:   Update robot pose
17:   if not at start pose then
18:     Navigate to start using holonomic motion
19:     if close to start pose then
20:       Switch to PID-based local planning
21:     end if
22:   else
23:     Compute PID control for waypoints
24:     Update cmd_vel
25:     if goal reached then
26:       Stop robot and finalize path
27:     end if
28:   end if
29: end function
30: function computeHolonomicPIDControl(cmd_vel, dt)
31:   Calculate errors and update velocities using PID
32:   Ensure commands respect velocity limits
33: end function
34: function pathVisualization
35:   Publish robot's path
36:   if recording path data then
37:     Save path to CSV file
38:   end if
39: end function

Chapter 6 Results

This section presents a structured evaluation of the trajectory emulation frameworks that were developed. The evaluation process follows the protocol described in Section 4.3 and is carried out through both simulation and physical testing stages. First, the performance, theoretical effectiveness, and shortcomings of both frameworks are assessed in a simulated environment using ROS, RViz, and Gazebo. The evaluation begins with an analysis of the preliminary trajectory emulation framework, deployed to traverse a reference trajectory obtained from a simulated VR user in Unity. Next, this framework is evaluated with a focus on its results and limitations.
This is followed by an evaluation of the improved Dummy_Trajectory_Emulation framework using the same reference trajectory data obtained from Unity, to compare the two and demonstrate the effectiveness of the new framework. Subsequently, the Dummy_Trajectory_Emulation framework is deployed to execute the reference trajectories collected from real human VR users, first in the simulation environment, where its theoretical effectiveness is analyzed. Finally, the framework is deployed on the physical robot, and its performance is evaluated by comparing the robot's executed trajectory with reference data from real human VR users. All results are collected, analyzed, and discussed in detail to draw insights about the capabilities and limitations of the proposed frameworks.

6.1 Theoretical Experimentation and Results

The trajectory generated by the robot emulating the reference trajectory using one of the emulation frameworks within the simulation environment is recorded. These collected datasets, comprising both the reference and emulated trajectories, are then used to compare and evaluate the accuracy of the trajectory emulation achieved by the proposed framework.

6.1.1 Preliminary Emulation Framework using Simulated User Trajectory

This framework was the initial approach to address the problem of trajectory emulation. It mainly involves the ROS navigation stack, using the Global_planner and TEB local planner plugins along with an additional library, the follow_waypoints library. In this framework, the complete trajectory of the simulated user is divided into a set of waypoints, and the robot traverses these waypoints. The set of waypoints corresponding to this reference trajectory is illustrated in Figure 6.1. In the figure, each red arrow represents a waypoint that the robot is expected to traverse as part of the trajectory emulation.

Figure 6.1.: Waypoints corresponding to the reference trajectory of the simulated user (red arrows).
The performance of the preliminary trajectory emulation algorithm is visually represented in the graphs below. The first graph, Figure 6.2, presents the comparison between the reference path and the traversed trajectory. The second set of graphs, Figure 6.3, shows the positions of the robot in the simulated environment against time. As seen in the graphs, the algorithm fails to perfectly align with the ideal reference trajectory within the simulation environment. Furthermore, it fails to fully meet the temporal constraints defined by the reference trajectory. A detailed quantitative analysis of these discrepancies is provided below. Another key limitation of this framework is the substantial effort required for tuning and configuring the involved plugins to achieve optimal performance, which consequently reduces the system's adaptability and scalability. The plugin configuration files are provided in the appendix for reference.

Figure 6.2.: The path tracking performance with the reference path (blue) and the path traced by the robot (red).

Figure 6.3.: The recorded versus reference trajectories with respect to the coordinates the simulated robot traverses: X-axis (left), Y-axis (right), and theta-Z (center).

For a statistical evaluation, the protocols and metrics outlined in Section 4.3 are used to assess the system's performance. One of the key metrics, the Mean Distance to the Goal (Mgd), is used to assess how accurately the robot follows the reference trajectory in terms of its X and Y coordinates. The calculated Mgd is 0.0233 meters, which, while relatively low and representative of close emulation of the reference trajectory, is not as low as expected in a simulated environment. This result indicates that the robot closely follows the reference trajectory within the simulation but does not achieve the desired level of accuracy, highlighting the need for a more refined approach.
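As a sketch of how such metrics can be computed from the two recorded pose sequences (pairing reference and recorded poses by index is an assumption made here for illustration; the companion heading metric follows analogously):

```python
import math

def mean_distance_to_goal(ref, actual):
    """Mean Euclidean (x, y) deviation between paired reference and
    recorded poses, a sketch of the Mgd metric. Poses are (x, y, yaw)."""
    return sum(math.hypot(r[0] - a[0], r[1] - a[1])
               for r, a in zip(ref, actual)) / len(ref)

def mean_heading_deviation(ref, actual):
    """Mean absolute yaw difference, wrapped to [-pi, pi] (Mhd sketch)."""
    def wrap(angle):
        return math.atan2(math.sin(angle), math.cos(angle))
    return sum(abs(wrap(r[2] - a[2])) for r, a in zip(ref, actual)) / len(ref)

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1), (2.0, 0.0, 0.2)]
rec = [(0.0, 0.03, 0.0), (1.0, -0.03, 0.12), (2.0, 0.03, 0.21)]
print(round(mean_distance_to_goal(ref, rec), 3))   # 0.03
print(round(mean_heading_deviation(ref, rec), 3))  # 0.01
```

Wrapping the heading difference before averaging matters: without it, a reference yaw near +pi compared against a recorded yaw near -pi would register as an error of nearly 2*pi instead of nearly zero.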
Additionally, the Mean Heading Deviation (Mhd) is analyzed to evaluate the robot's accuracy in following the reference trajectory in terms of angular orientation about the Z-axis. The calculated Mhd was found to be 0.0697 radians, also a low value. Discrepancies were observed in the temporal alignment between the recorded and reference data, as seen in the graphs shown above. The difference between the time associated with the reference trajectory and the actual time taken by the robot to complete the path is +7.72 seconds, indicating that the robot required an additional 7.72 seconds to traverse the entire reference trajectory. These findings underscore the need for a more robust trajectory emulation framework that can address the observed shortcomings and enhance overall performance.

6.1.2 Dummy_Trajectory_Emulation Framework using Simulated User Trajectory

The trajectory from the simulated user in Unity is used again as the reference trajectory, this time to evaluate the performance of the developed Dummy_Trajectory_Emulation framework. The reference path is illustrated in Figure 6.4. In the figure, the red path represents the reference trajectory, and the green path shows the actual path followed by the robot during emulation.

Figure 6.4.: Reference path (red), path traversed by the robot during emulation (green)

The reference trajectory is imported into the ROS environment using the custom global planner, dummypath_planner, which publishes a static path for the robot to follow. This path serves as the reference trajectory that the robot is expected to emulate. To accurately replicate the simulated user's movement, the custom local planner, dummy_local_planner, is employed to control the robot's motion along this predefined path. The graphs below provide a visual representation of the performance of the developed trajectory emulation framework.
The results indicate that, within the simulated environment, the framework effectively replicates the reference trajectory, closely matching the ideal behavior. The first graph, Figure 6.5, showcases the algorithm's accuracy in emulating the reference path, while the second set of graphs, Figures 6.6 and 6.7, evaluates the system's performance along the X, Y, and theta axes with respect to time. Based on these results, it is evident that the developed Dummy_Trajectory_Emulation framework demonstrates better performance than the preliminary trajectory emulation framework, both in terms of path replication accuracy and compliance with the temporal constraints of the reference trajectory.

Figure 6.5.: Reference path (blue) and robot emulating the trajectory (red)

Figure 6.6.: The recorded versus reference trajectories with respect to the coordinates the simulated robot traverses: X-axis (left), Y-axis (right).

Figure 6.7.: The recorded versus reference trajectories with respect to the coordinates the simulated robot traverses: theta-Z axis.

The calculated Mean Distance to the Goal (Mgd) was 7.27 × 10⁻⁵ meters, indicating a very low positional error. This result demonstrates that the robot closely follows the reference trajectory in the simulated environment, confirming the high positional accuracy of the developed framework. In addition, the Mean Heading Deviation (Mhd) was evaluated to measure the robot's ability to maintain the correct angular orientation about the Z-axis. The Mhd value was found to be 0.046 radians, which is also notably low. This indicates that the robot accurately replicates the angular orientation of the reference trajectory, further validating the effectiveness of the trajectory emulation framework.
The timing of the recorded and reference trajectories is also closely synchronized: the reference trajectory spans 100.58 seconds, while the recorded trajectory lasts 100.205 seconds, a temporal difference of about -0.38 seconds, meaning that the robot reaches the final goal roughly 0.38 seconds earlier than the reference. The graph above visually demonstrates the temporal correspondence between the recorded and reference trajectory data, showing that the robot reaches all the designated waypoints at nearly the expected times. These findings confirm the robot's capability to accurately track the intended trajectory both spatially and temporally.

6.1.3 Dummy_Trajectory_Emulation Framework using Human VR User Trajectory

In this section, the performance of the developed Dummy_Trajectory_Emulation framework is evaluated within the simulated environment using reference trajectories collected from real human VR users. As previously discussed, trajectory data from actual human VR users was collected using Qualisys motion capture technology, and a total of 100 trajectories were recorded using this method. The Dummy_Trajectory_Emulation framework was employed to execute all recorded trajectories within the simulated environment, and its performance was thoroughly analyzed. For graphical analysis, trajectory number ninety was randomly selected. The path of this trajectory is shown in Figure 6.8; the red path represents the reference trajectory, while the green path indicates the trajectory traversed by the robot. The remaining trajectories have been evaluated exclusively using statistical analysis techniques, and the corresponding results are presented in Appendix A.1.

Figure 6.8.: Reference human VR user's trajectory number ninety (red) and the trajectory emulated by the robot (green)

The robot follows the reference trajectories, and its pose data is recorded throughout the process.
The recorded trajectory data is then compared with the reference trajectory, and the results are analyzed using both graphical and statistical methods. The framework's performance on one of the trajectories is presented in Figures 6.9 and 6.10. These graphs provide a visual representation of the effectiveness of the developed trajectory emulation framework. The results indicate that the framework's performance in the simulated environment closely aligns with the expected behavior, accurately following the trajectory even when emulating actual human VR user trajectory data.

Figure 6.9.: Reference path ninety (blue) and robot emulating the trajectory using the developed algorithm (red)

Figure 6.10.: Recorded versus reference trajectories with respect to the coordinates the simulated robot traverses: X-axis (left), Y-axis (right), and theta-Z (center).

The system's performance was evaluated using the Mean Distance to the Goal (Mgd) and Mean Heading Deviation (Mhd) metrics. First, the Mgd was calculated to be 6.05 × 10⁻⁶ meters. This low value indicates that the robot effectively tracks even the real human user's reference trajectory. Subsequently, the calculated Mhd was found to be 0.000179 radians, highlighting the robot's ability to closely replicate the angular orientation of the reference trajectory and further validating the effectiveness of the developed framework. In addition, statistical analysis was performed on all the other recorded human trajectories; the results are presented in Appendix A.1 for reference. The findings demonstrate the robot's capability to emulate the required trajectory with both high temporal and positional accuracy.

6.2 Practical Experimentation and Results

The performance of the developed algorithm was finally evaluated by deploying the Dummy_trajectory_emulation framework on a physical robot and recording its executed trajectory.
For this evaluation, reference trajectory data from a real VR user is used, ensuring it closely resembles the actual trajectory that the system is intended to replicate.

6.2.1 Human VR User trajectory:

To ensure consistency and enable a direct comparison of the algorithm's performance between the simulated and real-world environments, the analysis in this section is also performed on motion trajectory number ninety. Statistical analysis is performed across all collected trajectories and is presented in Appendix A.2. Figure 6.11 below includes snapshots of the physical system in operation and the path followed by the robot during this experiment for reference.

Figure 6.11.: Robot emulating the reference trajectory (Left), RViz Visualization of the reference (Red) and Emulated (Green) Trajectories (Right)

The system's performance was again evaluated using the Mean Distance to Goal (Mgd) and the Mean Heading Deviation (Mhd). In the physical setup, the Mgd was measured at 0.00544 meters, while the Mhd was measured at 0.01546 radians. While these values are slightly higher than the ideal performance, they still reflect effective spatial adherence to the reference path. The observed variation is expected given the constraints of the real-world conditions, where physical factors are more pronounced. The graphs shown in Figure 6.13 depict a slight temporal lag in the robot's path when compared to the reference. This was primarily due to physical constraints, such as the robot's acceleration limits and its inertia, which naturally influence real-time motion control. Despite these factors, the system maintained consistent and stable behavior throughout the execution of the trajectory.

Finally, these results suggest that the current framework provides a strong foundation for accurate trajectory emulation in real-world scenarios.
To further enhance performance, especially in complex segments of the trajectory, future work may consider refining the local planner to better accommodate the robot's physical properties, such as inertia and acceleration constraints. Additionally, hybrid planning strategies that balance computational simplicity with improved physical modeling may offer promising avenues for future development. Overall, the results indicate that the proposed algorithm provides an effective approach to trajectory emulation, demonstrating reliable performance in both simulated and real-world environments. The findings provide meaningful insights for future improvements.

Figure 6.12.: The reference path ninety is shown in blue and the path followed by the robot using the developed Dummy_Trajectory_Emulation algorithm is shown in red.

Figure 6.13.: Recorded versus reference trajectories with respect to the coordinates the simulated robot traverses: X-axis (left), Y-axis (right), and theta-Z (center).

Chapter 7
Discussion

As CoboDeck aims to integrate collaborative mobile manipulators to deliver encountered-type haptic feedback within VR environments, ensuring user safety during the testing phase emerges as a critical challenge. The current study focuses on developing a robust and standardized framework for trajectory emulation using a holonomic robot, with the goal of providing a supporting test platform for the CoboDeck project. The study introduces a dummy robot that emulates a VR user's movement, thus allowing for safe and controlled early-stage testing without direct human involvement. Although prior studies have explored human movement and trajectory emulation strategies, these approaches often lack applicability in complex physical environments.
This research seeks to bridge that gap by tackling key challenges, mainly focusing on the implementation of a dependable trajectory emulation framework that assures both spatial and temporal accuracy in emulating trajectories. To this end, a ROS-based architecture was adopted to support modular development, reproducible testing, and smooth integration with the existing robotic systems. The resulting framework offers a scalable and practical solution for enabling safe, precise, and realistic trajectory emulation.

To address the requirements of trajectory emulation, an initial framework was developed using the ROS Navigation Stack in conjunction with the follow_waypoints library. The navigation stack was configured with the global_planner, the TEB local planner, and both global and local costmaps to support the robot's path traversal. The recorded trajectory data was converted into a sequence of waypoints and processed through the follow_waypoints library. This configuration provided a simple initial solution, balancing high-level path planning with local real-time trajectory corrections, and allowed the robot to iteratively plan and follow the trajectory by navigating through the specified waypoints. Modifications were made to the follow_waypoints implementation to enable planning ahead for upcoming waypoints, thereby reducing unnecessary replanning and facilitating smoother and more continuous robot motion. For real-world deployment, the system was extended with a marker-based localization approach using an Intel RealSense tracking camera. This method, previously validated in the CoboDeck platform, was adapted to the Scout Mini robot, providing real-time and accurate pose estimation within an indoor environment.

Despite the successful deployment and evaluation of the initial framework, both in simulation and on the physical system, several limitations were identified.
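The waypoint-conversion step described above can be illustrated with a small Python sketch. The CSV layout (one x, y, theta row per sample) and the thinning helper are assumptions for illustration, not the thesis's actual code:

```python
import csv
import math

def load_waypoints(csv_path):
    """Read a recorded trajectory (one 'x,y,theta' row per sample) and
    return it as a list of (x, y, theta) waypoints for the follower."""
    waypoints = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            x, y, theta = (float(v) for v in row[:3])
            waypoints.append((x, y, theta))
    return waypoints

def thin_waypoints(waypoints, min_spacing=0.05):
    """Drop samples closer than min_spacing metres to the previously kept
    waypoint, so the follower is not flooded with near-duplicate goals."""
    kept = [waypoints[0]]
    for wp in waypoints[1:]:
        if math.hypot(wp[0] - kept[-1][0], wp[1] - kept[-1][1]) >= min_spacing:
            kept.append(wp)
    return kept
```

Each kept (x, y, theta) tuple would then be wrapped as a goal pose and dispatched sequentially, which mirrors the waypoint-based navigation flow the initial framework relies on.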
While the robot demonstrated reliability in traversing the reference path, achieving precise temporal alignment with the reference trajectory remained a significant challenge. The system's components, including the global_planner, TEB_local_planner, and costmaps, required extensive configuration for each individual trajectory. This process was time-consuming and negatively impacted the scalability and adaptability of the framework. In addition, global planning through the waypoints imposed an unnecessary computational load, given that the path of the reference trajectory was already complete and predefined. During the initial real-world testing, localization performance also emerged as a limiting factor. Latency in the marker-based tracking system led to waypoint overshooting and induced oscillatory corrections in the robot's motion. Furthermore, the Intel RealSense camera's IMUs proved sensitive to abrupt directional changes, which occasionally degraded the localization system's reliability. Additional shortcomings were observed when executing trajectories involving sharp or closely spaced turns. The TEB_local_planner, while capable of generating smooth and time-optimal paths, exhibited difficulties in handling such scenarios, occasionally causing the robot to stall. These findings underscored the need for a more robust and efficient trajectory emulation algorithm capable of delivering consistent spatial and temporal performance.

To address the limitations identified in the initial trajectory emulation framework, a refined system was implemented featuring improvements in localization, global planning, and local trajectory control. Each component of the system was redesigned with a focus on enhancing accuracy, responsiveness, and ease of integration within the existing ROS-based architecture. A major enhancement was the integration of the robot with the Qualisys motion capture system for localization.
In contrast to the previously used marker-based tracking system, which suffered from latency, drift, and high computational overhead, the Qualisys system delivers highly accurate, low-latency pose estimation. This real-time tracking capability enabled precise determination of the robot's position and orientation, effectively eliminating the limitations observed in the earlier setup. The refinement also focused on developing a custom global planner, the 'dummypath_planner', to address the limitations of standard goal-based path planning methods. This planner reads a predefined sequence of poses from a CSV file and generates a fixed global plan that closely replicates the path of the reference trajectory. This approach not only ensures accurate path replication but also maintains compatibility with the ROS navigation stack, streamlining the path planning process. Subsequently, to improve real-time control of the robot, particularly in scenarios where the TEB local planner exhibited suboptimal behavior, a new local planner named dummy_local_planner was developed and implemented. This planner employs three independent PID controllers to regulate motion along the x, y, and rotational axes. Its holonomic design supports fine control and better handling of sharp turns and rapid directional changes. Furthermore, the simplicity of the control structure minimized the need for extensive tuning, thereby increasing the system's usability and adaptability. In summary, the refined trajectory emulation framework introduces a robust and modular solution that overcomes the primary challenges of the initial framework's implementation. This architecture demonstrates strong potential for scalable and accurate trajectory emulation in structured indoor environments.
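The three-PID control idea behind the dummy_local_planner can be sketched as follows; the class structure, gains, and the world-to-body error rotation are illustrative assumptions rather than the planner's actual source:

```python
import math

class PID:
    """Simple PID controller; dt is the control period in seconds."""
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def holonomic_command(pose, target, pid_x, pid_y, pid_th, dt):
    """Return (vx, vy, wz) body-frame velocities driving `pose` toward
    `target`; both poses are (x, y, theta) in the world frame."""
    ex_w = target[0] - pose[0]
    ey_w = target[1] - pose[1]
    # Rotate the world-frame position error into the robot's body frame.
    c, s = math.cos(pose[2]), math.sin(pose[2])
    ex, ey = c * ex_w + s * ey_w, -s * ex_w + c * ey_w
    # Wrap the heading error to [-pi, pi] before feeding the PID.
    eth = (target[2] - pose[2] + math.pi) % (2 * math.pi) - math.pi
    return pid_x.step(ex, dt), pid_y.step(ey, dt), pid_th.step(eth, dt)
```

Because the three axes are decoupled, a holonomic base can correct lateral and angular error simultaneously, which is what makes sharp turns and rapid directional changes easier to handle than with a trajectory-optimizing planner.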
The results from the performance evaluation of the Dummy_Trajectory_Emulation framework demonstrate significant progress toward the primary objective of accurately replicating the motion of a VR user using a holonomic mobile robot. The developed system exhibited high levels of both positional and temporal accuracy when emulating predefined reference trajectories within a simulated environment. Graphical analysis conducted within the simulated environment revealed that the proposed algorithm consistently maintained minimal deviation from the reference path while accurately adhering to the temporal constraints across all trajectory components, namely the X, Y, and θz coordinates. A detailed statistical evaluation of the framework's performance is presented in Appendix A.1. The results indicate consistently low error margins under controlled simulation conditions, thereby reinforcing the reliability and robustness of the proposed trajectory emulation approach. Collectively, these findings demonstrate the theoretical soundness of the framework and its capability to replicate complex human motion with high precision in simulated scenarios, setting a solid foundation for further testing and deployment in real-world scenarios.

While the simulation results closely aligned with expectations, the transition to a physical setup introduced some observable deviations due to real-world factors. The system maintained good positional accuracy; however, its performance was influenced by the robot's physical constraints, including acceleration limits and inertia, which affect real-time motion control. A comprehensive statistical evaluation of the framework's real-world performance is provided in Appendix A.2. Despite the presence of some variations, the proposed algorithm demonstrated reliable behavior and consistent trajectory emulation in both simulated and physical environments, affirming its practical applicability under realistic operating conditions.
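The per-trajectory figures in the appendices can be aggregated with a short helper. Skipping nan entries, which appear in the appendix tables for unusable runs, is the only assumption beyond simple averaging:

```python
import math

def summarize(values):
    """Aggregate per-trajectory error values (e.g. Mgd in metres),
    skipping trajectories whose logs were unusable (nan entries)."""
    valid = [v for v in values if not math.isnan(v)]
    return {"count": len(valid),
            "mean": sum(valid) / len(valid),
            "max": max(valid)}
```

Running this over the Mgd columns of Appendices A.1 and A.2 would yield the aggregate error levels quoted in the text (on the order of 10−5 m in simulation and 10−3 m on the physical robot).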
To further enhance performance, especially in dynamic segments of the trajectory, future work may consider refining the local planner to better accommodate the robot's physical properties, such as inertia and acceleration constraints. Additionally, hybrid planning strategies that balance computational simplicity with improved physical modeling may offer promising avenues for future development. Overall, the developed framework establishes a solid foundation for trajectory emulation tasks and presents strong potential for integration into interactive robotic systems where accuracy, reliability, and real-world adaptability are essential.

The findings of this study contribute to the field of mobile robotics, with a particular focus on trajectory emulation frameworks for holonomic mobile robots. This work extends prior efforts by shifting from traditional controller-based approaches to a ROS-based implementation that leverages widely adopted middleware and its supporting libraries. This transition enhances system modularity and flexibility, enabling easier integration with a range of robotic platforms and facilitating future scalability. In contrast to traditional trajectory tracking solutions that emphasize trajectory simplification through excessive smoothing to avoid abrupt changes, the proposed framework adopts a different strategy: by minimizing the degree of smoothing, the system preserves essential features of the original trajectory, allowing for more accurate replication of the reference path and improved realism in emulation. While minor limitations were observed during the physical implementation, the system overall demonstrated efficient and reliable trajectory emulation. These contributions establish a strong foundation for the continued development of precise and robust trajectory emulation systems, reinforcing their potential application in interactive, dynamic, and human-centric robotic environments.
Chapter 8
Conclusion

This thesis has presented the design, development, and validation of the MRVR Robotic Dummy User, an omnidirectional mobile robot capable of accurately emulating human trajectories within interactive VR environments. The motivation behind this work was rooted in the need for a safe and reliable testing platform in collaborative robotics, particularly in scenarios where human interaction with mobile manipulators, such as the CoboDeck system, poses safety and logistical challenges during early-stage development.

To address this, a robust trajectory emulation framework was developed, leveraging the capabilities of the Robot Operating System (ROS). The system integrates advanced localization techniques, including a Qualisys motion capture setup, along with custom-developed ROS-compatible modules such as the dummypath_planner and dummy_local_planner. The framework enables high-fidelity replication of VR user trajectories in both simulated and physical environments, achieving a mean positional error on the order of 1 × 10−3 m while maintaining latency below 100 ms in the physical environment.

Experimental evaluations confirm the system's capability to emulate complex, time-sensitive human movement patterns with precision and repeatability. Despite these successes, certain limitations were identified, particularly regarding localization drift in real-world settings. Future work can build directly upon these results by exploring enhanced trajectory interpolation techniques, more sophisticated time-alignment strategies for trajectory execution, and extended testing across a wider variety of user-generated paths and interaction scenarios. Additionally, refining the integration between trajectory planners and motion controllers could improve responsiveness and accuracy during sharp or rapid movements, further aligning the dummy user's motion with real human behavior in VR.
In conclusion, the MRVR Robotic Dummy serves as a foundational step toward the safer and more efficient development of trajectory emulation robotic systems. It also offers a repeatable, controlled testing platform that mitigates risks and reduces dependency on human subjects during the crucial stages of system calibration and evaluation, thus contributing to the advancement of collaborative robotics in immersive virtual environments.

List of Figures

2.1. CoboDeck: (a) User interacting with a virtual wall (b) Mobile cobot presents prop to provide haptic feedback. . . . 11
2.2. Reference frames of different rigid bodies . . . 12
2.3. Overview of Wheel types in Mobile robotics . . . 13
2.4. Elastic Band algorithm illustration . . . 16
5.1. Scout Mini by AgileX . . . 32
5.2. Seeed Studio's reComputer J101 - Jetson Nano Developer Kit . . . 34
5.3. Arduino Uno Rev3 (left) and Ultrasonic sensor-HC_SR04 (Right) . . . 35
5.4. Marker placement on the ceiling of the room . . . 37
5.5. Structural design modification phases: Phase 1 (left), Phase 2 (center), and Dummy VR User design (right) . . . 39
5.6. Final simulation setup in Rviz (left) and Gazebo (right) . . . 40
5.7. Tracking camera placement on the robot (left) and localization circuit (right) . . . 41
5.8. Electrical connection setup for Arduino and proximity sensor . . . 43
5.9. Proximity sensor placement (left) and robot URDF representation (right) . . . 43
5.10. Hardware setup . . . 43
5.11. Comparative simulation showing the reference robot (green marker) replicating the trajectory and the actual robot executing it . . . 45
5.12.
Qualisys reflective marker placements on the robot (left) and mocap camera placements with robots (right) . . . 48
5.13. Final global path generated by the Dummypath_planner plugin . . . 50
6.1. Waypoints corresponding to the reference trajectory of the simulated user (Red Arrows). . . . 55
6.2. The path tracking performance with reference path (blue) and path traced by the robot (red). . . . 56
6.3. The recorded versus the reference trajectories with respect to coordinates the simulated robot traverses: X-axis (left), Y-axis (right), and theta-Z (center). . . . 56
6.4. Reference path (red), path traversed by the robot during emulation (green) . . . 57
6.5. Reference path (blue) and robot emulating trajectory (red) . . . 58
6.6. The recorded versus the reference trajectories with respect to coordinates the simulated robot traverses: X-axis (left), Y-axis (right). . . . 58
6.7. The recorded versus the reference trajectories with respect to coordinates the simulated robot traverses: ThetaZ-axis. . . . 59
6.8. Reference human VR user's trajectory number ninety (red) and robot trajectory emulating stack (green) . . . 60
6.9. Reference path ninety (blue) and robot emulating trajectory using developed algorithm (red) . . . 60
6.10. Recorded versus reference trajectories with respect to coordinates the simulated robot traverses: X-axis (left), Y-axis (right), and theta-Z (center). . . . 61
6.11. Robot emulating the reference trajectory (Left), RViz Visualization of the reference (Red) and Emulated (Green) Trajectories (Right) . . . 62
6.12.
The reference path ninety is shown in blue and the path followed by the robot using the developed Dummy_Trajectory_Emulation algorithm is shown in red. . . . 63
6.13. Recorded versus reference trajectories with respect to coordinates the simulated robot traverses: X-axis (left), Y-axis (right), and theta-Z (center). . . . 63

List of Tables

5.1. Comparison of Key Performance Parameters of the robots . . . 33
5.2. Comparison of Computing Platforms for Robotics Applications . . . 34

Acronyms

HMD Head Mounted Display
STEM Science, Technology, Engineering, and Mathematics
MR Mixed Reality
VR Virtual Reality
ETHD Encountered Type Haptic Devices
HRI Human Robot Interaction
AMCL Adaptive Monte Carlo Localization
ROS Robot Operating System
Rviz ROS Visualization
URDF Unified Robotics Description Format
CAN Controller Area Network
TEB Timed Elastic Band
MCL Monte Carlo Localization
IMU Inertial Measurement Unit
AI Artificial Intelligence
PID Proportional Integral Derivative Control
AMR Autonomous Mobile Robot
EKF Extended Kalman Filter
UKF Unscented Kalman Filter
RRT Rapidly-exploring Random Trees
PRM Probabilistic Roadmaps
MPC Model Predictive Control
SMC Sliding Mode Control
VFH Vector Field Histogram
DWA Dynamic Window Approach
DDS Data Distribution Service
GPI Generalized Proportional-Integral
APF Artificial Potential Field
QTM Qualisys Track Manager

Chapter A
Appendix

A.1 TebLocalPlannerROS Configuration

base_local_planner: teb_local_planner/TebLocalPlannerROS

TebLocalPlannerROS:
  odom_topic: /odom
  map_frame: /scout_map

  # Trajectory
  teb_autosize: True
  dt_ref: 0.2
  dt_hysteresis: 0.1
  min_samples: 3
  max_samples: 50
  controller_frequency: 20.0
  max_global_plan_lookahead_dist: 1.0
  global_plan_prune_distance: 0.1
  force_reinit_new_goal_dist: 0.0
  global_plan_viapoint_sep: 0.2

  # Robot
  max_vel_x: 3
  max_vel_x_backwards: 3
  max_vel_y: 2.7
  max_vel_trans: 2.7
  max_vel_theta: 3.14
  acc_lim_x: 2.7
  acc_lim_y: 2.7
  acc_lim_theta: 3.14
  min_turning_radius: 0
  wheelbase: 0.0
  cmd_angle_instead_rotvel: False
  holonomic_robot: True

  # Robot footprint settings
  footprint_model:
    vertices: [[-0.35, -0.3], [-0.35, 0.3], [0.35, 0.3], [0.35, -0.3]]
    type: polygon

  # Optimization
  no_inner_iterations: 10
  no_outer_iterations: 8
  optimization_activate: True
  penalty_epsilon: 0.1
  weight_kinematics_nh: 0.0
  weight_acc_lim_theta: 0.3
  weight_acc_lim_x: 0.3
  weight_acc_lim_y: 0.3
  weight_adapt_factor: 0.0
  weight_inflation: 0.0
  weight_kinematics_forward_drive: 0.0
  weight_kinematics_turning_radius: 50.0
  weight_max_vel_theta: 0.3
  weight_max_vel_x: 0.3
  weight_max_vel_y: 0.3
  weight_shortest_path: 10.0
  weight_optimaltime: 50
  weight_obstacle: 0
  weight_viapoint: 15
  weight_sync_with_dummy: 1.0

  # Obstacle
  min_obstacle_dist: 0.0
  inflation_dist: 0.0
  enable_multithreading: False
  include_costmap_obstacles: False
  include_dynamic_obstacles: False
  obstacle_poses_affected: 40
  legacy_obstacle_association: False

  # Goal tolerance
  xy_goal_tolerance: 0.2
  yaw_goal_tolerance: 0.5
  free_goal_vel: True

A.2 GlobalPlanner Configuration

base_global_planner: global_planner/GlobalPlanner

GlobalPlanner:
  use_quadratic: True
  orientation_mode: 1
  orientation_window_size: 5.0
  allow_unknown: True
  planner_window_x: 3.0
  planner_window_y: 3.0
  default_tolerance: 0.45
  cost_factor: 2.0
  inflation_radius: 0.3

# Frequency Parameters
controller_frequency: 40.0
controller_patience: 6.0
planner_frequency: 5.0
planner_patience: 5.0

# Safety Parameters
conservative_reset_dist: 1.0
recovery_behavior_enabled: False
clearing_rotation_allowed: False
shutdown_costmaps: False
oscillation_timeout: 0.0
oscillation_distance: 0.3
max_planning_retries: 3

# Recovery Behaviors
recovery_behaviors:
  - name: conservative_reset
    type: clear_costmap_recovery/ClearCostmapRecovery
  - name: soft_reset
    type: clear_costmap_recovery/ClearCostmapRecovery

conservative_reset:
  reset_distance: 3.0
  layer_names: [range_sensor_layer]

soft_reset:
  reset_distance: 1.5
  layer_names: [range_sensor_layer]

Algorithm 2 Follow Waypoint Navigation System (Part 1)
1: Initialize ROS node follow_waypoints
2: procedure ChangePose(waypoint, target_frame)
3: if waypoint.frame == target_frame then
4: return waypoint
5: end if
6: Transform waypoint to target_frame
7: return transformed pose
8: end procedure
9: procedure FollowPath(waypoints)
10: Initialize move_base action client
11: for each waypoint in waypoints do
12: Send move_base goal
13: while goal not reached do
14: Check robot position
15: if distance to goal ≤ tolerance then
16: Cancel goal and move to next waypoint
17: end if
18: end while
19: end for
20: return success
21: end procedure
22: procedure GetPath( )
23: Initialize mode as csv
24: Listen for keypress to toggle between csv and manual modes
25: if mode is manual then
26: Subscribe to /initialpose to add waypoints
27: else
28: Load waypoints from CSV file
29: end if
30: end procedure
31: procedure LoadCSVAndFollowPath( )
32: Read waypoints from CSV file
33: Publish waypoints as PoseArray
34: return success
35: end procedure
36: procedure
PathComplete( ) 37: Log Path Completed 38: return success 39: end procedure 80 A. Appendix Algorithmus 3 Follow Waypoint Navigation System (Part 2) 1: procedure Main( ) 2: Create state machine sm 3: Add states: 4: - GET_PATH (Choose between csv or manual mode) 5: - LOAD_CSV (Load waypoints from CSV) 6: - FOLLOW_PATH (Follow waypoints) 7: - MANUAL_MODE (Collect and follow manual waypoints) 8: - PATH_COMPLETE (Log completion and restart process) 9: Execute state machine 10: end procedure A.3 Statistical Evaluation the Dummy_Emulation_Framework’s Performance in simulation Serial No. Mgd (Meters) Mhd (Radians) Time Difference (Seconds) 1 3.04 × 10 − 5 0.000609 0.02 2 2.88 × 10 − 6 0.000433 0.01 3 1.08 × 10 − 5 0.000119 0.07 4 9.30 × 10 − 6 0.00106 0.05 5 1.43 × 10 − 5 0.000199 0.03 6 1.48 × 10 − 5 0.00017 -0.02 7 2.78 × 10 − 5 0.000461 0.02 8 2.61 × 10 − 6 0.000992 0.02 9 2.26 × 10 − 5 0.000498 -0.04 10 2.33 × 10 − 5 0.000189 0.01 11 2.64 × 10 − 5 0.00057 0.0 12 1.65 × 10 − 5 0.000907 0.01 13 7.10 × 10 − 6 0.0011 0.06 14 1.78 × 10 − 5 0.00127 0.02 15 4.08 × 10 − 6 0.000182 0.03 16 4.12 × 10 − 5 0.000301 0.02 17 5.86 × 10 − 6 0.000394 -0.02 18 1.46 × 10 − 5 0.000672 -0.02 19 1.11 × 10 − 5 0.000352 0.04 20 2.61 × 10 − 6 0.00121 0.06 21 6.29 × 10 − 6 0.000358 0.0 22 4.73 × 10 − 6 0.00027 0.0 (Continued on next page) 81 A. Appendix Serial No. 
Mgd (Meters) Mhd (Radians) Time Difference (Seconds) 23 1.61 × 10 − 6 0.00314 0.05 24 6.29 × 10 − 6 0.000211 0.01 25 1.51 × 10 − 5 0.00431 -0.01 26 4.80 × 10 − 6 0.000553 -0.02 27 5.72 × 10 − 6 0.000649 0.04 28 6.85 × 10 − 6 0.00119 0.02 29 1.28 × 10 − 5 0.000242 0.05 30 1.50 × 10 − 5 8.96e-05 0.01 31 3.20 × 10 − 6 0.000215 0.03 32 6.56 × 10 − 6 0.000236 0.0 33 4.34 × 10 − 6 0.0295 0.01 34 1.34 × 10 − 5 0.000276 0.01 35 2.85 × 10 − 5 0.00129 0.0 36 8.06 × 10 − 6 0.00148 0.0 37 4.47 × 10 − 6 0.000195 0.01 38 7.75 × 10 − 6 8.79e-05 0.02 39 4.15 × 10 − 6 0.000889 -0.03 40 4.99 × 10 − 6 0.000615 -0.04 41 2.03 × 10 − 6 0.00016 -0.02 42 4.38 × 10 − 5 0.000329 0.01 43 4.68 × 10 − 6 0.000373 -0.03 44 9.99 × 10 − 6 0.000106 0.03 45 2.11 × 10 − 5 0.00052 0.0 46 1.19 × 10 − 5 0.000212 -0.01 47 1.94 × 10 − 5 0.000353 0.02 48 4.19 × 10 − 5 0.000229 0.06 49 2.90 × 10 − 6 7.45e-05 -0.03 50 5.22 × 10 − 6 0.000222 0.0 51 4.68 × 10 − 6 0.00238 0.04 52 1.98 × 10 − 5 0.00032 0.02 53 2.99 × 10 − 5 0.000992 0.01 54 4.67 × 10 − 6 0.000106 0.03 55 3.16 × 10 − 6 0.000792 0.0 56 2.80 × 10 − 6 0.00114 0.03 57 3.76 × 10 − 6 0.000416 0.01 58 3.05 × 10 − 6 0.000183 0.05 59 4.57 × 10 − 6 0.00129 0.05 60 nan nan nan 61 6.79 × 10 − 6 8.76e-05 0.0 62 5.45 × 10 − 6 0.00127 0.01 63 8.42 × 10 − 6 0.000727 0.01 (Continued on next page) 82 A. Appendix Serial No. 
Mgd (Meters) Mhd (Radians) Time Difference (Seconds) 64 1.47 × 10 − 5 0.000339 0.04 65 5.60 × 10 − 6 0.0004 0.05 66 1.86 × 10 − 6 0.00053 0.04 67 4.41 × 10 − 6 0.000348 0.04 68 1.44 × 10 − 5 0.000538 0.04 69 3.39 × 10 − 6 0.00163 0.03 70 7.85 × 10 − 6 0.000574 0.03 71 1.85 × 10 − 5 0.00101 0.03 72 2.46 × 10 − 6 0.000723 0.01 73 1.96 × 10 − 5 0.000935 0.02 74 3.30 × 10 − 6 0.000683 -0.02 75 2.16 × 10 − 5 0.00477 0.02 76 1.61 × 10 − 5 0.000605 0.03 77 8.65 × 10 − 6 0.00131 0.0 78 8.05 × 10 − 6 0.000142 0.01 79 1.39 × 10 − 5 0.00108 0.0 80 5.34 × 10 − 6 0.00174 -0.01 81 1.48 × 10 − 5 0.00078 -0.01 82 8.59 × 10 − 6 0.00093 0.03 83 4.27 × 10 − 6 0.00118 0.05 84 5.83 × 10 − 6 0.00137 0.04 85 2.00 × 10 − 5 0.00294 0.03 86 6.09 × 10 − 6 0.00096 0.0 87 4.29 × 10 − 6 0.00025 0.04 88 4.79 × 10 − 6 0.00021 0.03 89 8.90 × 10 − 6 0.0003 0.0 90 6.05 × 10 − 6 0.00018 0.02 91 1.32 × 10 − 5 0.00066 -0.01 92 1.26 × 10 − 5 0.00122 0.01 93 4.89 × 10 − 5 0.00062 0.03 94 6.80 × 10 − 6 0.00039 0.02 95 5.15 × 10 − 6 0.00243 0.01 96 6.21 × 10 − 6 0.00023 0.04 97 6.63 × 10 − 6 0.00032 0.03 98 5.17 × 10 − 6 0.00043 0.02 99 1.03 × 10 − 5 0.00057 0.02 100 8.15 × 10 − 6 0.00056 -0.01 101 8.04 × 10 − 6 0.00013 0.01 102 2.64 × 10 − 6 0.00995 -0.06 83 A. Appendix A.4 Statistical Evaluation the Dummy_Emulation_Framework’s Performance on the physical robot Serial No. 
Mgd (Meters) Mhd (Radians) Time Difference (Seconds) 1 0.00488 0.03774 0.08 2 0.000986 0.00348 -0.02 3 0.00638 0.00576 0.06 4 0.00397 0.00813 0.00 5 0.00756 0.00673 0.01 6 0.00459 0.00638 -0.01 7 0.00266 0.00753 0.00 8 0.00181 0.01843 0.02 9 0.01223 0.01267 0.02 10 0.00277 0.00272 -0.01 11 0.00174 0.00566 0.01 12 0.01257 0.01346 -0.01 13 0.00772 0.01514 0.05 14 0.00842 0.01149 0.02 15 0.00269 0.01486 0.03 16 0.00421 0.00688 0.02 17 0.00155 0.00709 0.03 18 0.00792 0.01480 0.08 19 0.00841 0.01101 -0.01 20 0.00232 0.03889 0.00 21 0.00423 0.00645 0.00 22 0.00405 0.00738 0.01 23 0.00061 0.00936 0.05 24 0.00335 0.03710 0.00 25 0.00371 0.05076 0.00 26 0.00076 0.00666 0.02 27 0.00390 0.01720 -0.01 28 0.00440 0.02252 0.02 29 0.00510 0.01189 0.06 30 0.00440 0.00343 0.01 31 0.00090 0.01161 0.04 32 0.00166 0.00620 0.01 33 0.00569 0.2711 -0.01 34 0.00205 0.0068 0.01 35 0.00599 0.03591 0.0 36 0.00344 0.0302 0.0 (Continued on next page) 84 A. Appendix Serial No. Mgd (Meters) Mhd (Radians) Time Difference (Seconds) 37 0.00185 0.0119 0.01 38 0.00060 0.00677 -0.01 39 0.00391 0.00709 -0.03 40 0.00228 0.01204 -0.05 41 0.00120 0.0134 -0.04 42 0.00212 0.0093 -0.01 43 0.00699 0.00691 0.03 44 0.00261 0.00510 0.03 45 0.00691 0.00814 0.02 46 0.00780 0.01467 0.02 47 0.00698 0.01507 0.02 48 0.00184 0.02848 0.00 49 0.00203 0.0096 -0.02 50 0.00291 0.0038 0.04 51 0.00397 0.00690 0.04 52 0.00883 0.01184 -0.03 53 0.00323 0.01087 0.01 54 0.00763 0.00981 0.03 55 0.00377 0.00845 0.00 56 0.00154 0.00631 0.03 57 0.00399 0.02056 0.02 58 0.00195 0.07979 0.00 59 0.00407 0.01376 0.01 60 nan nan nan 61 0.00591 0.01136 -0.25 62 0.00322 0.02136 0.04 63 0.00416 0.00662 -0.01 64 0.00667 0.01258 0.02 65 0.00695 0.00598 0.02 66 0.00136 0.00536 0.06 67 0.00225 0.00389 0.06 68 0.00201 0.00765 -0.09 69 0.00394 0.01852 0.03 70 0.00139 0.00868 -0.01 71 0.00815 0.01026 0.04 72 0.00888 0.01942 -0.03 73 0.00481 0.01345 0.01 74 0.00218 0.01136 -0.02 75 0.00766 0.34204 0.01 76 0.00342 0.00735 0.00 77 0.00675 0.01876 0.00 
(Continued on next page) 85 A. Appendix Serial No. Mgd (Meters) Mhd (Radians) Time Difference (Seconds) 78 0.00520 0.00657 -0.01 79 0.00128 0.00684 0.00 80 0.00620 0.02480 -0.01 81 0.00488 0.00828 0.01 82 0.00433 0.02466 0.03 83 0.00210 0.00933 -0.02 84 0.00298 0.00618 -0.09 85 0.00596 0.01184 -0.00 86 0.00754 0.01729 -0.02 87 0.00138 0.00992 -0.01 88 0.00159 0.00621 -0.02 89 0.00298 0.01339 0.00 90 0.00544 0.01546 -0.03 91 0.00289 0.00674 0.02 92 0.00177 0.00455 0.02 93 0.00291 0.00376 0.00 94 0.00533 0.00629 0.00 95 0.00265 0.04256 0.02 96 0.00520 0.01275 -0.02 97 0.00558 0.00939 0.00 98 0.00749 0.00650 0.02 99 nan nan nan 100 nan nan nan 101 0.00142 0.00191 -0.14 102 0.00082 0.06244 0.00 86 Bibliography [1] T. Massie and K. Salisbury, “The PHANToM Haptic Interface: A Device for Prob- ing Virtual Objects.” ASME Winter Annual Meeting, 1994, p. pp. 295 300. [2] B. Jackson and L. Rosenberg, “Force feedback and medical simulation„” Interactive Technology and the New Paradigm for Healthcare, January, 1995. [3] K. Hirota and M. Hirose, “Development of surface display,” in Proceedings of IEEE Virtual Reality Annual International Symposium, 1993, pp. 256–262. [4] Y. Yokokohji, R. L. Hollis, and T. Kanade, “What you can see is what you can feel-development of a visual/haptic interface to virtual environment,” Proceedings of the IEEE 1996 Virtual Reality Annual International Symposium, pp. 46–53, 1996. [Online]. Available: https://api.semanticscholar.org/CorpusID:11886305 [5] S. Mortezapoor, K. Vasylevska, E. Vonach, and H. Kaufmann, “Cobodeck: A large- scale haptic vr system using a collaborative mobile robot,” in 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 2023, pp. 297–307. [6] S. Oberer and R. Schraft, “Robot-dummy crash tests for robot safety assessment,” 05 2007, pp. 2934 – 2939. [7] D. Herrera, F. Roberti, M. Toibero, and R. 
Carelli, "Dynamic emulation of human locomotion through mobile robots," in 2016 IEEE Conference on Control Applications (CCA), 2016, pp. 1111–1116.
[8] M. Ou, H. Sun, Z. Zhang, and S. Gu, "Fixed-time trajectory tracking control for nonholonomic mobile robot based on visual servoing," Nonlinear Dynamics, vol. 108, March 2022.
[9] M. Ou, H. Sun, Z. Zhang, and L. Li, "Fixed-time trajectory tracking control for multiple nonholonomic mobile robots," Transactions of the Institute of Measurement and Control, vol. 43, p. 014233122096641, November 2020.
[10] J. F. Nunamaker, M. Chen, and T. D. M. Purdin, "Systems development in information systems research," Journal of Management Information Systems, vol. 7, no. 3, pp. 89–106, 1990.
[11] K. Siozios, E. Kosmatopoulos, and D. Soudris, Cyber-Physical Systems: Decision Making Mechanisms and Applications. River Publishers, September 2022.
[12] O. Grau, "Into the Belly of the Image: Historical Aspects of Virtual Reality," Leonardo, vol. 32, no. 5, pp. 365–371, October 1999. [Online]. Available: https://doi.org/10.1162/002409499553587
[13] T. Emerson, "Mastering the art of VR: on becoming the HIT Lab cybrarian," The Electronic Library, vol. 11, pp. 385–391, 1993.
[14] P. Desai, P. Desai, K. Ajmera, and K. Mehta, "A review paper on Oculus Rift-a virtual reality headset," ArXiv, vol. abs/1408.1173, 2014.
[15] B. Kirsch, U. Schnepf, and I. Wachsmuth, "Robots and simulated environments-first steps towards virtual robotics," Proceedings of 1993 IEEE Research Properties in Virtual Reality Symposium, pp. 122–123, 1993.
[16] G. Gironimo, A. Marzano, and A. Tarallo, "Human robot interaction in virtual reality," pp. 107–112, 2007.
[17] G. Chen and J.-P. Chen, "Applying virtual reality to remote control of mobile robot," vol. 123, pp. 383–390, 2017.
[18] O. Liu, D. Rakita, B. Mutlu, and M. Gleicher, "Understanding human-robot interaction in virtual reality," 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 751–757, 2017.
[19] X. Xie, Q. Lin, H. Wu, J. Adams, and B. Bodenheimer, "Immersion with robots in large virtual environments," 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 273–274, 2012.
[20] F. Rubio, F. Valero, and C. Llopis-Albert, "A review of mobile robots: Concepts, methods, theoretical framework, and applications," International Journal of Advanced Robotic Systems, vol. 16, p. 172988141983959, April 2019.
[21] N. Koenig and A. Howard, "Design and use paradigms for Gazebo, an open-source multi-robot simulator," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, September 2004, pp. 2149–2154.
[22] Y. Maruyama, S. Kato, and T. Azumi, "Exploring the performance of ROS2," October 2016, pp. 1–10.
[23] D. Fox, W. Burgard, F. Dellaert, and S. Thrun, "Monte Carlo localization: Efficient position estimation for mobile robots," January 1999, pp. 343–349.
[24] L. Zhang, R. Zapata, and P. Lépinay, "Self-adaptive Monte Carlo localization for mobile robots using range sensors," 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009.
[25] L. Chen, P. Sun, G. Zhang, J. Niu, and X. Zhang, "Fast Monte Carlo localization for mobile robot," vol. 144, pp. 207–211, 2011.
[26] A. C. Almeida, A. H. R. Costa, and R. A. C. Bianchi, "Vision-based Monte-Carlo localization for humanoid soccer robots," 2017 Latin American Robotics Symposium (LARS) and 2017 Brazilian Symposium on Robotics (SBR), pp. 1–6, 2017.
[27] X. Zhong, Y. Zhou, and H. Liu, "Design and recognition of artificial landmarks for reliable indoor self-localization of mobile robots," International Journal of Advanced Robotic Systems, vol. 14, p. 172988141769348, February 2017.
[28] F. Zafari, A. Gkelias, and K. K. Leung, "A survey of indoor localization systems and technologies," IEEE Communications Surveys & Tutorials, vol. 21, no. 3, pp. 2568–2599, 2019.
[29] G. Lvov, M. Zolotas, N. Hanson, A. Allison, X. Hubbard, M. Carvajal, and T. Padir, "Mobile MoCap: Retroreflector localization on-the-go," 2023. [Online]. Available: https://arxiv.org/abs/2303.13681
[30] A. Pinto, A. Moreira, and P. Costa, "Indoor localization system based on artificial landmarks and monocular vision," TELKOMNIKA, vol. 10, pp. 609–620, December 2012.
[31] O. Wijk and H. Christensen, "Extraction of natural landmarks and localization using sonars," July 1998.
[32] A. Betourne and G. Campion, "Dynamic modelling and control design of a class of omnidirectional mobile robots," in Proceedings of IEEE International Conference on Robotics and Automation, vol. 3, 1996, pp. 2810–2815.
[33] H. Sira-Ramirez, C. López-Uribe, and M. Velasco-Villa, "Linear observer-based active disturbance rejection control of the omnidirectional mobile robot," Asian Journal of Control, vol. 15, January 2013.
[34] D. Cong, C. Liang, Q. Gong, X. Yang, and J. Liu, "Path planning and following of omnidirectional mobile robot based on B-spline," in 2018 Chinese Control And Decision Conference (CCDC), 2018, pp. 4931–4936.
[35] K. Kanjanawanishkul and A. Zell, "Path following for an omnidirectional mobile robot based on model predictive control," in 2009 IEEE International Conference on Robotics and Automation, 2009, pp. 3341–3346.
[36] K. Berntorp, B. Olofsson, and A. Robertsson, "Path tracking with obstacle avoidance for pseudo-omnidirectional mobile robots using convex optimization," in 2014 American Control Conference, 2014, pp. 517–524.
[37] M. V.-V. J. A. Vázquez, "Path tracking with obstacle avoidance for pseudo-omnidirectional mobile robots using convex optimization," in IFAC Proceedings Volumes, vol. 41, no. 2, 2008, pp. 5365–5370.
[38] H.-C. Huang and C.-C. Tsai, "Adaptive trajectory tracking and stabilization for omnidirectional mobile robot with dynamic effect and uncertainties," IFAC Proceedings Volumes, vol. 17, July 2008.
[39] D. Xu, D. Zhao, J. Yi, and X. Tan, "Trajectory tracking control of omnidirectional wheeled mobile manipulators: Robust neural network-based sliding mode approach," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 3, pp. 788–799, 2009.
[40] N. Amarasiri, A. Barhorst, and R. Gottumukkala, "Sharp curve trajectory tracking of a universal omni-wheeled mobile robot using a sliding mode controller," ASME Letters in Dynamic Systems and Control, 2024.
[41] J. Wang, J. Chen, S. Ouyang, and Y. Yang, "Trajectory tracking control based on adaptive neural dynamics for four-wheel drive omni-directional mobile robots," 2014.
[42] K. Besseghieur, R. Trebinski, W. Kaczmarek, and J. Panasiuk, "Trajectory tracking control for a nonholonomic mobile robot under ROS," Journal of Physics: Conference Series, vol. 1016, p. 012008, May 2018.
[43] J. F. M. Santiago, J. Fragoso-Mandujano, S. Gómez-Peñate, V. D. C. González, and F. López-Estrada, "Trajectory tracking and obstacle avoidance with TurtleBot 3 Burger and ROS 2," 2023 XXV Robotics Mexican Congress (COMRob), pp. 93–98, 2023.
[44] N. D. Muñoz, J. A. Valencia, and N. Londoño, "Evaluation of navigation of an autonomous mobile robot," ser. PerMIS '07. Association for Computing Machinery, 2007, pp. 15–21. [Online]. Available: https://doi.org/10.1145/1660877.1660878
[45] J. Rosenblatt, "DAMN: A distributed architecture for mobile navigation," Ph.D. dissertation, Carnegie Mellon University, Pittsburgh, PA, January 1997.
[46] PerMIS '07: Proceedings of the 2007 Workshop on Performance Metrics for Intelligent Systems. New York, NY, USA: Association for Computing Machinery, 2007.
[47] AgileX Robotics, "Scout Mini: High-speed 4-wheel drive mobile robot," Product page, 2025, accessed: 2025-06-20. [Online]. Available: https://global.agilex.ai/products/scout-mini
[48] General Laser, "Mark 1 MCNM," Product page, 2025, accessed: 2025-06-20. [Online]. Available: https://www.general-laser.at/en/shop-en/mark-1-mcnm-en
[49] Raspberry Pi Ltd, "Compute Module 5," Product page, 2024, accessed: 2025-06-20; officially announced on 27 November 2024. [Online]. Available: https://www.raspberrypi.com/products/compute-module-5/?variant=cm5-104032
[50] Intel, "Intel NUC 11 Pro," Product page, 2022, accessed: 2025-06-20; page dated 28 October 2022 in Intel content repository. [Online]. Available: https://www.intel.com/content/www/us/en/content-details/720882/intel-nuc-11-pro.html
[51] NVIDIA, "Embedded Jetson modules," Webpage, NVIDIA Developer, 2025, accessed: 2025-06-20. [Online]. Available: https://developer.nvidia.com/embedded/jetson-modules
[52] Seeed Studio, "reComputer J101-v2 carrier board for Jetson Nano," Product page, 2023, accessed: 2025-06-20; documentation states date created 23 February 2023, last updated 5 May 2023. [Online]. Available: https://www.seeedstudio.com/reComputer-J101-v2-Carrier-Board-for-Jetson-Nano-p-5396.html?srsltid=AfmBOorF6Idz10l8ZhPVeUnVTszwijoBUpQTD_GS_7LKsleUMoDkSm59
[53] Arduino, "Arduino Uno Rev3," Product page, 2025, accessed: 2025-06-20. [Online]. Available: https://store.arduino.cc/products/arduino-uno-rev3?srsltid=AfmBOorJlGQpqBUWldcLXlCrqdXMI9hu9Bw6eYjyuqbD2vZJYKdcrpuk
[54] Intel Corporation, "Intel RealSense tracking and depth," Intel Corporation, Tech. Rep., 2019, whitepaper, Revision 001. [Online]. Available: https://www.intelrealsense.com/wp-content/uploads/2019/11/Intel_RealSense_Tracking_and_Depth_Whitepaper_rev001.pdf
[55] Hardkernel Co., Ltd., "ODROID-N2+ technical specifications," Hardkernel, Tech. Rep., 2021. [Online]. Available: https://www.hardkernel.com/shop/odroid-n2-with-4gbyte-ram-2/
[56] Robotnik Automation, "RB-KAIROS+ autonomous mobile manipulator: Technical datasheet," Robotnik, Tech. Rep., 2024, includes full specifications of omnidirectional base, UR arm options, sensors, autonomy, and software architecture. [Online]. Available: https://robotnik.eu/wp-content/uploads/2024/07/Robotnik_Datasheet_RB-KAIROS-10e_2024_EN.pdf