Our work proposes a prototype robotic system for rapid and efficient patient registration, continuous monitoring of critical vital signs, and improved patient engagement through a Python-based chatbot. Sensors interfaced with a Raspberry Pi measure heart (pulse) rate, SpO2, and temperature. To facilitate patient registration, a touch-screen display is connected to the Raspberry Pi. The chatbot, developed in Python, responds to pre-defined questions and assists patients with general inquiries. Our initial results show that patients can register themselves independently without the assistance of a nurse, check their vital signs, and have their queries answered by the chatbot. The robot improves both the patient experience and overall hospital operational efficiency.
Introduction
In a recent study on improving institutional quality, researchers applied automation techniques to eliminate data inaccuracies and greatly reduce the time needed to transfer information from vital sign monitoring devices to electronic health records. The preliminary results showed not only greater efficiency and effectiveness in nursing tasks but also high user satisfaction. By automating administrative duties, both employee and patient satisfaction ratings improved, and nursing staff could devote more time to patients. A considerable digital revolution is taking place in the healthcare industry, yet some systems still depend on manual procedures for patient registration and vital sign monitoring. These traditional techniques are prone to human error and adversely affect the patient's overall care and hospital efficiency. Social robots are being incorporated into healthcare settings, providing important assistance to children, the elderly, and patients with chronic illnesses, thereby improving patient care and engagement.
Technology has always played an important role in agricultural, industrial, and personal development. The same holds for the healthcare industry, where it has introduced new medical devices and procedures, enhanced patient care, and reduced hospital wait times and expenses. Nonetheless, gaps remain in applying this technology to the overall patient experience. Existing systems fail to deliver a seamless experience, and the integration of several technologies into a unified system remains limited despite being crucial.
The aim of this study is to showcase progress in healthcare technology through the creation of a unified robotic system that automates patient registration, monitors vital signs, and offers interactive chatbot assistance. The proposed system decreases the manual workload of health personnel, enhances self-management in patients, and creates a more organized and integrated environment in healthcare.
In this paper, Section 2 presents a broad literature survey, reviewing existing systems and technologies related to healthcare robotics, patient monitoring, and chatbot integration. Section 3 presents the overview and methodology of the project in terms of the design, components, and workflow of the robot we have developed, which integrates all these features into one unified system. The next section describes the experimental setup, followed by the results obtained from testing the sensor data, visualizing it on the display, and evaluating the responses given by the AI chatbot. Finally, we conclude with our discussion and the future research potential of the robot.
What if there were a way to provide non-contact service, patient interaction, easy handling, timely medication, and assistance to nurses? We present EMMA, a multifunctional autonomous robot designed to offer all of these capabilities.
Methodology
A. System Design
The robot's head features two 2.4-inch TFT displays that show animated eye movements, giving it a more human-like appearance. With Wi-Fi, a transceiver module, and a GSM module, it provides reliable internet access and emergency communication via SMS.
The robot is equipped with two separate robotic arms, each with four degrees of freedom (DOF). The right arm, fitted with a gripper mechanism integrated with a spray nozzle that dispenses sanitizing liquid, is programmed for room sanitation. The left arm features a parallel-jaw gripper used for object manipulation: gripping, holding, or interacting with different objects. This arm is also programmed to perform gestures for greetings and other non-verbal interaction with patients.
The body of the robot has an interactive display where the patient can self-register, allowing the doctor to access the patient's details directly. The display also provides a chatbot and an emergency call feature, with a microphone and speakers for patient interaction. A sensor panel equipped with SpO2, temperature, and heart rate sensors measures vital signs, which are monitored and displayed in real time. A processor within the body handles sensor data processing and manages the interactive display functions. A keyboard below the display allows the patient to manually enter data during registration. A 3D depth camera captures patient photos for registration and assists with the robot's navigation for mobility and operational efficiency.
The base of the robot functions as the main chassis and is equipped with four high-torque motors and wheels for efficient mobility. A LiDAR sensor mounted on top of the base is used for navigation and environmental mapping within the hospital, allowing the robot to move autonomously, avoid obstacles, and ensure precise mobility. Figure 1 depicts the 3D CAD model of the proposed robotic system.
Fig. 1. The CAD model of the proposed prototype.
B. Key Components
Our robot combines all the elements of a health system, and each plays an important role in ensuring effective patient monitoring, registration, and communication.
The MAX30100 pulse oximeter sensor measures vital parameters such as heart rate and blood oxygen saturation. The MLX90614 infrared temperature sensor monitors body temperature without contact, checking that it remains within the normal range of 36 to 37 °C. The collected data is sent directly to the Raspberry Pi, the main processing unit.
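As a minimal sketch of this sensor interface, the snippet below reads the MLX90614 over I2C on the Raspberry Pi. It assumes the smbus2 package and the sensor's default I2C address; the MAX30100 reading is only stubbed, since it is typically handled by a dedicated driver library.

```python
# Minimal sketch: read body temperature from the MLX90614 over I2C (bus 1 on the Pi).
from smbus2 import SMBus

MLX90614_ADDR = 0x5A      # default I2C address of the MLX90614
OBJECT_TEMP_REG = 0x07    # RAM register holding the object temperature

def read_body_temperature(bus: SMBus) -> float:
    """Return the object (body) temperature in degrees Celsius."""
    raw = bus.read_word_data(MLX90614_ADDR, OBJECT_TEMP_REG)
    return raw * 0.02 - 273.15   # datasheet scaling: 0.02 K per LSB

def read_spo2_and_pulse():
    """Placeholder: the MAX30100 is read through its own driver library."""
    raise NotImplementedError

with SMBus(1) as bus:
    temp_c = read_body_temperature(bus)
    print(f"Body temperature: {temp_c:.1f} C")
```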
The Raspberry Pi High Quality Camera takes images of patients for registration, which are then processed on the Pi. The camera is well supported by the Raspberry Pi, enabling reliable image capture and processing.
The chatbot functionality is implemented using the Google Text-to-Speech and Speech Recognition libraries in Python. A patient's voice input is recorded by the USB microphone and passed on for processing. The response is played back to the patient via a Bluetooth speaker during the conversation between the patient and the chatbot.
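A minimal sketch of this voice front end is given below, assuming the SpeechRecognition and gTTS packages and a command-line MP3 player (mpg123); the actual response lookup is represented by a placeholder.

```python
# Minimal sketch: capture speech from the USB microphone and speak a reply.
import os
import speech_recognition as sr
from gtts import gTTS

def listen() -> str:
    """Capture one utterance from the microphone and transcribe it."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)   # online transcription

def speak(text: str) -> None:
    """Synthesize a reply with Google Text-to-Speech and play it."""
    gTTS(text=text, lang="en").save("reply.mp3")
    os.system("mpg123 -q reply.mp3")            # played through the Bluetooth speaker

question = listen()
speak(f"You asked: {question}")                 # placeholder for the real response lookup
```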
Fig. 2. The block diagram of the system.
The emergency call and alert feature is implemented using the SIM800L GPRS/GSM module, which can send text messages and call the doctor whenever the emergency button is pressed. Integrating the parts into a holistic health system requires the hardware devices, communication protocols, and software to work together so that all data are managed accurately. The block diagram in Figure 2 shows the integration of the various components used to build the healthcare robot. The Raspberry Pi is the core of the whole system, acting as the central processing unit that interfaces with multiple sensors to perform various functions. Vital signs are monitored using the MAX30100 heart rate and pulse oximeter, which measures both heart rate and blood oxygen levels, and the MLX90614ESF infrared temperature sensor, which provides non-contact body temperature readings. For autonomous navigation and mapping, the system proposes using a LiDAR and an Intel® RealSense depth camera to measure distances accurately and create a 3D map of the surroundings, which is then used to detect obstacles and navigate through the environment. A GSM module handles communication, and the 7-inch TFT Pi display allows user interaction. The chatbot is integrated with a microphone and speakers to enable voice-based patient interaction. All data are sent to a cloud database for analysis and storage, and patient data can be accessed through mobile devices.
C. User Interface
After the patient's vital signs data are collected, they are transmitted to the cloud for real-time monitoring and analysis. The results are presented to doctors via a mobile application designed specifically for hospital management, nurses, and doctors. For real-time data storage, our prototype uses Firebase, a cloud-based platform by Google that offers a wide array of tools for building and managing applications in real time.
Firebase offers services such as the Realtime Database, Authentication, Cloud Storage, and Hosting, enabling seamless storage and synchronization of data. The vital signs data are stored and updated in real time within Firebase's Realtime Database. To secure communication with the cloud, the database URL and a secret (authentication) key are used to access the database, and data transmission occurs over HTTP-based protocols. Once updated, the data are relayed to the mobile application, allowing doctors to conduct a preliminary assessment of the patient's current health while the patient is still waiting in the lobby, saving considerable time and effort. Figure 3 shows the user interface of the developed app.
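A minimal sketch of this HTTP-based upload is shown below, using the Firebase Realtime Database REST interface; the database URL, authentication key, and data path are hypothetical placeholders for the project-specific values.

```python
# Minimal sketch: push one vitals reading to the Firebase Realtime Database over REST.
import time
import requests

DB_URL = "https://your-project-id-default-rtdb.firebaseio.com"  # placeholder URL
AUTH_KEY = "YOUR_DATABASE_SECRET"                                # placeholder secret key

def upload_vitals(patient_id: str, spo2: float, pulse: int, temp_c: float) -> None:
    """Write the latest reading under /vitals/<patient_id> via HTTP PATCH."""
    payload = {
        "spo2": spo2,
        "pulse": pulse,
        "temperature": temp_c,
        "timestamp": int(time.time()),
    }
    url = f"{DB_URL}/vitals/{patient_id}.json"
    resp = requests.patch(url, json=payload, params={"auth": AUTH_KEY}, timeout=5)
    resp.raise_for_status()

upload_vitals("patient_001", spo2=98.0, pulse=72, temp_c=36.6)
```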
Fig. 3. The user interface of the app.
Furthermore, there is a user interface for the interactive display placed on the body of the robot. The interface is developed in Python using tkinter, a versatile Python library that allows the developer to create an attractive and appealing graphical user interface (GUI). The main page contains six buttons: 'Automatic registration', 'Check your vitals', 'Play & Watch', 'Emergency Call Service', 'Chatbot', and 'Medicine dispenser'. Figure 4 depicts the main screen of the interactive display of our robot, and a minimal layout sketch is given after the button list below.
Fig. 4. The user interface of the interactive display.
The following points describe the function of each button:
· Automatic registration: Quickly registers patient details into the system via picamera.
· Check your vitals: Measures and displays health parameters such as heart rate, temperature, and SpO2.
· Play & Watch: Provides multimedia content for entertainment or health education.
· Emergency Call Service: Allows instant alert to healthcare professionals in case of an emergency.
· Chatbot: Offers an interactive companion feature for inquiries, support & engagement.
· Medicine dispenser: Automatically dispenses prescribed medications at scheduled times.
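The sketch below shows a minimal tkinter layout for this main page; the button callbacks are placeholders standing in for the actual service screens, and the window size is assumed to match the 7-inch Pi display.

```python
# Minimal sketch: main menu of the interactive display built with tkinter.
import tkinter as tk

SERVICES = [
    "Automatic registration", "Check your vitals", "Play & Watch",
    "Emergency Call Service", "Chatbot", "Medicine dispenser",
]

def open_service(name: str) -> None:
    print(f"Opening: {name}")          # placeholder for the real page switch

root = tk.Tk()
root.title("EMMA - Patient Services")
root.geometry("800x480")               # assumed resolution of the 7-inch Pi display

for idx, name in enumerate(SERVICES):
    tk.Button(root, text=name, height=2, width=30,
              command=lambda n=name: open_service(n)).grid(
        row=idx // 2, column=idx % 2, padx=20, pady=15)

root.mainloop()
```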
D. Navigation using ROS2
We have simulated our robot using ROS2, Gazebo, and Nav2 to demonstrate its navigation capabilities. ROS2 (Robot Operating System 2) is an open-source framework for robot control, communication, and navigation. It allows the integration of software plugins and hardware sensors for real-time robot control. Gazebo is a simulation tool that replicates real-world physics and sensor data in a virtual environment. Nav2 is a ROS2 package for autonomous navigation, path planning, and obstacle avoidance. Rviz is a tool to visualize and monitor the robot's sensor data, mapping, and movement in real time. We wrote an XML robot description to visualize our robot in 3D and spawn it in the Gazebo environment, as shown in Figure 5.
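For illustration, a minimal ROS2 Python launch file of the kind commonly used for this step is sketched below: it publishes the XML robot description and spawns the model into Gazebo. The package and file names (emma_description, emma.urdf) are hypothetical placeholders, not the actual project files.

```python
# Minimal sketch: ROS2 launch file that publishes the robot description and spawns it in Gazebo.
import os
from launch import LaunchDescription
from launch_ros.actions import Node
from ament_index_python.packages import get_package_share_directory

def generate_launch_description() -> LaunchDescription:
    urdf = os.path.join(get_package_share_directory("emma_description"),
                        "urdf", "emma.urdf")       # hypothetical package and file
    with open(urdf, "r") as f:
        robot_description = f.read()

    return LaunchDescription([
        # Publishes /robot_description and the /tf tree from the URDF
        Node(package="robot_state_publisher", executable="robot_state_publisher",
             parameters=[{"robot_description": robot_description}]),
        # Spawns the described robot into the running Gazebo world
        Node(package="gazebo_ros", executable="spawn_entity.py",
             arguments=["-topic", "robot_description", "-entity", "emma"]),
    ])
```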
Fig. 5. Rviz model of the robot simulated in Gazebo.
Using Gazebo for the virtual simulation, we spawned a virtual hospital environment with walls, tables, and other dynamic obstacles to replicate real-world conditions that could be present in a hospital. The robot is also equipped with a LiDAR for distance measurement and an IMU for motion tracking. Figure 6 below shows the RQT graph describing the interaction between the different ROS2 nodes in this setup. The /robot_state_publisher node publishes the current state of the robot and its transformations to the /tf and /tf_static topics. These transformations are consumed by transform_listener_impl nodes, which track the robot's pose. The /cartographer_node handles SLAM, generating submaps and feeding data to the /cartographer_occupancy_grid_node, which creates the occupancy grid map for navigation. This graph outlines the essential communication flow for mapping and localization in the simulation.
Fig. 6. RQT graph of the nodes in the simulation.
To perform navigation in the hospital environment, we implemented Cartographer SLAM in Gazebo. The simulation was visualized in Rviz, where both the mapping process and robot localization were monitored, as displayed in Figure 7. Cartographer SLAM performs simultaneous mapping and localization: the robot creates a map of the unknown environment while determining its location within that map. The position of the robot is estimated using a combination of sensor inputs, and the algorithm updates the map as the robot explores and navigates the environment. The system iteratively refines both the map and the localization by minimizing the error between expected sensor measurements and actual observations using a method called scan matching. Once the map is built and the robot is localized, Cartographer SLAM allows the robot to plan a path to its target destination while avoiding obstacles. The occupancy grid created from the LiDAR data provides information about free and occupied space, which is used by the path planner.
Fig. 7. Cartography SLAM Used for Mapping Simulation in Rviz.
The cost function for path planning can be defined as:
$C_{total} = C_d + \lambda C_o$
where $C_d$ is the cost associated with the distance to the goal, $C_o$ is the cost associated with proximity to obstacles, and $\lambda$ is a weight that balances the two. This cost function helps determine the costmap used for navigation in ROS [14]. A costmap in ROS is a 2D grid representation of the environment that assigns a cost to each cell based on obstacles, making it useful for path planning and navigation.
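As a small illustration of this cost function, the sketch below evaluates it element-wise over a patch of the costmap, assuming the distance-to-goal and obstacle-proximity costs have already been computed for each cell; the numeric values are hypothetical.

```python
# Minimal sketch: evaluate C_total = C_d + lambda * C_o over a grid of cells.
import numpy as np

def total_cost(dist_to_goal: np.ndarray, obstacle_cost: np.ndarray,
               lam: float = 0.5) -> np.ndarray:
    """Element-wise combined planning cost for each cell of the costmap."""
    return dist_to_goal + lam * obstacle_cost

# Hypothetical 3x3 patch: C_d shrinks toward the goal, C_o grows near an obstacle.
c_d = np.array([[4.0, 3.0, 2.0], [3.0, 2.0, 1.0], [2.0, 1.0, 0.0]])
c_o = np.array([[0.0, 0.5, 1.0], [0.0, 0.0, 0.5], [0.0, 0.0, 0.0]])
print(total_cost(c_d, c_o, lam=0.5))
```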
Experimental Setup
We have developed a Phase 1 prototype of our proposed model that incorporates all the features previously discussed: a Pi camera for patient detection and automatic registration, a microphone for chatbot input, and a 7-inch TFT Raspberry Pi display presenting the interactive user interface. We used the MAX30100 pulse oximeter sensor to measure SpO2 and heart rate, and the MLX90614 non-contact IR sensor to measure temperature. The sensors are powered and driven directly by the Pi, with the camera and microphone inputs going to the Raspberry Pi inside the white box fixed on the chest. We designed and 3D-printed our own models for the head, which houses two 2.4-inch TFT shield displays for the eyes, and for the two arms, each with four degrees of freedom (DOF). The arms are actuated by a combination of MG996R high-torque servo motors and MG90S servo motors. Furthermore, we implemented the SIM800L GSM module for emergency calling and alert messages. The base of the robot is crafted from plywood in a rectangular box-like shape and houses a 12 V, 2 A battery that powers the motors attached below the base, the other circuits, and a motor driver module. Figure 8 depicts the assembled prototype.
Fig. 8. Designed Phase 1 prototype of our proposed model.
Results and Discussion
This section discusses the results achieved while testing our model in real time. These results demonstrate the model's capability to detect and serve an individual in real time. Additionally, we discuss the simulation results for navigation, performed in Gazebo, to show the model's proficiency in autonomous navigation in an environment with obstacles.
A. Sensor Data Collection
The real-time vitals data are gathered by the sensors and sent to the Raspberry Pi for processing. The Pi analyses the data and displays the sensor readings on the LCD screen in a graphical format, allowing real-time monitoring of changes directly on the display. This supports timely analysis of the patient's vital signs. Figures 9 and 10 show the sensor data collected from an individual during testing. The x-axis represents the collection time in seconds, while the y-axis represents the sensor values, i.e., SpO2 and pulse rate.
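A minimal sketch of this live visualization is given below using matplotlib; it assumes a read_spo2() helper standing in for the actual MAX30100 driver call, and the same pattern applies to the pulse-rate plot.

```python
# Minimal sketch: stream SpO2 readings onto a live plot, refreshed once per second.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

times, values = [], []

def read_spo2() -> float:
    return 98.0            # placeholder for the actual MAX30100 driver call

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlabel("Time (s)")
ax.set_ylabel("SpO2 (%)")

def update(frame: int):
    times.append(frame)
    values.append(read_spo2())
    line.set_data(times, values)
    ax.relim(); ax.autoscale_view()
    return line,

ani = FuncAnimation(fig, update, interval=1000)  # keep a reference so the timer runs
plt.show()
```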
Fig. 9. Real-Time Data Visualization for SpO2.
Fig. 10. Real-Time Data Visualization for pulse rate.
B. Graphical Display of Results
The user interface for the interactive display is developed in Python using the tkinter library. The main page (Figure 4) features buttons corresponding to each of the service functions. The Automatic registration button takes the patient to the registration page, where their personal details are collected and compiled into a single report. Figure 11 depicts the patient registration process, while Figure 12 shows the profile created after registration is completed. During registration, the Pi camera captures the patient's photo, and basic information is collected through prompts and stored in the hospital database. We integrated the SIM800L GSM module to enable the emergency call feature, allowing patients to directly contact or alert a doctor in case of emergencies or medical queries. This functionality ensures immediate communication, enhancing patient safety and response time during critical situations. The emergency call page is shown in Figure 13; it contains two buttons, 'Call Doctor' and 'Text Doctor', for calling or sending an SMS alert to a healthcare professional.
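The 'Call Doctor' and 'Text Doctor' actions can be driven with standard SIM800L AT commands over a serial link; a minimal sketch is shown below, where the serial port and the doctor's number are hypothetical placeholders.

```python
# Minimal sketch: voice call and SMS alert through the SIM800L using AT commands.
import time
import serial

DOCTOR_NUMBER = "+10000000000"                      # placeholder number
modem = serial.Serial("/dev/serial0", 9600, timeout=1)  # assumed Pi UART port

def send_at(cmd: str, delay: float = 1.0) -> None:
    modem.write((cmd + "\r\n").encode())
    time.sleep(delay)

def call_doctor() -> None:
    send_at(f"ATD{DOCTOR_NUMBER};", delay=2.0)      # start a voice call

def text_doctor(message: str) -> None:
    send_at("AT+CMGF=1")                            # switch to SMS text mode
    send_at(f'AT+CMGS="{DOCTOR_NUMBER}"')
    modem.write(message.encode() + b"\x1a")         # Ctrl+Z terminates the SMS
    time.sleep(3)

text_doctor("Emergency alert: patient requires assistance.")
```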
Fig. 11. The Patient Registration window.
Fig. 12. Created profile page.
Fig. 13. Emergency alert and calling window.
We developed a Python-based chatbot with pre-programmed questions and responses. A microphone connected to the Raspberry Pi captures voice input, which the model matches against the pre-written questions. The corresponding response is then displayed on the screen, giving the patient real-time feedback. For example, we asked the robot, "Where is the General Medicine Block?", to which it replied as shown in Figure 14(a) below.
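A minimal sketch of this pre-programmed matching is shown below; the question-response pairs here are a small hypothetical subset, not the actual deployed set.

```python
# Minimal sketch: match a transcribed query against pre-written questions.
RESPONSES = {
    "where is the general medicine block":
        "The General Medicine block is on the first floor, to your left.",   # hypothetical reply
    "what are the visiting hours":
        "Visiting hours are from 10 AM to 7 PM.",                            # hypothetical reply
}

def answer(query: str) -> str:
    """Return the stored reply whose key phrase appears in the transcribed query."""
    normalized = query.lower().strip("?! .")
    for question, reply in RESPONSES.items():
        if question in normalized:
            return reply
    return "Sorry, I do not have an answer for that yet. Please ask the reception desk."

print(answer("Where is the General Medicine Block?"))
```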
Fig. 14(a)
Fig. 14(b)
Fig. 14(c)
C. ROS Simulation results
After simulating the robot in Gazebo using the Nav2 package, the robot was first set at an initial position, as shown in Figure 15, and was given a path to follow using the 2D Goal Pose tool available with the Nav2 package in Rviz in order to reach the final position shown in Figure 16. The robot's starting pose was defined by the coordinates $(x_0, y_0, z_0)$ together with its orientation. As the robot moved along the specified path, it adjusted its trajectory based on the LiDAR data. At the final position $(x_f, y_f, z_f)$, we recorded the position, orientation, and the linear and angular velocities of the robot. Measurements were taken at regular intervals as it navigated the path in order to monitor the robot's performance.
The robot followed a smooth and efficient navigation path, avoiding collisions with obstacles and ensuring timely arrival at the destination. Over a period of 40 seconds, we also observed variations in position and orientation that indicated small, controlled movements consistent with precision robotics.
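The same goal can also be sent programmatically with the Nav2 simple commander API, mirroring what the 2D Goal Pose tool does in Rviz; a minimal sketch is given below, with hypothetical goal coordinates.

```python
# Minimal sketch: send a navigation goal to Nav2 from Python.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

rclpy.init()
navigator = BasicNavigator()
navigator.waitUntilNav2Active()         # blocks until the Nav2 stack is up

goal = PoseStamped()
goal.header.frame_id = "map"
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 2.0              # hypothetical target in the map frame
goal.pose.position.y = -1.0
goal.pose.orientation.w = 1.0

navigator.goToPose(goal)
while not navigator.isTaskComplete():
    pass                                # feedback could be logged here at regular intervals

rclpy.shutdown()
```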
Fig. 15. Robot’s initial position in the simulation environment.
Fig. 16. Robot’s final position in the simulation environment.
Over this run, the robot's position changed by a total of 0.040 meters and its quaternion orientation by 0.018, as reported in Table I. Similarly, both the linear and angular velocities increased slightly, corresponding to slow and deliberate movement, with a net change in linear velocity of 0.00040 m/s as reported in Table II. These values reflect realistic scenarios for precision robotic systems in which fine positioning and small changes in position are required, such as robotic medical assistance or operation in delicate environments.
The costmap generated in ROS, shown in Figure 17, is stored using two key file formats: PGM and YAML. The PGM (Portable Graymap) file contains a pixel-based representation of the environment, where black pixels indicate obstacles or non-navigable areas, white pixels represent free space the robot can traverse, and gray pixels signify unknown or unmapped areas. Accompanying this is the YAML file, which provides the metadata needed to interpret the PGM file. This metadata includes the map's resolution (0.05 meters per pixel), the origin of the map relative to the robot's position (at coordinates [-6.95, -1.35, 0]), and the thresholds that distinguish occupied space (above 0.65) from free space (below 0.25), as tabulated in Table III.
Together, these files offer a detailed representation of the robot's environment, enabling accurate navigation and path planning.
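A minimal sketch of consuming this metadata is shown below: it loads the YAML file and converts a PGM pixel index into map-frame coordinates using the resolution and origin listed above. The file name map.yaml is a placeholder.

```python
# Minimal sketch: read the map metadata and convert a pixel index to world coordinates.
import yaml

with open("map.yaml") as f:
    meta = yaml.safe_load(f)
# Expected fields, matching the values reported above:
#   image: map.pgm, resolution: 0.05, origin: [-6.95, -1.35, 0],
#   occupied_thresh: 0.65, free_thresh: 0.25

def pixel_to_world(px: int, py: int, map_height_px: int) -> tuple[float, float]:
    """Convert a PGM pixel (column, row) to map-frame coordinates in meters."""
    res = meta["resolution"]
    ox, oy, _ = meta["origin"]
    x = ox + px * res
    y = oy + (map_height_px - py - 1) * res   # PGM rows grow downward, map y grows upward
    return x, y
```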
Fig. 17. Generated costmap from RViz.
Conclusion
This project successfully integrates several key components to enhance healthcare automation and patient management. The system combines real-time sensor data monitoring, automated patient interaction via a Python-based chatbot, and emergency communication through a GSM module into an integrated platform. This integration simplifies hospital operations and improves patient engagement. By leveraging real-time data visualization, automated communication, and efficient registration processes, the system demonstrates improved efficiency and accuracy in managing patient information and responding to urgent needs. Future prototypes will make the robot autonomous by integrating LiDAR and cameras for autonomous navigation, allowing the robot to perform additional activities such as medicine delivery, on-the-go patient assistance, and precise transport. The addition of speakers will further enrich natural interactive communication with patients. The chatbot system will be upgraded and trained with large language models (LLMs) to improve the robot's ability to handle complex queries and offer more personalized interactions.