Postoperative recovery is a crucial phase in ensuring successful rehabilitation. However, many healthcare facilities face challenges due to the limited availability of medical personnel, which makes routine patient monitoring difficult. This limitation can delay the early detection of complications and reduce overall recovery effectiveness. To address this issue, this study proposes a non-invasive, radar-based system for remote postoperative patient monitoring. The proposed system uses the IWR6843AOP mmWave radar to generate 3D point cloud data that spatially represents patient movements. This approach enables continuous monitoring without compromising patient privacy, allowing healthcare providers to deliver more efficient care. The collected data undergoes preprocessing, including normalization, labeling, and dataset splitting, before being classified with six deep learning models: 3D CNN, 3D CNN+LSTM, 3D CNN+Bi-LSTM, PointNet, PointNet++, and RNN. The dataset comprises six activity categories: empty space, sitting, standing, walking, running, and squatting, recorded at a frame rate of 18.18 Hz. Experimental results show that the 3D CNN combined with Bi-LSTM achieves the highest accuracy, 90%, outperforming PointNet and RNN. These findings indicate that a radar-based, deep learning-driven approach offers an accurate, efficient, and non-intrusive solution for postoperative monitoring, reducing the need for direct medical supervision. This technology has significant potential for broader healthcare applications, contributing to more advanced, accessible, and technology-driven patient monitoring systems. By integrating artificial intelligence and radar sensing, this research paves the way for innovative solutions in modern healthcare, supporting better postoperative outcomes while optimizing medical resources.
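
To make the best-performing architecture named above more concrete, the sketch below shows one plausible way to combine a per-frame 3D CNN with a bidirectional LSTM for classifying sequences of voxelized radar point clouds into the six activity categories. This is a minimal PyTorch sketch under stated assumptions: the class name `CNN3D_BiLSTM`, the 16×16×16 occupancy-grid input, the layer widths, and the 128 hidden units are illustrative choices, not the configuration reported in the study.

```python
import torch
import torch.nn as nn


class CNN3D_BiLSTM(nn.Module):
    """Illustrative 3D CNN + Bi-LSTM classifier for voxelized radar point-cloud sequences.

    Input shape: (batch, time, 1, D, H, W) occupancy grids, one per radar frame.
    Grid size and layer widths are assumptions for this sketch, not the paper's values.
    """

    def __init__(self, num_classes: int = 6, lstm_hidden: int = 128):
        super().__init__()
        # Per-frame 3D convolutional feature extractor
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # -> (batch*time, 32, 1, 1, 1)
        )
        # Bidirectional LSTM over the sequence of per-frame features
        self.lstm = nn.LSTM(32, lstm_hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)  # (b*t, 32)
        feats = feats.view(b, t, -1)                  # (b, t, 32)
        out, _ = self.lstm(feats)                     # (b, t, 2*hidden)
        return self.fc(out[:, -1])                    # logits over the six activities


if __name__ == "__main__":
    # Example: 2 clips of 16 frames, each frame voxelized onto a 16x16x16 grid
    clips = torch.rand(2, 16, 1, 16, 16, 16)
    model = CNN3D_BiLSTM()
    print(model(clips).shape)  # torch.Size([2, 6])
```

The design choice this sketch reflects is the division of labor implied by the abstract: the 3D convolutions capture the spatial structure of each point-cloud frame, while the bidirectional LSTM aggregates temporal context across frames in both directions before the final 6-way classification.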