The integration of neural networks into FPGA-based systems has revolutionized embedded computing by offering high performance, energy efficiency, and reconfigurability. This paper introduces a novel optimization framework that integrates Principal Component Analysis (PCA) to reduce the complexity of input data while preserving the features essential for accurate neural network processing. By applying PCA for dimensionality reduction, the computational burden on the FPGA is minimized, enabling more efficient utilization of hardware resources. Combined with hardware-aware optimizations such as quantization and parallel processing, the proposed approach achieves superior performance in terms of energy efficiency, latency, and resource utilization. Simulation results demonstrate that the PCA-enhanced Liquid Neural Network (LNN) deployment significantly outperforms traditional methods, making it well suited to edge intelligence and other resource-constrained environments. This work emphasizes the synergy of PCA and FPGA optimizations for scalable, high-performance embedded systems. A simulation-based comparison between the cascaded feed-forward neural network (CFFNN), deep neural network (DNN), and liquid neural network (LNN) is presented for the embedded system to demonstrate the efficacy of the PCA-based LNN. Case studies show that the proposed methodology achieves an average F1-score of 98% and an accuracy of 98.3% at high clock frequencies.
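The PCA preprocessing step described above can be sketched as follows. This is a minimal NumPy-only illustration of dimensionality reduction via the singular value decomposition; the data shapes and the component count `k` are illustrative assumptions, not values taken from the paper, and the actual FPGA pipeline would operate on quantized fixed-point data.

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples X (n_samples x n_features) onto the top-k
    principal components, returning the reduced data and the
    fraction of variance retained."""
    X_centered = X - X.mean(axis=0)              # center each feature
    # SVD of the centered data; rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:k]                          # top-k directions
    Z = X_centered @ components.T                # reduced representation
    explained = (S[:k] ** 2).sum() / (S ** 2).sum()
    return Z, explained

# Hypothetical example: shrink 32 input features to 8 before they
# reach the network, cutting multiply-accumulate work on the FPGA.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                   # synthetic input samples
Z, var_kept = pca_reduce(X, k=8)
print(Z.shape)                                   # (200, 8)
```

Reducing the input dimensionality this way shrinks the first-layer weight matrix proportionally, which is what lowers the hardware resource and latency cost on the FPGA.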