Accurate forecasting of agricultural yield is critical for food security and sustainable resource management, yet potato (Solanum tuberosum L.) poses unique challenges due to nonlinear dependencies among spatial descriptors, structural indices, and regional heterogeneity. This study addresses these limitations by systematically evaluating advanced machine learning (ML) models for potato yield prediction. A multi-regional dataset incorporating Area, Perimeter, Hectares, IDP, IDD, categorical classifications, and yield values was preprocessed through imputation, encoding, normalization, and correlation-based redundancy reduction. Seven ML architectures were implemented: quantum temporal models (QTMs), evolutionary temporal networks (ETNs), dual-path recurrent neural networks (DPRNNs), variable attention span transformers (VASTs), meta-learning for time series forecasting (MLTSF), continuous-time sequence models (CTSMs), and spatio-temporal graph convolutional networks (STGCNs). Performance was assessed using mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), coefficient of determination ($R^2$), relative RMSE (RRMSE), Nash–Sutcliffe efficiency (NSE), Willmott Index (WI), and fitting time. The results showed that QTMs achieved near-perfect accuracy (MSE = $1.13 \times 10^{-5}$, $R^2 = 0.9998$, NSE = 0.9987, WI = 0.9979) with minimal runtime (0.003 s), while ETNs also performed strongly ($R^2 = 0.9960$). DPRNNs offered good accuracy ($R^2 = 0.9725$) but at a higher computational cost, whereas VASTs and STGCNs delivered moderate performance, and MLTSF and CTSMs underperformed. Overall, quantum-inspired and evolutionary temporal models proved most effective, combining predictive precision with efficiency, and they offer strong potential for integration into intelligent decision-support systems to advance sustainable agricultural planning.
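The evaluation metrics named above (MSE, RMSE, MAE, $R^2$, RRMSE, NSE, WI) follow standard definitions; a minimal sketch of how they can be computed, assuming NumPy and paired arrays of observed and predicted yields (the function name is illustrative, not from the study):

```python
import numpy as np

def yield_metrics(obs, pred):
    """Standard error and agreement metrics for yield prediction (illustrative)."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    err = pred - obs
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    o_bar = obs.mean()
    ss_tot = np.sum((obs - o_bar) ** 2)
    # R^2 as 1 - SSE/SST; some studies instead report squared Pearson correlation.
    r2 = 1.0 - np.sum(err ** 2) / ss_tot
    # Relative RMSE, normalized by the mean observed yield.
    rrmse = rmse / o_bar
    # Nash-Sutcliffe efficiency: 1 is a perfect match, <= 0 is no better than the mean.
    nse = 1.0 - np.sum(err ** 2) / ss_tot
    # Willmott Index of agreement, bounded in [0, 1].
    wi = 1.0 - np.sum(err ** 2) / np.sum(
        (np.abs(pred - o_bar) + np.abs(obs - o_bar)) ** 2
    )
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2,
            "RRMSE": rrmse, "NSE": nse, "WI": wi}
```

In this formulation $R^2$ and NSE coincide; they differ only when $R^2$ is computed as a squared correlation coefficient.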