Abstract
<jats:p>This paper introduces an FPGA-based multi-objective deep neural network (DNN) surrogate model for real-time optimization of mmWave multiple-input multiple-output (MIMO) antennas. Unlike conventional surrogate models, which predict only a single performance parameter, the proposed approach simultaneously learns the complex nonlinear relationships among the reflection coefficient (S11), mutual coupling (S12), realized gain, and envelope correlation coefficient (ECC) using a set of multi-output DNNs trained on full-wave simulation data from the IEEE_LAT_AM-T-9776 repository. The surrogate model is deployed on a Xilinx Zynq UltraScale+ FPGA with 16-bit fixed-point quantization, reducing inference time to under 1 ms and making iterative design adaptation feasible in real-time applications. Detailed performance tables show that the framework outperforms representative state-of-the-art methods in both electromagnetic performance and computational efficiency. The proposed design achieves an S11 of -35.2 dB, isolation exceeding 27 dB (S12 below -27 dB), a gain of 8.7 dBi, and an ECC below 0.001, while reducing design-evaluation time by three orders of magnitude relative to full-wave-based optimization. These results demonstrate the feasibility of embedding hardware-accelerated surrogate learning in next-generation antenna synthesis, a valuable step toward intelligent antenna design for 5G and future wireless systems.</jats:p>
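The core idea described above can be illustrated with a minimal sketch: a multi-output MLP surrogate mapping antenna geometry parameters to the four electromagnetic metrics (S11, S12, gain, ECC), with weights quantized to 16-bit fixed point before inference, as would precede FPGA deployment. This is not the authors' implementation; the layer sizes, the number of geometry parameters, and the Q8.8 fixed-point format are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's code): a multi-output MLP
# surrogate that predicts four EM metrics from a geometry vector, evaluated in
# floating point and with 16-bit fixed-point (Q8.8, assumed format) weights.
import numpy as np

FRAC_BITS = 8  # Q8.8: 16-bit signed fixed point with 8 fractional bits (assumed)

def quantize_q8_8(w):
    """Round an array to 16-bit fixed point, then return dequantized floats."""
    q = np.clip(np.round(w * 2**FRAC_BITS), -2**15, 2**15 - 1).astype(np.int16)
    return q.astype(np.float64) / 2**FRAC_BITS

def surrogate_forward(x, params):
    """Two-layer multi-output MLP: geometry -> [S11, S12, gain, ECC]."""
    W1, b1, W2, b2 = params
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                # linear 4-way output head

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(1, 6))  # 6 geometry parameters (illustrative)
params = [rng.normal(0.0, 0.3, (6, 32)), np.zeros(32),
          rng.normal(0.0, 0.3, (32, 4)), np.zeros(4)]

y_float = surrogate_forward(x, params)                           # full precision
y_fixed = surrogate_forward(x, [quantize_q8_8(p) for p in params])  # quantized
print("surrogate outputs:", y_float.ravel())
print("max quantization error:", np.abs(y_float - y_fixed).max())
```

A single forward pass like this is what the FPGA evaluates per candidate design; the three-orders-of-magnitude speedup reported in the abstract comes from replacing a full-wave simulation with such an inference inside the optimization loop.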