TY - JOUR
T1 - Dense recurrent neural networks for accelerated MRI
T2 - History-cognizant unrolling of optimization algorithms
AU - Hosseini, Seyed Amir Hossein
AU - Yaman, Burhaneddin
AU - Moeller, Steen
AU - Hong, Mingyi
AU - Akcakaya, Mehmet
N1 - Publisher Copyright:
© 2007-2012 IEEE.
PY - 2020/10
Y1 - 2020/10
N2 - Inverse problems for accelerated MRI typically incorporate domain-specific knowledge about the forward encoding operator in a regularized reconstruction framework. Recently, physics-driven deep learning (DL) methods have been proposed to use neural networks for data-driven regularization. These methods unroll iterative optimization algorithms to solve the inverse problem objective function, alternating between domain-specific data consistency and data-driven regularization via neural networks. The whole unrolled network is then trained end-to-end to learn the parameters of the network. Due to the simplicity of data consistency updates with gradient descent steps, proximal gradient descent (PGD) is a common approach to unroll physics-driven DL reconstruction methods. However, PGD methods have slow convergence rates, necessitating a higher number of unrolled iterations, which leads to memory issues during training and slower reconstruction times during testing. Inspired by efficient variants of PGD methods that use a history of the previous iterates, in this article we propose a history-cognizant unrolling of the optimization algorithm with dense connections across iterations for improved performance. In our approach, the gradient descent steps are calculated at a trainable combination of the outputs of all the previous regularization units. We also apply this idea to unrolling variable splitting methods with quadratic relaxation. Our results on reconstruction of the fastMRI knee dataset show that the proposed history-cognizant approach reduces residual aliasing artifacts compared to its conventional unrolled counterpart, without requiring extra computational power or increasing reconstruction time.
AB - Inverse problems for accelerated MRI typically incorporate domain-specific knowledge about the forward encoding operator in a regularized reconstruction framework. Recently, physics-driven deep learning (DL) methods have been proposed to use neural networks for data-driven regularization. These methods unroll iterative optimization algorithms to solve the inverse problem objective function, alternating between domain-specific data consistency and data-driven regularization via neural networks. The whole unrolled network is then trained end-to-end to learn the parameters of the network. Due to the simplicity of data consistency updates with gradient descent steps, proximal gradient descent (PGD) is a common approach to unroll physics-driven DL reconstruction methods. However, PGD methods have slow convergence rates, necessitating a higher number of unrolled iterations, which leads to memory issues during training and slower reconstruction times during testing. Inspired by efficient variants of PGD methods that use a history of the previous iterates, in this article we propose a history-cognizant unrolling of the optimization algorithm with dense connections across iterations for improved performance. In our approach, the gradient descent steps are calculated at a trainable combination of the outputs of all the previous regularization units. We also apply this idea to unrolling variable splitting methods with quadratic relaxation. Our results on reconstruction of the fastMRI knee dataset show that the proposed history-cognizant approach reduces residual aliasing artifacts compared to its conventional unrolled counterpart, without requiring extra computational power or increasing reconstruction time.
KW - Inverse problems
KW - MRI reconstruction
KW - neural networks
KW - physics-driven deep learning
KW - recurrent neural networks
KW - unrolled optimization algorithms
UR - http://www.scopus.com/inward/record.url?scp=85087308284&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85087308284&partnerID=8YFLogxK
U2 - 10.1109/JSTSP.2020.3003170
DO - 10.1109/JSTSP.2020.3003170
M3 - Article
AN - SCOPUS:85087308284
SN - 1932-4553
VL - 14
SP - 1280
EP - 1291
JO - IEEE Journal on Selected Topics in Signal Processing
JF - IEEE Journal on Selected Topics in Signal Processing
IS - 6
M1 - 9119770
ER -