The era of accepting a large surgical scar is long gone. Since their introduction, minimally invasive procedures have become the prevailing norm in modern medical practice. When these procedures involve vascular structures, they are known as endovascular procedures.
However, endovascular procedures such as stenting and angioplasty face plenty of challenges due to a lack of high-resolution sensory feedback. Atherosclerotic stenosis is a condition in which plaque builds up inside a blood vessel and blocks blood flow; it can cause severe complications, including ischemic stroke. To correct the stenosis, a stent – a metal wire mesh – is deployed through the blood vessels during these endovascular procedures.
The stent's job begins once it has been fully deployed into the vessel: it widens the blocked blood vessel and restores blood flow. However, while performing this procedure, surgeons find it difficult to confirm whether the stent has been fully deployed, because they lack robust sensory feedback and a direct line of sight. And needless to say, if the stent is improperly deployed or suffers a crimp, complications such as clot embolism, restenosis, or thrombosis are likely to occur.
Medical imaging techniques currently in use, such as fluoroscopy and X-ray imaging, help surgeons determine the state of the deployed stent, but both expose patients to high doses of radiation and come with further limitations. Fluoroscopy has difficulty detecting crimps or compressions in the centre of the stent, while X-ray imaging captures the stent from only a limited range of angles and does not allow surgeons to visualize it completely.
Thus, Mengya Xu, Lalithkumar Seenivasan, Leonard Leong Litt Yeo and Hongliang Ren have recently published a paper, “Stent Deployment Detection Using Radio Frequency‐Based Sensor and Convolutional Neural Networks”, that aims to overcome the limitations of current techniques for determining the deployed stent's state.
Their team proposes using a 3D radio frequency (RF)-based imaging sensor together with StentNet, a deep neural network (DNN), to obtain the required sensory feedback. The sensor not only detects the deployed stent's state but also constructs a 3D mapping of the radiated space, while the DNN classifies the state captured by the sensor.
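To give a feel for this kind of pipeline, here is a minimal sketch of a 1D convolutional classifier operating on a reflected-signal trace. This is purely illustrative: the layer sizes, untrained random weights, and class labels are assumptions for demonstration, not the actual StentNet architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(signal, kernels):
    """Valid 1D convolution of a signal with a bank of filter kernels."""
    k = kernels.shape[1]
    out_len = signal.shape[0] - k + 1
    out = np.empty((kernels.shape[0], out_len))
    for i, kern in enumerate(kernels):
        for t in range(out_len):
            out[i, t] = np.dot(signal[t:t + k], kern)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def global_max_pool(feature_maps):
    """Collapse each feature map to its strongest response."""
    return feature_maps.max(axis=1)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical stent states the classifier distinguishes (illustrative).
CLASSES = ["fully deployed", "partially deployed", "crimped"]

# Randomly initialised (untrained) parameters, purely for illustration.
kernels = rng.normal(size=(8, 5))      # 8 filters of width 5
fc_weights = rng.normal(size=(3, 8))   # pooled features -> 3 class logits
fc_bias = np.zeros(3)

def classify(signal):
    """Conv -> ReLU -> global max pool -> linear -> softmax."""
    features = global_max_pool(relu(conv1d(signal, kernels)))
    probs = softmax(fc_weights @ features + fc_bias)
    return CLASSES[int(np.argmax(probs))], probs

# A fake 64-sample reflected-signal trace standing in for sensor input.
signal = rng.normal(size=64)
label, probs = classify(signal)
print(label, probs.round(3))
```

In the real system, the convolutional filters would be trained on labelled sensor recordings so that changes in the reflected signal map to the correct deployment state; here the forward pass simply shows the shape of that computation.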
The DNN detects changes in the reflected signals obtained from the sensor and uses them to classify the stent's state. Although the team initially used a cardboard box to occlude the sensor's direct line of sight during their experiment, they later replaced it with a slice of pork or chicken to simulate human tissue, and they achieved an overall accuracy of 90%. These results from this phenomenal team will hopefully serve as a stepping stone to further advancements in medical science!