Official ESCRS | European Society of Cataract & Refractive Surgeons
Vienna 2018


Surgical tool detection in cataract surgery


Session Details

Session Title: Anterior Segment Imaging II

Session Date/Time: Tuesday 25/09/2018 | 08:00-10:30

Paper Time: 10:14

Venue: Room A4

First Author: B. Cochener FRANCE

Co Author(s): H. Alhajj, M. Lamard, G. Quellec

Abstract Details

Purpose:

The main objective of our research is to develop methods for real-time analysis of surgical videos during cataract surgery, generating intraoperative alerts and recommendations. This abstract presents an original method for detecting surgical tools based on artificial intelligence (deep learning), which is the first step towards that goal.

Setting:

1- Service d'Ophtalmologie, CHRU Brest, Brest, F-29200 France 2- Univ Bretagne Occidentale, Brest, F-29200 France 3- Inserm, LaTIM UMR 1101, Brest, F-29200 France

Methods:

A video database was built for this work, comprising 50 cataract surgeries performed at Brest University Hospital. Each surgery was recorded as two videos: one of the microscope view and one of the operating table. Merging these two video streams should allow better detection of the presence of tools in the eye. Our algorithm uses two of the most powerful convolutional neural networks currently available: Inception-ResNetV2 and NASNet.
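The abstract does not specify how the two streams are combined, so the sketch below is only an illustration of one plausible late-fusion scheme: each network head outputs per-tool logits for a frame, and a tool is flagged present if either stream gives it a high probability. The tool names, the sigmoid multi-label head, and the max-fusion rule are all assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical tool labels; the actual annotation scheme is not given in the abstract.
TOOLS = ["forceps", "phaco handpiece", "irrigation cannula"]

def sigmoid(x):
    # Standard logistic function, mapping raw logits to probabilities in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def detect_tools(microscope_logits, table_logits, threshold=0.5):
    """Fuse per-frame, per-tool logits from the two video streams.

    Each argument is an array of shape (n_tools,) of raw CNN outputs
    (e.g. from an Inception-ResNetV2 or NASNet classification head).
    As a simple illustration, the per-tool maximum probability across
    the two streams is compared against a fixed threshold.
    """
    p_micro = sigmoid(np.asarray(microscope_logits, dtype=float))
    p_table = sigmoid(np.asarray(table_logits, dtype=float))
    fused = np.maximum(p_micro, p_table)
    return {tool: bool(p > threshold) for tool, p in zip(TOOLS, fused)}
```

For example, a tool seen confidently only in the table stream would still be reported as present under this fusion rule; a real system would likely learn the fusion weights rather than take a fixed maximum.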

Results:

The 100 videos were independently annotated by two experts, and their consensus served as ground truth. The database was divided into two parts, one for training and one for testing. Training took 5 epochs and 48 hours for each type of video. The area under the ROC curve (AUC) was used to measure the performance of the algorithm. A mean AUC of 0.72 was obtained for the tools on the table, and 0.96 for the tools in the microscope videos.
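The reported metric, area under the ROC curve, has a useful probabilistic reading: it equals the probability that a randomly chosen positive frame receives a higher score than a randomly chosen negative one. A minimal sketch of that computation (the Mann-Whitney form, not the authors' evaluation code):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: iterable of 0/1 ground-truth values (tool absent/present).
    scores: iterable of detector scores, higher meaning more confident.
    Returns the probability that a positive sample outranks a negative
    one, counting ties as 0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative samples")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.96 (microscope stream) thus means a detected tool outranks a non-tool frame 96% of the time, while 0.72 (table stream) leaves much more overlap between the two score distributions.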

Conclusions:

The results on the microscope stream are almost perfect. Those on the operating table are not as good, most likely because of the large number of tools on the table and their small size. New developments using larger input images are being investigated. Nevertheless, the table videos contain additional information and should allow an overall improvement of our tool. This opens the door to fine-grained recognition of the surgical gesture and to new applications such as guided learning, or even training on surgical simulators.

Financial Disclosure:

-
