
23 September 2019

Smart Surgery: infection control and 3D environments 

Researchers at the School of Biomedical Engineering & Imaging Sciences are exploring the many clinical applications of 3D cameras.

New Scientist Live 2019

Consider the capabilities of the 3D camera – a small device that measures the distance to surfaces to build up a 3D model of the world.

With this technology, we can know not only where people are within an environment but also exactly which surfaces their hands are touching.

Why the hands? Researchers at the School of Biomedical Engineering & Imaging Sciences are exploring the capabilities of 3D cameras for monitoring in hospitals, with infection control as one potential application.

It is an intriguing concept. As soon as someone, say a hospital patient, interacts with a surface within any environment, this will be recorded. As more data is captured and more people are tracked, we will see what people are touching, how they are moving around, and follow their touch-filled journey. 

“Basically we will have this heat map of where people have been interacting,” Dr Phil Noonan, a researcher in the School, said.

Because the time at which someone touched a surface can be recorded, an infection control pathway could be followed.

“If someone came into the hospital to visit a friend or relative and they didn’t know they had something like measles, we would be able to go back and see how and where they interacted, and know how best to start the process of contacting other people who were also in the same area,” Dr Noonan said.
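To make the idea concrete, here is a minimal sketch of how such a touch log and retrospective look-up might work. It assumes timestamped touch events (who touched which surface, and when) have already been extracted from the camera’s hand tracking; the names and data are illustrative, not the team’s actual system.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative touch events: who touched which surface, and when.
# In practice these would come from the 3D camera's hand tracking.
touch_log = [
    ("visitor_42", "ward3_door_handle", datetime(2019, 9, 23, 10, 5)),
    ("nurse_07",   "ward3_door_handle", datetime(2019, 9, 23, 10, 20)),
    ("patient_15", "bedside_rail_3",    datetime(2019, 9, 23, 11, 0)),
]

def contacts_after(person, window=timedelta(hours=48)):
    """People who touched the same surfaces as `person`, within `window` afterwards."""
    touched = [(surface, t) for p, surface, t in touch_log if p == person]
    exposed = defaultdict(list)
    for surface, t0 in touched:
        for p, s, t in touch_log:
            if p != person and s == surface and t0 <= t <= t0 + window:
                exposed[p].append((s, t))
    return dict(exposed)

# e.g. everyone who touched a surface after a visitor later found to have measles
print(contacts_after("visitor_42"))
```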

The need for this is perhaps relatively obvious: hospital staff can have a much better understanding of infection control on their wards – where the most bacteria are, where they originated, and what the plan of action is, for instance whether a certain area should be deep cleaned.

This is just one clinical application of the 3D camera, and the project is still in development. But it points to a plethora of other healthcare applications for the technology.

Track & Flow 

Just as we can recognise and detect objects in an environment, deep learning models can be trained to do the same.

Once you know where people are in an image, you can start looking at what is happening at the pixel level. Thanks to deep learning, researchers can detect very small changes in the values of individual pixels and, for instance, infer a breathing signal from them. They also use algorithms such as optical flow, which track pixels as they move around, and feed that information in as well.
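As a rough illustration of the optical flow step, the sketch below uses OpenCV’s dense Farnebäck optical flow on an ordinary video stream. This is a generic example of the technique, not the group’s own pipeline, and the camera index and parameters are placeholders.

```python
import cv2

cap = cv2.VideoCapture(0)          # any video source; a depth camera's colour stream also works
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow: one 2D displacement vector per pixel between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Small, slow, periodic motion over the chest shows up in this per-frame magnitude
    print("mean pixel motion:", float(magnitude.mean()))
    prev_gray = gray
```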

Essentially, with 3D cameras you can monitor how the 3D shape of a person changes over time to detect breathing rates and other useful signals of a patient’s wellbeing.
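A minimal sketch of that idea, assuming a stack of depth frames and a known chest region of interest (e.g. from the body-part tracking described below); the breathing rate is read off as the dominant frequency of the average chest depth over time. The function name and inputs are illustrative.

```python
import numpy as np

def breathing_rate_bpm(depth_frames, chest_roi, fps=30):
    """Estimate breaths per minute from how the chest surface moves in depth.

    depth_frames: array of shape (T, H, W), depth in metres over T frames
    chest_roi:    (y0, y1, x0, x1) box over the chest, e.g. from body-part tracking
    """
    y0, y1, x0, x1 = chest_roi
    # One number per frame: average distance of the chest surface from the camera
    signal = depth_frames[:, y0:y1, x0:x1].mean(axis=(1, 2))
    signal = signal - signal.mean()                       # remove the static offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)     # frequencies in Hz
    band = (freqs > 0.1) & (freqs < 1.0)                  # plausible breathing: ~6-60 breaths/min
    dominant_hz = freqs[band][np.argmax(spectrum[band])]
    return dominant_hz * 60.0
```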

The cameras can monitor larger regions such as the surface of the chest or the head. A trained AI model is able to detect the locations of body parts, shown as a stick-figure skeleton, while a blur of colours on the screen indicates the direction and speed of motion.
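The stick-figure skeleton comes from a pose-estimation model. The article does not say which model the team uses, so purely as an illustration, here is how an off-the-shelf library such as MediaPipe can overlay a skeleton on a live camera feed:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # The model predicts joint locations (shoulders, elbows, hips, ...)
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            mp_draw.draw_landmarks(frame, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
        cv2.imshow("stick-figure skeleton", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # press Esc to quit
            break
cap.release()
```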

“With this information we can tease out breathing signals, detect apnea events (when a person, often while asleep, temporarily stops breathing), monitor when and how people move around, when we touch objects, and then go on to touch other objects,” Dr Noonan said.

“For example, if you have a 24-hour breathing monitoring system, you could detect when patients are not breathing. Currently, even in the intensive care unit, nurses only check breathing manually, every fifteen minutes, by looking at the patient’s chest.”

“So if a patient has an apnea event when no one is around to see it, this can be completely missed – and of course nurses cannot stay by every bed and watch each patient breathing.”

“This will hopefully just be another tool to monitor the wellbeing and health of the patient,” Dr Noonan said. Dr Noonan has also worked on projects using head tracking during brain imaging.

Usually, during a brain scan the patient needs to stay still for a long time. For those who have difficulty doing so – the young, the elderly, or people with conditions that prevent them from keeping still – the scanned image will of course be blurry. If it is known how the patient was moving during the scan, the motion can be corrected afterwards.
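A toy 2D sketch of that retrospective correction, assuming the head tracker reports a displacement for each frame relative to the first one. Real scanner motion correction works with full 3D rigid poses inside the image reconstruction, so this only illustrates the principle.

```python
import numpy as np
from scipy.ndimage import shift

def motion_corrected_average(frames, tracked_offsets):
    """Undo per-frame motion before averaging.

    frames:          list of 2D image arrays acquired over time
    tracked_offsets: list of (dy, dx) displacements reported by the optical
                     tracker for each frame, relative to frame 0
    """
    aligned = [shift(f, (-dy, -dx), order=1)
               for f, (dy, dx) in zip(frames, tracked_offsets)]
    return np.mean(aligned, axis=0)   # a sharp average instead of a motion-blurred one
```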

Dr Noonan’s work is connected with EpiNav, where his optical tracking technology analyses gait for epilepsy patients who have seizures. At the moment, patients are recorded for two weeks at a time on the ward, and if they have a seizure a clinician will go through the recording and identify which part of the body moved and when.

“These assessments can be quite biased and subjective, and often take a lot of time for the clinician, so the idea would be: can we automate this, or at least have a record and say, OK, this is an unbiased representation of the motion that we’re getting,” Dr Noonan said.

Another application currently in development is for movement disorders. Dr Noonan says the technology can be applied to monitor patients and to develop a model that can recognise and distinguish multiple types of upper-body movement disorder, such as fine tremors.

This would involve placing multiple cameras around the patient, who would perform certain tasks while being tracked, and training an action recognition model on those recordings – ultimately answering whether this could be a good way to assist the diagnosis or care pathway for monitoring these patients.
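The article does not describe the model itself, so as an illustration only, one very simple form such a system could take is to summarise a tracked wrist trajectory as tremor features and fit a standard classifier. All data below is synthetic stand-in data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def tremor_features(wrist_xyz, fps=30):
    """Summarise a tracked wrist trajectory of shape (T, 3) as simple tremor statistics."""
    speed = np.linalg.norm(np.diff(wrist_xyz, axis=0), axis=1) * fps
    spectrum = np.abs(np.fft.rfft(speed - speed.mean()))
    freqs = np.fft.rfftfreq(len(speed), d=1.0 / fps)
    dominant_hz = freqs[np.argmax(spectrum[1:]) + 1]       # skip the zero-frequency bin
    return [dominant_hz, speed.std(), speed.mean()]

# Stand-in data: in reality each trajectory would come from the multi-camera tracking
# and each label from a clinician's assessment of the movement disorder.
rng = np.random.default_rng(0)
trajectories = [rng.normal(size=(300, 3)).cumsum(axis=0) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.array([tremor_features(t) for t in trajectories])
clf = RandomForestClassifier(n_estimators=50).fit(X, labels)
```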

The future seems limitless, especially considering all the clinical applications of a device – the 3D camera – that many people commonly associate with gaming.

“Modern surgery is a highly technical field, and people are always striving to make the process safer,” Dr Noonan said.

“There are many ways to do this, and this technology is one part of an overall scheme to monitor, record, and give feedback to surgeons.”