The augmented reality of healthcare

Augmented Reality (AR) is about integrating digital information with the real world. Rob Walker, a writer for the New York Times, explains it as taking the concept of Virtual Reality and turning it inside out:

“Instead of plunging us into a completely digital environment, augmented reality means placing digital things into the regular world. Those things might be bits of information or renderings of imaginary objects. And they, of course, aren’t really in the real world at all – they just appear to be there if you filter your gaze through the proper screen.”

As with VR, the technology holds great future possibilities for the healthcare sector, for both patients and healthcare staff.

Visual impairment

For people with visual impairment, AR could be life-changing. OrCam has developed a device, consisting of a camera and sensors, that can be attached to any pair of glasses. The camera continuously scans the user’s field of view and communicates what it sees via a small earpiece. OrCam also uses machine learning, a kind of Artificial Intelligence (AI), to enable the device to recognize and remember objects, such as people and products, supporting the user in interacting with the environment.

Yonatan Wexler, head of research and development at OrCam, describes how it works in an article:

“Move your finger along a phone bill, and the device will read the lines, letting you figure out who it is from and the amount due.”

“The device will tell you when your friend is approaching you. It takes about ten seconds to teach the device to recognize a person; all it takes is having that person look at you and then stating their name.”

The goal is for the device to gain more complex functions, such as remembering places the user has visited or identifying colours.
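Pieced together from the article, the device’s loop looks like: continuously scan the field of view, recognize objects it has been taught, and announce them through the earpiece. The sketch below illustrates that loop; the recognize and speak functions and the taught labels are hypothetical stand-ins, not OrCam’s proprietary software.

```python
import time

# A tiny in-memory "memory" of taught objects, standing in for the
# machine-learned model the article describes.
known_labels = {
    "face:alice": "Alice is approaching",
    "product:cereal": "Corn flakes, 500 grams",
}

def recognize(frame):
    """Hypothetical recognizer: return the taught objects visible in a frame."""
    return [key for key in known_labels if key in frame]  # toy string matching

def speak(text):
    """Hypothetical earpiece output; a real device would synthesize audio."""
    print("[earpiece]", text)

def scan_loop(frames):
    announced = set()
    for frame in frames:                 # stands in for a live camera stream
        for key in recognize(frame):
            if key not in announced:     # announce each object only once
                speak(known_labels[key])
                announced.add(key)
        time.sleep(0.1)                  # pacing between camera frames

scan_loop(["empty hallway", "face:alice enters", "face:alice, product:cereal"])
```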

Intel is also researching how AR can augment our senses. They currently have a prototype in development that helps people with visual impairment, as well as people who are completely blind, get a better idea of their surroundings. The sensor system, called the “environmental sensing system”, is integrated into clothing and uses special 3D camera technology and sensors.

The technology consists of three lenses: a normal camera lens, an infrared camera and an infrared laser projector. The infrared components let the device sense the distance to objects and separate objects from each other and from the background, which makes it easier for the device to recognize objects, facial expressions and gestures.
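The payoff of sensing depth, rather than colour alone, is that nearby objects separate cleanly from the background with a simple distance threshold. A minimal sketch with a synthetic depth map (a real system would read one from the 3D camera):

```python
import numpy as np

# Synthetic 6x8 depth map in metres: a near object against a far background.
depth = np.full((6, 8), 4.0)     # background roughly 4 m away
depth[2:5, 3:6] = 0.8            # an object about 0.8 m in front of the user

near_mask = depth < 2.0          # everything closer than 2 m is "foreground"

print("object occupies", int(near_mask.sum()), "pixels")
print("closest point:", depth[near_mask].min(), "m")
```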

Haptic signals, a soft temporary pressure, are sent to the user to signal changes in the environment. The intensity of the signals is proportional to how close the user is to an object: a nearby object is marked with a distinct vibration, while the intensity decreases at longer distances. Darryl Adams, project leader at Intel, has a visual impairment and has tested the sensor system. In an article, he says:

“For me, there is tremendous value in the ability to recognize when change occurs in my periphery. If I am standing still and I feel a vibration, I am instantly able to turn in the general direction to see what has changed. This would typically be somebody approaching me, so in this case I can greet them, or at least acknowledge they are there.”
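The mapping the article describes is simple: vibration strength grows as distance shrinks. A minimal sketch, assuming a 3-metre sensing range and a linear falloff (the article specifies neither):

```python
MAX_RANGE_M = 3.0  # assumed sensing range; not stated in the article

def vibration_intensity(distance_m: float) -> float:
    """Return a 0.0-1.0 vibration strength for an object at distance_m."""
    if distance_m >= MAX_RANGE_M:
        return 0.0                          # out of range: no pulse at all
    return 1.0 - distance_m / MAX_RANGE_M   # 1.0 at contact, 0.0 at the edge

for d in (0.2, 1.0, 2.5, 4.0):
    print(f"{d:.1f} m -> intensity {vibration_intensity(d):.2f}")
```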

Neuroscientist Stephen Hicks at Oxford University is developing smart glasses that use AR and three-dimensional cameras to increase depth perception and thus help users see better. The technology senses the structure and position of objects located near the user. The information is then processed to highlight what is relevant and block out what is not. Hicks explains how it works:

“When you go blind, you generally have some sight remaining – and using a combination of cameras and a see-through display, we’re able to enhance nearby objects to make them easier to see for obstacle avoidance and also facial recognition”

“We turn [the image] into a high-contrast cartoon that we then present on the inside of a see-through pair of glasses. We can then add the person’s normal vision to the enhanced view, and allow the person to use their remaining sight as they generally would do to see the world in a better way.”
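The “high-contrast cartoon” can be approximated with ordinary edge detection: find the outlines of nearby structure and brighten them on top of the wearer’s view. The sketch below uses OpenCV’s Canny detector on a synthetic frame as a stand-in; the glasses’ real image pipeline is not described in the article.

```python
import numpy as np
import cv2

# Synthetic greyscale "camera frame": a dark doorway on a mid-grey wall.
frame = np.full((120, 160), 128, dtype=np.uint8)
cv2.rectangle(frame, (60, 20), (100, 110), 40, thickness=-1)

edges = cv2.Canny(frame, 50, 150)    # strong outlines of nearby structure

# Overlay bright outlines on the scene, leaving the rest of the view intact,
# so the wearer's remaining sight still does most of the work.
enhanced = np.maximum(frame, edges)

print("edge pixels highlighted:", int((edges > 0).sum()))
```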

Parkinson’s disease

Parkinson’s disease is a neurological disease characterized by motor symptoms such as tremors, rigidity, slowness of movement and impaired balance. The disease is chronic and there is currently no cure. Parkinson’s is progressive and increasingly affects motor function over time: gait worsens and steps become smaller and slower. It can also be difficult to start walking after standing still.

In a study, patients with Parkinson’s disease wore a pair of glasses with a micro display (similar to Google Glass) generating a virtual checkered-pattern floor. The virtual signal is dynamically adjusted to the patient’s own movements, which gives the same feeling as walking on a real floor. The study showed an improvement in gait among patients using the glasses, and patients also reported a lasting improvement in their gait thanks to the AR training. The glasses are a simple training tool for improving motor function in patients with Parkinson’s disease.
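The key trick is that the virtual floor is shifted opposite to the patient’s own motion, so the pattern stays fixed to the ground like a real floor. A toy sketch of that idea, with an assumed tile size (the study’s actual parameters are not given here):

```python
TILE_M = 0.5   # assumed size of one checker tile, in metres
COLUMNS = 12   # width of the simulated display, in tiles

def floor_row(position_m: float) -> str:
    """Render one row of the virtual floor after walking position_m forward."""
    offset = int(position_m / TILE_M)   # tiles the patient has already passed
    return "".join("#" if (offset + i) % 2 == 0 else "." for i in range(COLUMNS))

# As the patient advances, the pattern scrolls backwards underfoot.
for pos in (0.0, 0.5, 1.0, 1.5):
    print(f"{pos:.1f} m: {floor_row(pos)}")
```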

Autism

Autism affects the way a person takes in, processes and interprets information, which may lead to impaired social interaction and difficulties in understanding other people’s thoughts, feelings and needs. It can be hard for children with autism to identify facial expressions and feelings, which can hinder social interaction and make it harder to develop and sustain relationships. Developing these abilities requires intensive training, and in some countries such training can be very expensive and hard to access.

Researchers at Stanford University conducted a project called Autism Glass, in which Google Glass is used to help autistic children recognize and identify feelings. The software uses machine learning to automatically recognize facial expressions and, via Google Glass, gives feedback in real time. The device scans and classifies feelings through the camera on the glasses, and the result is then shown to the user on the micro display.
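In outline, the loop is: grab a frame from the glasses’ camera, classify the facial expression, and push the label to the micro display. A minimal sketch; the classifier and display calls are hypothetical stand-ins for the project’s actual model and hardware:

```python
def classify_expression(frame) -> str:
    """Hypothetical classifier: return an emotion label for a face crop."""
    return frame.get("expression", "neutral")   # toy lookup on fake frames

def show_on_display(label: str) -> None:
    """Stand-in for rendering a cue on the Google Glass micro display."""
    print(f"[display] {label}")

# Simulated camera frames; a real device would stream these live.
frames = [{"expression": "happy"}, {"expression": "surprised"}, {}]
for frame in frames:
    show_on_display(classify_expression(frame))
```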

The goal of the project was not for the children to use the tool continuously, but for it to serve as support while learning facial expressions and feelings. In the second phase of the project, the children played a game in which they used the tool to search for people showing specific feelings or facial expressions.

Surgery

Doctors at the Institute of Cardiology in Warsaw, Poland, have used Google Glass during surgery on a chronically blocked coronary artery. The procedure, called percutaneous coronary intervention (PCI), is complex and not always successful. It can sometimes be difficult for the doctor to identify the coronary artery, which limits their ability to perform the procedure.

Normally, three-dimensional images of the coronary artery, filled in with a contrast dye, are used to visualize it better. During the surgery, similar images were instead displayed directly in the doctor’s field of view on the Google Glass micro display, making it easier to visualize the artery and guide the wire to the blocked area. Via Google Glass, the doctor could also zoom, rotate and move the images, and communicate with the device through voice recognition, which eased the procedure.
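A rough sketch of that hands-free control: spoken phrases are mapped to image operations so the operator never has to touch a screen. The phrases and transforms below are illustrative assumptions, not the actual Google Glass command set:

```python
def zoom(view, factor=1.5):
    return {**view, "zoom": view["zoom"] * factor}

def rotate(view, degrees=15):
    return {**view, "angle": (view["angle"] + degrees) % 360}

COMMANDS = {"zoom in": zoom, "rotate": rotate}   # voice phrase -> operation

def on_voice_command(view, phrase: str):
    """Apply the operation for a recognized phrase; ignore anything else."""
    op = COMMANDS.get(phrase)
    return op(view) if op else view

view = {"zoom": 1.0, "angle": 0}
for phrase in ("zoom in", "rotate", "show ECG"):  # last phrase unrecognized
    view = on_voice_command(view, phrase)
print(view)   # {'zoom': 1.5, 'angle': 15}
```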

“This case provides proof of concept that wearable devices can improve operator comfort and procedure efficiency in interventional cardiology”

Dr. Maksymilian P. Opolski
