Audio
Peter Ford - Control Bionics (part 1)
The founder of an assistive technology company shares the latest developments.
From Radio 2RPH Sydney, Ablequest is a series of 15 minute programs which examine developments in assistive technology and initiatives for people living with a wide range of disabilities.
Presented by Barbara Sullivan, Marni Roper and Elaine Wziontek.
This episode is Part 1 of an interview with Peter Ford from the company Control Bionics.
(PROGRAM ID)
Speaker 1 00:03
With information on the latest developments in assistive technology and initiatives, 2RPH in Sydney brings you Ablequest.
Speaker 1 00:19
Hello, I'm Marni Roper. We have had the very good fortune to have Peter Ford, founder of Control Bionics, as a repeat guest on Ablequest since our very first programme in 2012. Over the years we've tracked the progress of the life-changing assistive technology he invented, which helps seriously disabled people communicate. This technology works by picking up minute electrical activity in a muscle, allowing the user to have some basic form of communication via a connected computer. It has progressed from the first generation, called NeuroSwitch, to NeuroNode and now NeuroStrip.
It is now six years since Peter visited us in the studio, and we welcome him back as he explains to Barbara Sullivan the extraordinary progress that they have made. Today we broadcast the first of a two-part interview. Hello Peter, it's terrific to have you back with us in the studio at 2RPH. It was 2012, I think, when we had our first interview and talked about the very early iteration of NeuroSwitch, as it was then... and the last interview was in 2018, and a lot has happened since then.
Speaker 3 01:33
Thank you to everyone.
Speaker 1 01:34
So what I'd like to do first, if we could, is very briefly cover a bit of ground on what the NeuroNode technology is based on, and then your main stages of development.
Speaker 3 01:45
When we began, we began with somebody who's totally locked in, can't move or speak, and we wanted to find a way for them to signal anything so that we could use that signal then to control things like text generation on a computer or a smartphone. So what we ended up looking for was electromyography signals, and those are the electrical signals which travel through a muscle when the brain tells that muscle to contract. And even if the muscle doesn't work properly, there'll be something somewhere in the 400 or so skeletal muscles we have in our body between our bones and our skin.
One of those muscles somewhere should be able to respond, and it may be raising an eyebrow, it may be twitching a finger, it may be twitching a toe, but if we can see that, we can put a sensor at the muscle which controls that movement and use that as a switch. So if they're doing nothing, it's in a resting state. If they do anything and it peaks and makes a little twitch, then we can use that twitch as an on switch, and that on switch then lets us control a lot of assistive technology. And it's non-invasive, it sits on the skin. So the very early one was NeuroSwitch, and that was a fairly large piece of equipment. Size of a shoebox. And it was connected with wires. Lots of cables, yeah....
Speaker 1 03:01
Cables. In fact it's like...
Speaker 3 03:02
... into the wall for power. By law you can't connect mains power directly to a client or a patient, so we actually had to have a thing called an optical isolator, which used light signals to make the switches, so it was really complex and very large, but it worked.
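To make the switch idea concrete, here is a minimal sketch in Python of how an EMG-style threshold switch could work: a resting muscle gives a low baseline, and a deliberate twitch produces a spike that is treated as an "on" signal. The class name, window size, and threshold ratio are illustrative assumptions, not Control Bionics' actual firmware.

```python
# Minimal sketch of an EMG threshold switch: learn the resting baseline,
# then report "on" when a sample spikes well above it. Names and numbers
# here are illustrative, not Control Bionics' firmware.
from collections import deque


class EMGSwitch:
    def __init__(self, baseline_window=200, threshold_ratio=3.0):
        self.recent = deque(maxlen=baseline_window)  # rolling resting-state samples
        self.threshold_ratio = threshold_ratio       # how far above baseline counts as a twitch

    def update(self, sample_uv):
        """Feed one rectified EMG sample (microvolts); return True on a deliberate twitch."""
        amplitude = abs(sample_uv)
        if len(self.recent) < self.recent.maxlen:
            self.recent.append(amplitude)            # still learning the resting baseline
            return False
        baseline = sum(self.recent) / len(self.recent)
        if amplitude > baseline * self.threshold_ratio:
            return True                              # spike well above rest: treat as the "on" switch
        self.recent.append(amplitude)                # quiet sample: keep refining the baseline
        return False
```

Whatever the hardware generation, that single one-bit "on" event is all the downstream software needs, whether it is driving a scanning keyboard, a wheelchair, or a robot.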
Speaker 1 03:15
And then you came out with NeuroNode, which was... Size of a big...
Speaker 3 03:19
...diving watch.
Speaker 1 03:20
Right. And it worked through Bluetooth, didn't it?
Speaker 3 03:24
Yes, exactly. Yeah. And with Bluetooth, they could then talk to an iPhone or a tablet, an iPad, or a computer. And it was able then to send that on signal to any of those devices and control what's called assistive technology, or AAC, or AT. And what that does is it has a computer giving you a series of choices. And when it gets the choice you want, you just make an on signal, and it then puts that on the screen.
Speaker 1 03:49
It could be a symbol or a keyboard or...
Speaker 3 03:52
...picture of a keyboard. So let's say you wanted to type the letter A. The computer would highlight the top line of the keyboard, Q-W-E-R-T-Y-U-I-O-P, and then it goes down and highlights the second row, and that's where A-S-D-F and so on is. And so once it gets to that row with your letter A, you make a tiny little signal. It might be raising your eyebrow. And so the computer then knows to stay on that line, then it'll go across the letters from left to right until it gets to the one you want, then you make another signal.
Now we can accelerate that by doing those letters in blocks. So instead of just the letter A, it has A-S-D-F, for example. And also, just as you have on your iPhone or your Android, it has word prediction. So if you pick a certain letter, the device knows what your favourite words starting with that letter are, so it starts putting those on the screen and cycling through those. So you continually accelerate the choices.
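As a rough illustration of the row-column scanning described here, the sketch below highlights one keyboard row at a time, locks the row when the switch fires, and then scans across its letters until a second signal picks one. The dwell time, the print-based "highlighting", and the switch callback are stand-ins for illustration, not the actual AAC software.

```python
# Rough sketch of row-column scanning: highlight rows in turn, lock a row on the
# first switch signal, then scan its letters until a second signal selects one.
import time

ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]


def scan_select(switch_fired, dwell=0.8):
    """Return the letter chosen by two switch signals (one for the row, one for the letter)."""
    while True:
        for row in ROWS:                      # first pass: step through whole rows
            print("row:", row)
            time.sleep(dwell)
            if switch_fired():                # twitch while this row is lit: lock it in
                for letter in row:            # second pass: scan letters left to right
                    print("letter:", letter)
                    time.sleep(dwell)
                    if switch_fired():
                        return letter
```

Word prediction then accelerates this further: once the first letter is chosen, likely whole words can be cycled the same way, so most selections need only a handful of switch signals.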
Speaker 1 04:40
And you took that and you married it with other technology, like eye-gaze technology, and created the NeuroNode Trilogy, I think. So how does that work? Because people are familiar with eye-gaze technology, so how does this work?
Speaker 3 04:54
So normally, with regular eye-gaze technology, the original technology before we came into that area, if you looked anywhere on the screen, the cursor would move around following your eyes. There's a little thing called an eye tracker underneath the screen, and it follows where you're looking. So wherever you were looking, a little cursor travelled around the screen, and your eyes acted like a mouse. To make it choose a letter, say you're looking at the letter A on a keyboard displayed on the screen, you had to keep looking at it, which is called dwelling or hovering, for anything from half a second to three seconds. It doesn't sound like much, but if you do that to write a sentence or a longer passage, it gets very tiring. So in about 45 to 48 minutes of just doing that, a person generally gets so fatigued they stop using it.
So we had to find a way to get around that fatigue issue. So instead, we thought, if you just look at the letter and it lights up just from you looking at it, and then you make a signal with a NeuroNode or NeuroSwitch, that chooses the letter. You've got the equivalent of moving a mouse with your eyes and making a click with something else, just a nerve signal, and suddenly you have an almost completely fatigue-free way of communicating. And we have people who are completely locked in who may write for eight hours or more nonstop using that without any fatigue.
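A minimal sketch of that dwell-free pairing, assuming a hypothetical eye tracker that streams gaze coordinates and a separate switch callback: the gaze only highlights a key, and the switch signal is what commits it, so there is no dwell timer to cause fatigue.

```python
# Sketch of dwell-free selection: the eye tracker supplies gaze coordinates that
# highlight a key, and a separate switch signal (the muscle twitch) commits it.
# The layout format, gaze stream, and callback are hypothetical.

def key_under_gaze(gaze_x, gaze_y, layout):
    """Return the on-screen key whose bounding box (x, y, w, h) contains the gaze point."""
    for key, (x, y, w, h) in layout.items():
        if x <= gaze_x <= x + w and y <= gaze_y <= y + h:
            return key
    return None


def select_on_switch(gaze_stream, switch_fired, layout):
    """Highlight whatever the user looks at; commit it only when the switch fires."""
    for gaze_x, gaze_y in gaze_stream:
        highlighted = key_under_gaze(gaze_x, gaze_y, layout)
        if highlighted and switch_fired():
            return highlighted                # the "click": no dwell timer, so no dwell fatigue
    return None
```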
Speaker 1 06:09
Incredible. So the underlying technology has been refined through the last couple of decades, but you've changed the way it sits on your skin or what it can actually do.
Speaker 3 06:20
The latest is called NeuroStrip, and if you remember what a stick of chewing gum looked like, it's smaller than a stick of chewing gum. If you don't remember that, it's smaller than a small band-aid, and it weighs half as much as a sheet of typing paper, so it's very small, very light. And it does all the things the other devices did. And so once it's on your skin, you can basically forget you've got it there. But it's picking up all the data the earlier models picked up, and it's transmitting it through Bluetooth into whatever device you want to control. And that can now be a phone, a tablet, a computer, a wheelchair, or a personal robot.
Speaker 1 06:57
Well, that's interesting. I know that robotics has been one of your great passions from the very start, so it's not surprising that you've come to the point of starting to work with robots. And I think you've opened an office in Japan, and I imagine that's fast-tracking any development work you might be doing with robots.
Speaker 3 07:16
Yeah, we have some very good partners now in Japan, and there's really good interest, first of all, in what we're doing for people who have severe disabilities. In Japan, there's the same level of motor neurone disease that most countries experience. Also, there's a high level of cerebral palsy at birth, so a lot of kiddies are going to start benefiting from this, and one of the leading educators in CP in Japan has become an advocate for our technology, so that's very exciting for them as well as for us.
But also, of all the cultural groups around the planet, we think the Japanese are probably more inclined to at least be amenable to working with a robot, or even having a personal robot, and we think that's the next big advent in biomechanics, or in robotics and cybernetics, because a personal robot can do an awful lot for you. If it's equipped with something like Siri and somebody's living alone, it can look out for you. If you need to go from one part of your apartment or home to another, you can place your hand on top of the robot and it'll make sure you get there safely, and you can ask it to do things.
For example, somebody who's totally locked in can use an Obi robot with our technology and feed themselves from any one of four courses.
Speaker 1 08:22
So tell us about the Obi technology.
Speaker 3 08:24
Obi is a really outstanding device by a company called Obi Robotics, developed in the United States. What it does, it's basically a mechanical arm with a spoon attached by magnets. And on the base of the robot there are four dishes. You can put any course of a dinner in there. So your carer might put them in there, but you can then use your strip or your switch to control the arm so that it goes down and you can choose any one of those four dishes. And once you've chosen the dish, it will bring it up to your mouth. And when you've finished eating, it'll put the spoon back and choose the next. It's an incredibly sophisticated piece of tech, but it looks very basic. And it changes lives for people who can suddenly take control of dining.
Speaker 1 09:08
And it is about independence, isn't it?
Speaker 3
Yes. You basically need a carer to set it up, so somebody's got to be there in the background. But what this does is allow you to time the food that's coming. Sometimes you want to wait a few minutes before you have the next mouthful, or just control that timing and realise that you're in control. And when you have a diagnosis where your control has slowly been diminished or taken away, either over time, like with motor neurone disease, or suddenly, as with a stroke or a spinal cord injury, the control of that time is really vital to your own self-esteem, your own sense of dignity and independence, and that's one of the key elements of the mission of this technology.
Speaker 1 09:49
Japan also has an ageing population, and presumably their strategy is to develop technologies that are going to help people in old age.
Speaker 3 09:57
Well, in terms of the world market, the median age is slowly moving up towards 65, and past it in some areas. Scary. Well, yes. But, you know, we've got two choices: you get old or you don't. But the good thing about this is that people generally don't think of ageing as a disability, but it is a form of disability, and it's one that continues to evolve. And so the first thing you're concerned about is, who's going to take care of me? And in Japan, very often it's the children, if you have children. And so, you know, mum or dad will move in with the family, and living is much more confined in Japan; the apartments are smaller, the houses are smaller.
And the other part of it is, if the kids are off working and they're adults, what's mum or dad going to do when they're at home by themselves? Until now, most people are familiar with the idea of the red button. You have a red button either hanging around your neck or it's on your wrist. Yes. And if you have a fall or a heart attack or a stroke, once you've hit the floor, to put it bluntly, the idea is you press the red button and either somebody will phone you or help will come depending on how the button is set up in the network. The problem is, once you have any of those events, and if you have Alzheimer's or any form of dementia, it becomes worse because there's a confusion about the button. You're probably going to be stunned for a short period of time and may even be unconscious for a significant period of time, which is a critical part of your care period, the golden hour.
Speaker 1 11:20
Hmm.
Speaker 3 11:21
With this device, what we are designing now, it can already read your heart rate, it can already read muscle activity, and with a three-axis force sensor, basically an accelerometer, it can already tell whether you're falling at the acceleration of gravity, 9.8 metres per second per second, or whether you're just sitting down gently; it can tell the difference. And if it sees that difference, it can alert, through your iPhone, your Android, or your computer, the help that you've nominated: ambulance, emergency room, your personal doctor, whatever. So before you hit the floor, it's already assessed what you're dealing with, and it's calling for the appropriate help. So even if you're unconscious or dazed, help has already been notified.
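One common way a fall detector like the one described here can be built on a three-axis accelerometer is sketched below: during free fall the measured acceleration drops well below one g, and the impact that follows shows up as a sharp spike. The thresholds and the notify hook are illustrative assumptions, not the actual NeuroStrip design.

```python
# Sketch of accelerometer-based fall detection: in free fall the measured
# acceleration magnitude drops well below 1 g, and the impact afterwards shows
# up as a sharp spike. Thresholds and the notify hook are illustrative only.
import math

G = 9.8  # standard gravity, m/s^2


def detect_fall(samples, free_fall_g=0.3, impact_g=2.5):
    """samples: iterable of (ax, ay, az) in m/s^2. Return True on a free-fall-then-impact pattern."""
    in_free_fall = False
    for ax, ay, az in samples:
        g_level = math.sqrt(ax * ax + ay * ay + az * az) / G
        if g_level < free_fall_g:             # near weightless: the wearer is dropping
            in_free_fall = True
        elif in_free_fall and g_level > impact_g:
            return True                       # hard impact right after free fall: likely a fall
        elif g_level > 0.8:                   # back to normal loading: reset
            in_free_fall = False
    return False


def alert_if_fallen(samples, notify):
    """notify() stands in for contacting the nominated carer or service via the paired phone."""
    if detect_fall(samples):
        notify()
```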
Speaker 1 12:01
So if somebody applies this, where do they put it on them?
Speaker 3 12:04
And remember, we're at the development stage with this right now. We have a bionics lab in the States, which we built three years ago. And what we're looking at right now is essentially wearing it on the sternum, on that bone right at the centre of your chest, just below your throat. And from there we can already pick up heartbeat, body temperature, movement, and muscle state. So if your muscles suddenly go slack, for example, you have to presume you've either become unconscious, gone to sleep, or possibly had a stroke.
And so we're starting to discriminate between those signals, the data we're getting from that sensor, to say, well, there's a high probability, a 95% probability, that the person's had a stroke. Call for help.
Speaker 1 12:44
Now at the moment you can get watches that you wear that do tell you some of these things. What do you think about that in terms of what you're doing?
Speaker 3 12:50
I mean, I'm wearing an Apple Watch, and I haven't had a fall yet, but if I'm working out or I'm on my bike or whatever, occasionally it'll raise an alarm and say, we think you've had a fall, do you want to call for help? I'm not quoting that exactly. And if I don't do anything, it'll call for help. I'm not sure who it calls, actually, but it calls somebody. But it's a great idea, and full marks to Apple for doing that. And about eight years ago, Apple actually bought one of our devices to see how we could integrate with them. And that began a really good relationship that has gone on for many, many years. In the meantime, they're also saying, well, we can pick up blood oximetry and heartbeat.
But if you read the fine print in their contract, they'll say, and I'm not quoting this exactly, but the idea is that this is not for medical decisions. The reason being, as you can see, I'm moving my watch on my wrist, and you can see this bar. Yes. It's not picking up a particular site on a particularly stable part of the skin. With NeuroNode and NeuroStrip, it's sitting right on there. It's adhered to the skin, and it's picking up clinical-grade data. And that data is the data on which we can base, we think, high-probability decisions about what's happening to the person who's wearing it.
Speaker 1 13:59
That was Peter Ford, founder of Control Bionics and inventor of NeuroStrip. It was the first of a two-part interview. Please tune in in two weeks' time. You have just been listening to Ablequest, a programme that looks at developments in assistive technology. From Barbara Sullivan and Marni Roper, thank you for listening and goodbye till our next program.