Ted Simons: Tonight's focus on Arizona technology and innovation looks at a mobile eye-tracking device developed in collaboration between the learning and research center at ASU and Massively Parallel Technologies, a Scottsdale-based high-tech firm. Kevin Howard of Massively Parallel Technologies joins us to talk about the new software. Good to have you here.
Kevin Howard: Thank you for having me.
Ted Simons: We're talking visual engagement mapping.
Kevin Howard: Yes.
Ted Simons: What does that mean?
Kevin Howard: Let me go back a little bit. The teachers college was really one of the drivers of this innovation, but visual engagement means from looking at your eyes we're able to tell whether or not you're getting the material being presented to you. That's in a nutshell what we're trying to do.
Ted Simons: That you're understanding what you're looking at, what you're reading?
Kevin Howard: Whether you're playing a video game that is trying to teach you something, reading some content or just interacting with an individual. How your eyes behave can be tracked.
Ted Simons: Give us an example. I think we have video of a kid doing something on a screen here. What are we looking at in his eyes?
Kevin Howard: You're looking at placement of the eyes, dwell time, how they're scanning. Between the child and the game we have a supercomputer, 56 cores, a reasonably sized machine, that takes a map of where the eye ought to look based on what the game is trying to present to the child. From how long the child looks at various parts of the screen, how long they should be looking, and where their eyes are, it determines whether the child is just going through things mechanically or is actually actively engaged in picking up the information.
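The comparison Howard describes, observed gaze dwell versus where the eye "ought to look", can be sketched roughly as below. This is a hypothetical illustration: the region names, dwell times, capping rule, and 0.5 threshold are my assumptions, not Massively Parallel's actual scoring algorithm.

```python
# Hypothetical sketch: score engagement by comparing observed dwell times
# against an "expected attention map" for the current screen. All names and
# numbers are illustrative assumptions, not the real system's values.

def engagement_score(expected_ms, observed_ms):
    """Return 0..1: how closely observed gaze dwell matches expectations."""
    total = 0.0
    for region, expected in expected_ms.items():
        observed = observed_ms.get(region, 0)
        # Each region is satisfied in proportion to expected dwell, capped at 1,
        # so staring at one spot can't compensate for skipping another.
        total += min(observed / expected, 1.0)
    return total / len(expected_ms)

expected = {"instructions": 1500, "puzzle_area": 4000, "feedback": 800}
observed_engaged = {"instructions": 1400, "puzzle_area": 3900, "feedback": 700}
observed_drifting = {"instructions": 200, "puzzle_area": 900}

print(engagement_score(expected, observed_engaged) > 0.5)   # True: attentive
print(engagement_score(expected, observed_drifting) > 0.5)  # False: drifting
```

A real system would work on a stream of fixations rather than totals, but the core idea, a per-region expected-versus-observed comparison, is the same.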
Ted Simons: When it comes to online learning the screen would be able to figure out if whoever is watching this particular online lesson is attentive or actually -- how do you differentiate? Some folks are barely paying attention but are retaining everything.
Kevin Howard: That's actually a reasonably small percentage of the population over time. What you want to do is say on the averages, especially for smaller kids, are they engaged. Adults are much better at being able to look like they are not looking at anything or paying attention and actually are paying attention. Children not so much. They don't have those same facilities.
Ted Simons: This is developed for -- it's mobile. We're talking cellphones and tablets?
Kevin Howard: Anything that's mobile. The whole world is going mobile, or is already there for many people. The problem is that you might have seen eye-tracking features on some cellphones, but that's not what we're talking about. We're talking about being able to analyze not just the eye position but all kinds of other factors, and do that in real time regardless of the computing power of your mobile device. Say you had your old flip phone. If it happened to have a camera, and there was an app on it they were learning something from, we could use that, because the computing power of the device isn't what's driving the analysis.
Ted Simons: Interesting. You're using -- I'm told it's called Blue Cheetah software. Is that the software that goes into everything?
Kevin Howard: No. That's actually in a supercomputer way away from the mobile device. There's a tiny little piece of Blue Cheetah on each of the devices, but it's tiny because all the computing is done in the cloud. It's not done in the device.
Ted Simons: Okay.
Kevin Howard: So think about what that means. That means you can literally have people all over the place, multiple people doing massively multi-player learning games, and the system is tracking all of them simultaneously and getting the merged effects of what the learning experience actually is.
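The thin-client idea Howard describes, where the device only captures gaze samples and forwards them for cloud analysis, might look something like this sketch. The class name, payload shape, and batch size are all illustrative assumptions; the real Blue Cheetah client is not public.

```python
# Hypothetical sketch of a thin client: the device does no analysis, it only
# buffers gaze samples and packages them for a cloud service. The payload
# format and batch size are invented for illustration.

import json

class GazeForwarder:
    def __init__(self, device_id, batch_size=5):
        self.device_id = device_id
        self.batch_size = batch_size
        self.buffer = []

    def record(self, x, y, timestamp):
        """Capture one gaze sample; the only local work is buffering."""
        self.buffer.append({"x": x, "y": y, "t": timestamp})
        if len(self.buffer) >= self.batch_size:
            return self.flush()
        return None

    def flush(self):
        """Package buffered samples as a JSON payload for the cloud."""
        payload = json.dumps({"device": self.device_id, "samples": self.buffer})
        self.buffer = []
        return payload

client = GazeForwarder("flip-phone-01", batch_size=3)
client.record(0.1, 0.2, 0.0)
client.record(0.3, 0.4, 0.1)
sent = client.record(0.5, 0.6, 0.2)  # third sample fills the batch
print(sent is not None)  # True: a payload was produced for upload
```

Because the heavy lifting happens server-side, the same cloud can merge streams from many such clients at once, which is the multi-player scenario Howard mentions.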
Ted Simons: You work with ASU's technology-based learning and research center. Talk about that collaboration.
Kevin Howard: It's been a wonderful collaboration. Paul Skiera and I share grad students. They make video games for teaching. Not just video games, but a plethora of mobile training devices and tools. What we added to the party was the ability to take massive amounts of data, analyze it in real time and present that information back to the mobile device, so that decisions can be easily made on the mobile device without the device itself having to do the calculations.
Ted Simons: We could basically have a situation where I'm taking an online class and I think, I've got this, I'm going to the next chapter, and the computer says, no, you don't.
Kevin Howard: Better than that. As you're going through the material the system can recognize, wait a minute, this person doesn't like to read. If you have four lines of text, the person's reading starts slowing down. They start drifting off. Instead of presenting it as large blocks of text, maybe we have to present it differently. The system learns how you actually learn, in real time, and then presents the information in a way that enables you to have the experience.
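The adaptation Howard describes, noticing that reading keeps slowing down across a block of text and switching presentation, can be sketched as a simple rule. The 20% slowdown threshold, the "chunked" fallback, and the function name are illustrative assumptions, not the system's actual logic.

```python
# Hypothetical sketch: if per-line reading time keeps growing across a text
# block, the reader is drifting, so present the material differently.
# Thresholds and mode names are invented for illustration.

def choose_presentation(line_times_ms):
    """line_times_ms: dwell time per line of a text block, in reading order."""
    if len(line_times_ms) < 2:
        return "full_text"
    # Count significant slowdowns: a line taking >20% longer than the previous.
    slowdowns = sum(
        1 for a, b in zip(line_times_ms, line_times_ms[1:]) if b > a * 1.2
    )
    # If at least half the transitions are slowdowns, the reader is drifting.
    if slowdowns >= (len(line_times_ms) - 1) * 0.5:
        return "chunked_text"  # smaller blocks, or a different modality
    return "full_text"

print(choose_presentation([900, 950, 980, 1000]))    # full_text: steady reader
print(choose_presentation([900, 1200, 1700, 2600]))  # chunked_text: drifting
```

A production system would presumably blend many signals (gaze position, scan path, history across sessions), but the shape of the decision, behavioral signal in, presentation choice out, is the same.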
Ted Simons: That really is something. That is the next generation of learning devices it would seem.
Kevin Howard: Well, we believe that it is. You can't really do that on an individual device-by-device basis. You have to have a cloud component that allows all of these pieces and parts, all of these devices, to come alive together.
Ted Simons: Massively Parallel Technologies. You're based in Scottsdale.
Kevin Howard: All our engineering is done in Scottsdale. We have a branch office in Colorado. What we do is high performance computing, large-scale, government-class, huge computing systems. What would you use these for? Weather modeling, for example. Instead of applying cutting-edge science and computer technology to the more traditional high performance computing problems, oil exploration and the like, we asked, how could we take this technology and enhance the learning experience of our children, taking very high-tech knowledge but presenting it in a way that is seamless and invisible to the child. If it's invisible, all the child knows is that the experience is better. They have no idea that the reason the experience is better is that the system understands how they actually learn.
Ted Simons: If they did have an idea they would know I better pay attention or the computer will be messing with me.
Kevin Howard: Or learning what's going on.
Ted Simons: What's next for your company?
Kevin Howard: Well, for this engagement we're going beyond just the mobile devices to whole classrooms. Because we have large-scale computing behind us, we can cheaply put a few cameras in a classroom, very inexpensive nowadays, determine the engagement level of the entire class, and then feed back to the teacher, in real time, what percentage of the classroom they are losing, so they can adjust on the fly.
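The classroom-level feedback Howard describes reduces to a simple aggregation once per-student engagement scores exist. This is a hypothetical sketch; the 0.5 disengagement cutoff and the assumption of 0-to-1 scores are mine, not the company's.

```python
# Hypothetical sketch: aggregate per-student engagement scores (assumed 0..1,
# as produced by the cloud analysis) into the percentage of the class the
# teacher is currently losing. The cutoff is an illustrative assumption.

def percent_losing(scores, cutoff=0.5):
    """scores: one engagement score per student for the current time window."""
    if not scores:
        return 0.0
    disengaged = sum(1 for s in scores if s < cutoff)
    return 100.0 * disengaged / len(scores)

# Five students, two below the cutoff: the teacher is losing 40% of the class.
print(percent_losing([0.9, 0.8, 0.3, 0.2, 0.7]))  # 40.0
```

Recomputed over a sliding window, a number like this could be pushed to the teacher's dashboard continuously, which is the real-time adjustment loop described above.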
Ted Simons: This is remarkable stuff. Good to have you here. Thank you so much. We appreciate it.
Kevin Howard: Thank you.