Have you ever played whack-a-mole… without your hands? Spoken a word without your mouth? I had the coolest experience this weekend at the TAASLP (Tennessee Association for Audiologists and Speech-Language Pathologists) convention in Murfreesboro, TN. That translates to 2 days in a swanky hotel with free chocolate chip cookies and an all-day coffee bar and the opportunity to hear professionals talk about what they do best.
There are 3 rooms of speakers that sort of break up topically like this: schools, audiology, and medical. I spent my entire weekend in the medical room learning about voice, brain damage, dysphagia, paradoxical vocal fold movement (PVFM), and chronic cough. I saw some really scary videos of silent aspiration and a little girl with PVFM who would wake up every night at 1 am unable to inhale. She’s better, by the way.
But it was really great for me because in a field as incredibly broad as speech-language pathology, I’m starting to feel more confident that I can narrow down my focus and pick a sub-field that I really really love. It’s medical. It’s voice, especially.
But you’re probably wondering why I mentioned whack-a-mole. On the first day at lunch, I befriended a woman from Pittsburgh, a vendor who was there to show off the latest and greatest in AAC (augmentative and alternative communication) devices. I stopped by her booth later with some friends and she let us play with her $14,000 toys.
Seriously, it was like science fiction. Taking turns in front of the device, an eye gaze sensor would distinguish our eyes from all the other visual clutter. Then we’d run a calibration exercise. We’d watch a dot zig-zag across the screen as the computer watched us back, learning how to read us. Then we got to feel like superheroes shooting laser beams out our eyes with calibration games like whack-a-mole. Little rodents would pop up on the screen and when we looked directly at them, the hammer would fall. I’m just saying, if I were a quadriplegic without access to typical video games, I’d probably eat that up. Then, more functionally, we pulled up an alphabet screen and “typed” words just by looking at the letters we wanted to use. There was a backspace for mistakes and auto-fill suggestions just like on most cell phones that sped up the whole process. Then when you’re ready, you look at a button and the computer speaks your sentence for you. It’s not the same as your natural voice of course, but it’s not completely robotic either. Synthetic voice technologies are improving.
But this is what’s so great: so long as you have the cognitive ability to use the device, generative communication is right there at your fingertips (well, eye gaze). You can say absolutely anything you want without relying on a pre-programmed set of utterances (like so many AAC users do), and no programmer has to guess at the sorts of things you’re interested in saying. I just loved it. Not for everyone, of course, but for the people it can help… I imagine it helps a lot.
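For the curious, the "look at it long enough and the hammer falls" trick is usually called dwell-time selection. Here's a minimal sketch of the idea; the function names, the data shape, and the 1-second threshold are all my own assumptions for illustration, not how any particular $14,000 device actually works:

```python
# Hypothetical sketch of dwell-based selection in an eye-gaze interface:
# a target is "clicked" once gaze stays on it continuously long enough.
# The 1.0 s threshold is an assumption, not a real device's setting.

DWELL_THRESHOLD = 1.0  # seconds of sustained gaze needed to select

def select_by_dwell(gaze_samples, threshold=DWELL_THRESHOLD):
    """gaze_samples: list of (timestamp, target_id or None) readings.
    Returns the first target held for `threshold` seconds, else None."""
    current, start = None, None
    for t, target in gaze_samples:
        if target != current:
            current, start = target, t   # gaze moved: restart the timer
        elif current is not None and t - start >= threshold:
            return current               # held long enough: whack the mole
    return None

# Gaze flits past "A", then settles on "B" for over a second:
samples = [(0.0, "A"), (0.3, "A"), (0.6, "B"), (1.2, "B"), (1.7, "B")]
print(select_by_dwell(samples))  # -> B
```

The same loop works whether the targets are cartoon moles or letters on an alphabet screen; the typing interface just adds a backspace target and autocomplete suggestions on top.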
Completely unrelated (sorry, hope you weren’t expecting a nice segue), here is a lovely avocado veggie dip (because this is still a cooking blog, right?) that I created a couple of weeks ago.
- 1 ripe avocado
- 1/3 cup plain yogurt
- 1 teaspoon dried basil
- 1 clove of garlic, minced
- 1 teaspoon apple cider vinegar
- 1/4 teaspoon salt
- 1/8 teaspoon ground black pepper
Blend all ingredients until smooth. Serve at room temperature, or chilled.