How can AI help us understand word finding?

Happy Thanksgiving!  Perhaps you are in the midst of dealing with relatives who have converged on your home and you are seeking a moment of calm.  Or perhaps, like me, you are avoiding the next step in cooking a turkey dinner.  However Thanksgiving finds you, I hope you are able to take a moment and reflect on the many blessings we have all received.  If you are a regular reader, you know that I am a neuroscience geek, so the article below piqued my interest!

Thanks to a tweet from my daughter, I just read a fascinating article from Technology Review about Artificial Intelligence, or AI (see link below).  Neuroscientists have taught us about how language is processed in the brain.  We’ve learned about neurons being stimulated in “neighborhoods.”  We’ve seen how the lemma is connected to a syntactic form and associated with a phonological representation, which may or may not be converted to a successful oral utterance.  There are steps along the way that may interfere with communication.  But what can technology teach us about this process?

The article explains how the “deep learning” behind AI rests on a technique called “backprop.”  Geoffrey Hinton of the University of Toronto is considered the father of “deep learning.”

The author, James Somers, uses the example of how AI learns to recognize a picture of a hot dog.  He explains “backprop” this way: “The way it works is that you start with the last two neurons, and figure out just how wrong they were: how much of a difference is there between what the excitement numbers should have been and what they actually were? When that’s done, you take a look at each of the connections leading into those neurons—the ones in the next lower layer—and figure out their contribution to the error. You keep doing this until you’ve gone all the way to the first set of connections, at the very bottom of the network. At that point you know how much each individual connection contributed to the overall error, and in a final step, you change each of the weights in the direction that best reduces the error overall. The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output.”*
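For the technically curious, the steps Somers describes can be sketched in a few lines of Python.  This is a hypothetical toy example, not code from the article: a tiny network with two inputs, two hidden neurons, and one output learns the logical AND function, and the names and learning rate are purely illustrative.

```python
import random
from math import exp

def sigmoid(z):
    """Squash a raw sum into an 'excitement number' between 0 and 1."""
    return 1.0 / (1.0 + exp(-z))

random.seed(0)
# Connection weights: 2 inputs -> 2 hidden neurons -> 1 output neuron.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

# Training data for AND: output 1 only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
lr = 0.5  # learning rate

for epoch in range(5000):
    for x, y in data:
        # Forward pass: compute each neuron's excitement.
        h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
        o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
        # Backward pass: start at the output and ask how wrong it was...
        d_o = (o - y) * o * (1 - o)
        # ...then figure out each hidden neuron's contribution to that error.
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Final step: nudge every weight in the direction that reduces the error.
        for j in range(2):
            w2[j] -= lr * d_o * h[j]
            b1[j] -= lr * d_h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h[j] * x[i]
        b2 -= lr * d_o

def predict(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    return sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
```

After training, `round(predict([1, 1]))` comes out 1 and the other three inputs come out 0: the errors propagated back through the network have reshaped the weights until the output matches the target.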

Perhaps if we learn more about the mathematical models used in AI, we will better understand how our own brains produce speech.  Can we backprop our errors and figure out why we said “mashed tomatoes” instead of “mashed potatoes”?  What can we learn from technology that will help speech-language pathologists treat our patients more efficiently?  And, in turn, what can human communication teach technology about understanding and producing language?

* https://www.technologyreview.com/s/608911/is-ai-riding-a-one-trick-pony/
