A few weeks ago in class, we discussed a bank of therapy session recordings available to us from a past study conducted by one of my professors. Unfortunately for the lazy researcher, the recordings were never transcribed. I remembered my days as a research assistant, transcribing interviews, and vowed that I would transcribe no more.
So I wrote to NICE Systems, a company that develops client relations software (among other things). Ten years ago I attended a presentation by their R&D department describing their ability to analyze a conversation structurally, and even recognize some words here and there. I am sure the capabilities have improved since.
I wish I could extract quantitative data from such an audio archive and cross-reference it with the therapy outcomes that were measured by questionnaires in the original study.
They say the next version of Google Chrome is going to have speech recognition in Hebrew, which could be a brilliant update... but I haven't found anything ready for use right now. I'm still experimenting with the partial tools I did find, though.
BTW - here's something about an app from a Madrid university for recognizing emotions in conversation, based on speech rate, pitch, and volume. They used something called fuzzy logic, which uses relative values (like "high" or "low", as degrees rather than hard thresholds) in its conditions in order to draw conclusions.
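To make the fuzzy-logic idea concrete, here's a toy sketch in Python. It is not the Madrid group's actual system; the feature ranges, the fuzzy sets, the rules, and the emotion labels are all my own illustrative inventions. The point is just how membership degrees replace hard if/else thresholds:

```python
# Toy fuzzy-logic sketch for emotion classification from speech features.
# Everything here (ranges, rules, labels) is illustrative, not from the study.

def tri(x, a, b, c):
    """Triangular membership: 0 at a, rises to 1 at b, falls back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def classify(rate_wps, pitch_hz, volume_db):
    """Return an emotion label and the score of each fuzzy rule."""
    # Degrees of membership (0.0-1.0) instead of yes/no thresholds
    rate_high  = tri(rate_wps, 2.5, 4.0, 6.0)    # words per second
    rate_low   = tri(rate_wps, 0.0, 1.0, 2.5)
    pitch_high = tri(pitch_hz, 180, 260, 400)    # fundamental frequency
    vol_high   = tri(volume_db, 60, 75, 95)      # loudness

    # Fuzzy rules: min() acts as fuzzy AND, max() as fuzzy OR
    scores = {
        "excited": min(rate_high, max(pitch_high, vol_high)),
        "calm":    min(rate_low, 1.0 - pitch_high),
    }
    return max(scores, key=scores.get), scores

label, scores = classify(rate_wps=4.5, pitch_hz=250, volume_db=80)
print(label)  # -> excited
```

Fast, high-pitched, loud speech only *partially* belongs to each "high" set, so the conclusion comes with a strength (0.75 here for "excited") rather than a binary verdict - which is exactly the appeal for messy signals like therapy recordings.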