From your quick glance at the linguistics and grammar literature, dependency grammars seem like the only sensible way of representing language. The traditional “parsing” you learn in school really doesn’t do justice to the complexity and nuance of language: humans aren’t LR parsers, but complex pattern matchers. Dependency grammars explain almost every concept of language remarkably well, including constructs that don’t fit the traditional model, such as content clauses, ditransitive (doubly transitive) verbs, copulas, and adverbs. It’s the closest thing you have to a real parse tree of English, instead of some forced, contrived tree built out of tokens.
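Concretely, a dependency parse is just a set of head→dependent arcs that form a tree over the words. Here’s a minimal sketch, assuming spaCy and its small English model (a tooling choice on your part, nothing the notes prescribe):

```python
# Minimal dependency-parse sketch (assumes: pip install spacy,
# python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("She told me that the committee gave her the award.")

# Every word points at exactly one head, so the sentence is a tree,
# not a flat run of tokens. Ditransitives and content clauses fall
# out of the arc labels naturally.
for token in doc:
    print(f"{token.text:<10} --{token.dep_}--> {token.head.text}")
```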

You read a paper today about stylometry, “A Stylometry System for Authenticating Students Taking Online Tests,” and you realize how syntactic, rather than semantic, the text analysis was. Writers don’t just spell differently and pick different words; they vary sentence structure, lean on different tenses and grammatical constructions, and find their own creative ways of saying things.
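To make that concrete: one crude, purely syntactic signal (not the paper’s system, just an illustration) is the distribution of dependency labels a writer produces, which ignores word choice entirely:

```python
# A sketch of a syntactic, vocabulary-blind feature: the relative
# frequency of dependency relations in a text. Assumes spaCy again.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def dep_profile(text: str) -> dict[str, float]:
    """Relative frequency of each dependency label in the text."""
    doc = nlp(text)
    counts = Counter(token.dep_ for token in doc)
    total = sum(counts.values())
    return {dep: n / total for dep, n in counts.items()}

# Two writers can share a vocabulary and still produce very
# different profiles, which is the whole point.
print(dep_profile("I told her that I would finish the report by Friday."))
```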

You realize that dependency grammars are absolutely perfect for this. Working with dependency trees should look vaguely like working with ASTs.
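The analogy holds pretty literally: each sentence has a root, each word has children, and you can recurse over it the way you would walk a syntax tree. A sketch, again assuming spaCy:

```python
# Walking a dependency tree like an AST: recurse from the root of
# each sentence, printing one node per line with indentation.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("She said that the results were surprising.")

def walk(token, depth=0):
    # Each node is a word; its children are its syntactic dependents.
    print("  " * depth + f"{token.text} ({token.dep_})")
    for child in token.children:
        walk(child, depth + 1)

for sent in doc.sents:
    walk(sent.root)
```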

All that’s left is to try it.