I was at the LREC 2006 conference in Genoa, and one recurring theme was the use of software because there are not enough people to do things manually. Some things a computer can do well, some things it does not do so well. You are often presented with a percentage indicating how far the computer is off from what a human would do.
One presentation, the prize winners' presentation at the end of the conference, mentioned a nice scheme where two concepts were compared and the question was to what extent the first concept is associated with the second. The people doing this are trained on a first set of concepts; they are then asked to do a further set of concepts to see to what extent they have learned the task, and then they are off. This worked really well, but you need a large group of volunteers, or you use a computer. The computer either did a good job or gave COMPLETELY different answers from what a human would give (these are the things to watch out for in a Turing test).
My idea is that when WiktionaryZ gets itself a large community of people interested in languages, it would also be natural to ask this community if they are interested in helping out with research. One strategy would be to have just one person check a machine-derived result and, when there is a discrepancy, have some more people look at it.
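To make the strategy concrete, here is a minimal sketch of that escalation idea in Python. Everything in it is hypothetical (the function name, the way votes are combined); it just illustrates the flow: one volunteer checks the machine's answer, and only on disagreement is a larger panel consulted, with the majority answer winning.

```python
from collections import Counter

def review(machine_answer, first_check, panel_votes):
    """Hypothetical escalation check for one machine-derived result.

    machine_answer: what the computer produced.
    first_check:    the single volunteer's answer.
    panel_votes:    answers from more people, consulted only on discrepancy.
    """
    if first_check == machine_answer:
        # The one reviewer agrees with the machine: accept the result.
        return machine_answer
    # Discrepancy: escalate to more people and take the majority answer,
    # counting the first reviewer's answer as one of the votes.
    votes = Counter(panel_votes + [first_check])
    return votes.most_common(1)[0][0]
```

The point of the design is economy: most machine results would be confirmed by a single person, so the scarce resource (many volunteers looking at the same item) is spent only where human and machine disagree.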
Another interesting experiment would be to test the differences between the different groups of users of English, including people who use English as a second language. What do you think?