In his article “Metacrap,” Cory Doctorow discusses the obstacles to creating perfect metadata. In short, the problem is that we are only human: we make mistakes in metadata creation out of ignorance of the topic, laziness instead of thoroughness, and bias. Some people, moreover, have intentions other than connecting the perfect resource with its seeker; people will lie. So much for every resource finding its reader. Doctorow concludes that computer-generated, observational metadata is more reliable than human-entered metadata.
While one cannot deny the truth in what Doctorow says (humans can screw things up), he may be missing an element of metadata creation where we humans could best contribute: analyzing context.
For example, I recently attended a lecture by a Shakespeare scholar discussing her work on a digital humanities project: digitizing the text of Shakespeare's plays and encoding it in XML according to TEI (Text Encoding Initiative) standards. The semantic labeling of different elements in the plays needed more than word recognition. It needed an expert scholar to interpret the text in context (the context of the work and the context of its history) to determine the most accurate labeling of elements. It required judgment calls. Now, are these judgment calls by the scholar free of bias? No. While the scholar would not intentionally insert bias, there is always the possibility of embedding a particular worldview when adding any notation to a text. This snapshot of her interpretation and worldview, however, could offer another layer of metadata and help future scholars understand not only the text but how it was viewed. This is certainly value-added metadata, even if not perfect.
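To give a sense of what such a judgment call looks like in practice, here is a minimal, hypothetical sketch of TEI-style markup. The passage and tagging choices are my own illustration, not drawn from the project she described; the element names (sp, speaker, l, placeName) are standard TEI, but how they are applied is where interpretation enters.

```xml
<!-- Hypothetical TEI fragment for illustration only; not taken from
     the scholar's project. -->
<sp who="#Marcellus">
  <speaker>MARCELLUS</speaker>
  <!-- Is "Denmark" here a geographic place, or a metonym for the
       Danish court and its moral condition? A human encoder must
       read the scene, and the history, to decide whether a
       placeName tag is the most accurate label. -->
  <l>Something is rotten in the state of
     <placeName>Denmark</placeName>.</l>
</sp>
```

Either tagging decision is defensible, and either one quietly records something of the encoder's reading, which is precisely the extra interpretive layer of metadata described above.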