It's funny how often, once one starts thinking about a subject, one finds examples of it absolutely everywhere. I've been thinking about user-contributed metadata for a while now in the context of a digital music library project, where we could provide innovative types of searching if only we could find a way to make the creation of the robust metadata that drives it cost-effective. I wrote about this topic recently, inspired by OCLC's Wiki WorldCat pilot service.
So imagine my pleasure when, catching up on my reading this weekend, I came across "Social Terminology Enhancement through Vernacular Engagement" by David Bearman and Jennifer Trant in September's D-Lib Magazine. (Yes, I do know it's no longer September. Thanks for asking.) I'm thrilled to hear about this initiative, especially given how well-developed it seems to be. I haven't yet followed the citations in the article to read any of the project documentation, but it certainly looks extensive. In the digital library (and museum!) world, I firmly believe that ongoing documentation like this, associated with a project as it unfolds, can be of as much value as formally published reports, or even more.
Two features of the "Steve" system described here strike me, and they make it clear that there are many ways to implement systems that collect metadata from users. They also make me realize that these decisions need to be made at the very beginning of a project, as they drive all other implementation decisions. The first is the assumption that the user interacting with the system is charged with the task of description, rather than simply reacting to something they see and perceive as either an error or an omission. The user is interacting with the system for the purpose of contributing metadata; finding resources relevant to an information need is not the point. I suppose this model attracts different contributors than one that allows users to comment casually on resources they find in the course of doing other work. Different users might affect the "authoritativeness" of the metadata being contributed, but I wonder to what degree.
The second feature I find notable is that the system is designed to be folksonomic; there is no attempt at vocabulary control. We library folk tend to start from the assumption that controlled vocabularies are better than uncontrolled ones and move on from there. At first glance, some of the reports from this project seem to resist that assumption, instead starting from scratch in search of a real comparison. I'm eager to read on.