Trustworthiness of messy research: using research audit?
Annette Markham, here (emphasis is mine):
Years of studying, utilizing and teaching many methodological approaches helped me realize two important things about qualitative enquiry. First, very few textbooks detail the actual process of doing research, including all the activities that disappear in the published report, such as making mistakes, revising research questions, changing the method of analysis, and other emergent activities inherent to qualitative enquiry. Second, what we call simply "method" is actually a multilayered set of inductive and non linear processes, guided by the context and research questions. The challenge is stopping at critical moments or junctures in the project to reflect on what one is actually doing so as to: find a good fit between one's activities and one's theoretical premises, balance learned procedure and new contexts, and alter methods of interpretation so to better suit the contingencies of the situation.
One of my long-term frustrations with scientific methods is that many of them (the easier ones to defend) require planning a structured way of analyzing your data ahead of time. Even with more exploratory methods that eliminate the "plan-ahead" element to a degree (e.g. grounded theory), the analysis stage still requires following a specific set of procedures and rules.
Somehow this doesn't fit the way I see (experience, and read about in papers) how knowledge is constructed (in a broader sense, not necessarily as part of the scientific tradition) – with space for uncertainty, implicitness, intuition, recognising patterns in a mess, conversations and dependencies. Scientific knowledge, at least in complex domains, shouldn't be that different.
However, with scientific knowledge you want to make sure that the results of a study you lay out in a paper are also trustworthy. One way to do it is to use a proven methodology, following the steps that others took and successfully defended. In this case you have to show that you had good reasons to go for a particular methodology and that you followed it well (or deviated from it for good reason).
Another option (as I have been told by multiple professors ;) is to be very transparent about what you did and how, allowing others to judge from the details. Taken to the extreme, that would involve a research audit*, where an external auditor can examine the decisions and actions of a researcher.
In my case that would be a way out; however, there are a few issues with it.
- Confidentiality: there is data I can't share (due to legal or ethical considerations).
- My actual process (especially the data analysis part of it) involves many unstructured, intuition-based, non-linear decisions and actions that are difficult to describe in a linear document in a way that allows examination by an external researcher.
- I'm not sure about the return on investment of the time spent crafting an audit trail, especially given that a lot of things have already been documented – online in my weblog and in various working documents.
I'm thinking of comparative alternatives, but that would take a bit more time...
*If you want to know more about research audit you can start from the work of Sanne Akkerman:
- the references and the audit procedure in detail: Akkerman, S., Admiraal, W., Brekelmans, M., & Oost, H. (in press). Auditing quality of research in social sciences. Quality & Quantity. (.pdf)
- examples of audit trail and auditor report (part of Sanne's PhD research)