"Well, as of this moment, they're on double-secret probation!"
Dean Wormer, Faber College
Recently I have had a number of conversations regarding the role of pre-moderation in internal social networks. By way of explanation, pre-moderation means requiring approval of all content (posts and comments) before it is published. Over the past several years and hundreds of conversations with enterprise clients, this has rarely come up.
Just to be clear, there is risk associated with enterprise social networking. There is nothing about social technologies that precludes requirements for privacy, security, maintenance of intellectual capital, regulatory compliance, etc. However, given the right degree of attention, all of these are manageable. In fact, over time, social technologies will reduce the risk associated with all of them (more on that later).
OK, so if anyone can say anything at any time, that's risky, right? Well, in theory, but in reality, not really. Remember, we're talking about internal social networks. Presumably, these are IT-sanctioned, authenticated solutions. In other words, everyone knows who you are. And we can assume that with some degree of planning and education, your users will be aware of the policies that govern the environment. And if you post something not within policy, well, you get put on probation (or maybe double-secret probation). Animal House references aside, many a fine internal social networking policy begins with "don't do anything that will get you fired".
There are three key points here:
- One, provide a sanctioned solution for your organization, because if you don't, people may well find something on their own, and that could be a whole different kind of trouble.
- Two, take the time and effort to develop and apply policies to the new environment. Don't get worked up; most of the policies probably already exist. We went through this when we rolled out email and intranets, and most of the logic in those policies can be repurposed here.
- Three, go make friends with security, HR and legal. Don't worry. Here is the recipe to get them on board. Cheryl McKinnon, CMO of Nuxeo, often refers to a quote from Justice Louis Brandeis: "Sunshine is the best antiseptic." In other words, isn't it better to know if something is wrong and fix it? Here's another spin. We're probably better off if something is posted to a social network and removed than if it sits hidden in an offline email file. When the scandal hits the front page of the Wall Street Journal, the smoking gun is almost always in email.
All of this points to an environment that, with appropriate planning, is largely self-managing. Further reducing risk through pre-moderation can be very costly. There is a large human capital cost associated with intervening in and approving all content. Perhaps higher still is the cost of the friction it creates in the social network, which can frustrate users and gate adoption.
Now, how is it that social media will ultimately lower the risk associated with content? Well, if these systems are transparent and programmable, which they are, then we can envision a world where content is monitored in real time as it is posted. With smart vendors already working on pairing policy and text-analytics engines with rules engines, solutions will emerge that can monitor content and communications based on user roles and access control. These solutions will be able to identify the rare instances where a breach has occurred, quarantine the content and notify an administrator to handle the exception. Assuming exceptions are rare, the cost in human time to intervene will be minimal.
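To make the idea concrete, here is a minimal sketch of that post-moderation flow: content publishes immediately unless a policy rule fires, in which case it is quarantined and an administrator is notified. The rule names, patterns and `Moderator` class are all hypothetical illustrations, not any vendor's actual API; a real system would use role-aware policies and text-analytics engines rather than two regexes.

```python
import re
from dataclasses import dataclass, field

# Hypothetical policy rules: each pattern flags content needing review.
POLICY_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # looks like a US SSN
    "confidential": re.compile(r"\bconfidential\b", re.I),  # leaked marking
}

@dataclass
class Moderator:
    quarantine: list = field(default_factory=list)
    notifications: list = field(default_factory=list)

    def review(self, author: str, text: str) -> bool:
        """Post-moderation hook: publish unless a policy rule fires."""
        hits = [name for name, rx in POLICY_RULES.items() if rx.search(text)]
        if hits:
            # Quarantine the content and notify an admin to handle the exception.
            self.quarantine.append((author, text, hits))
            self.notifications.append(f"Review needed: {author} triggered {hits}")
            return False  # held for review
        return True       # published immediately

mod = Moderator()
mod.review("alice", "Lunch at noon?")            # publishes
mod.review("bob", "My SSN is 123-45-6789")       # quarantined, admin notified
```

Note that the human only enters the loop on the exception path, which is why, assuming exceptions are rare, the intervention cost stays low.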
However, even with today's enterprise social solutions, I have rarely run across the need for pre-moderation to reduce risk. Even in working with organizations with very stringent requirements for information management, like defense contractors, government agencies, pharmas, and healthcare providers, social networks have been managed through a combination of policy, self-policing and post-moderation.
All that said, I recently came across a compelling case for pre-moderation.
Are you aware of internal social networks that are using pre-moderation? If so, does the cost/risk/benefit analysis justify the direction?