Semantic analysis of Web content has replaced contributors

The University of San Francisco has stopped using its own writers and is now relying upon machines to fetch relevant content from the Web
Written by David Worthington, Contributor

An automated content curation system has delivered better traffic results for a university’s blog in months than a social media strategy provided over years.

The University of San Francisco (USF) deployed Scoop.it earlier this year as a replacement for a school-themed blog. The original site combined RSS feeds with contributed content and received 30,000 pageviews over three years; the revamped site surpassed 77,000 pageviews within its first six months.

Scoop.it uses semantic analysis – machine understanding of the context of articles distributed around the Web – combined with human editors to scour for relevant content. The university’s communications team selects suggested stories and adds its own remarks to each posting to give them local flair.

Paper.li and Storify package content in a similar way, but none of these services functions very well on autopilot. Interesting sites that are useful to an audience gain a following. Content management tools are just that – tools. An editor is still required to oversee what’s being published and to add meaningful insights.

I still believe that corporate blogs are better when they are focused on answering difficult questions – that’s true thought leadership. However, USF’s case demonstrates that a well-curated blog can also be effective as an inbound marketing strategy.

This post was originally published on Smartplanet.com