
Salesforce: Privacy needs consensus

Peter Coffee, Salesforce's director of platform research, discusses the privacy and cloud-security implications of companies' increased use of social data
Written by Tom Espiner, Contributor

The vast volumes of personal data people are posting online via sites such as Facebook and Twitter are increasingly being used by companies to generate sales and to analyse marketing and brand perception.

Salesforce.com is one of the companies leading the charge in the use of social networks in business contexts.

Its Chatter product allows companies to grab sales contact information from social media and build social networks with customers, while its acquisition of Data.com enables its customers to pull crowd-sourced information into sales apps.

ZDNet UK caught up with Peter Coffee, Salesforce's director of platform research, to quiz him about customer reaction to the use of their data by business, and the cloud security implications of the increased flow of information.

Q: Are companies in danger of using too much information on customers, by combining information from different sources?
A: You can very quickly populate, or repopulate, a startlingly comprehensive picture of a customer, combining whatever they've decided to show of themselves on Twitter and Facebook.

This is automating what was already possible by manual means. It's an interesting question whether the combination of data creates a qualitative change, but it would be presumptuous of me to tell you how people feel about that, and there are many conversations on this subject.

Some commentators draw a distinction between older and younger users, and their views about the use of social-networking data. Yet the Office for National Statistics has found a large uptake of social networks by older users in the UK. What's your take on this?
There's often a crude distinction drawn between older and younger generations and attitudes to technology, but if you look at the actual demographics of Facebook adoption, it's not nearly so simple.

When people see value being returned to them as a result of making data available, the reaction is positive. Any perception that data is being extracted from them by surreptitious means, however, provokes an adverse reaction.

People try to paint it as a generational divide, but the data contradicts the glib claim that youngsters "get it" and oldsters don't. I'm 54, and I have no trouble "getting it".

The degree to which people participate in frequent-purchaser programmes demonstrates that they know their data has value, and they view that as a fair exchange.

Do companies using information from social networks leave themselves open to the risk that the data they pull has been made available on an opt-out rather than opt-in basis, and that this may contravene data-protection principles?
What you're saying is that we may inadvertently be passing on data whose provenance is disreputable?

No, I'm asking whether companies are leaving themselves open to the risk that the data they pull may have been made available without the explicit consent of users.
We enumerate sources of data in our offerings, like Data.com with its crowd-sourced, community information. Jigsaw is a reputable operation, and our Radian6 offering makes quite visible its data sources. I don't think there's anything behind the curtain here. People who use Data.com and Radian6 know what they're getting.

Facebook frequently introduces changes to settings that require users to opt out of sharing data. Are people fully in control of their data?
There are services whose mission is to make it easy to share things. It's logical that services like that will try to make it as easy as possible for customers to share, in what could be described as an opt-out model. There is a saying that if the product is free, then you are probably the product.

I try to show my three sons the tools for sorting data, and they probably understand the options for sharing better than 90 percent of the population.

Our model is different. Chatter incorporates the most coherent, consistent and auditable model of privilege management that exists today in any IT management model. In older models, every server app had to be wrapped with its own separate and unique [privacy] layer.

The trust model in Force.com allows you literally to check the boxes — and say, this kind of user can see this user, or create and delete instances. In the old model, determining privacy in offerings required you to read code. The Force.com model designates a layer of the platform in which apps seek permission instead of apps granting permission.
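
To make that distinction concrete, here is a minimal Python sketch, not Salesforce's actual API, of a platform-level permission layer: applications ask the platform whether an action is allowed, rather than each application enforcing its own rules in code. The profile names, objects and actions are invented for illustration.

# Hypothetical sketch: a platform layer that applications must query before
# acting, instead of each application hard-coding its own privacy rules.
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    # Declarative "checked boxes": object name -> set of allowed actions.
    permissions: dict = field(default_factory=dict)

    def allows(self, obj: str, action: str) -> bool:
        return action in self.permissions.get(obj, set())

class Platform:
    """The platform, not the app, is the single point that grants or denies access."""
    def __init__(self):
        self.profiles = {}

    def add_profile(self, profile: Profile):
        self.profiles[profile.name] = profile

    def check(self, profile_name: str, obj: str, action: str) -> bool:
        profile = self.profiles.get(profile_name)
        return profile is not None and profile.allows(obj, action)

platform = Platform()
platform.add_profile(Profile("sales_rep", {"Contact": {"read", "create"}}))
platform.add_profile(Profile("admin", {"Contact": {"read", "create", "delete"}}))

# An application asks the platform before acting; it cannot grant itself privilege.
print(platform.check("sales_rep", "Contact", "delete"))  # False
print(platform.check("admin", "Contact", "delete"))      # True
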

Information is changing with astounding speed. Arriving at a social consensus about what is appropriate is the real challenge, but it's not a technical challenge — it's a social challenge. It's about what's socially appropriate, not what's technologically feasible.

I'd imagine the capabilities of combining information from different sources in Salesforce products such as Chatter would greatly interest law enforcement. Does Salesforce work with the authorities in that area?
I'm not in a position to answer questions about law-enforcement requests for information. You would have to talk to our privacy counsel. What I can say, what I personally believe, is that it is mathematically impossible to satisfy every existing regulation concerning data privacy and co-operating with law-enforcement requests. It is not possible to satisfy every statute and regulation. You need to be able to defend an action as a responsible [organisation].

We make every effort to participate in conversations about the evolving nature of data governance so our customers can have confidence that working with us will ideally strengthen their position in any conversation about good data practices.

Have cloud services changed the data-protection landscape? For example, would Wikileaks have been able to get hold of the Cablegate documents as easily?
The Wikileaks scenario... If a government agency stores historical collections of records in one database and someone downloads 250,000 records in a single day, that would be detectable and reportable in our environment in a way that's not possible in most IT environments.
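
As a rough illustration of the kind of detection Coffee describes, the following Python sketch flags any user whose daily download total exceeds a baseline threshold. The log format and threshold are invented for the example.

# Hypothetical sketch: flag users whose daily download volume looks anomalous.
from collections import Counter

DAILY_THRESHOLD = 10_000  # invented baseline for the example

# Each event: (user, date, records_downloaded) -- an invented log schema
access_log = [
    ("alice", "2011-11-01", 120),
    ("bob",   "2011-11-01", 250_000),
    ("alice", "2011-11-02", 95),
]

totals = Counter()
for user, day, records in access_log:
    totals[(user, day)] += records

alerts = [(user, day, count) for (user, day), count in totals.items()
          if count > DAILY_THRESHOLD]

for user, day, count in alerts:
    print(f"ALERT: {user} downloaded {count} records on {day}")
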

Cloud services enable detection and recognition that are not normally possible. Old technology pretty much required you to rely on your perimeter. In the new model we are all outside the wall.

Rather than simply denying or granting privilege, we want a precise and granular ability to know how privilege is being used. In traditional IT models, once privilege is granted, you can do whatever you want with it. What's interesting about the cloud, and in many ways superior, is the ability to gauge how information was used. So often, that's what you want to know.

When a valued employee leaves to go to a competitor, it's invaluable to know exactly what that employee had been looking at before leaving. That's not a hypothetical situation: at one company, audit histories gave invaluable insight into which accounts had been looked at over the previous three weeks [before an employee left], and gave a view of the risks.
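
A hypothetical sketch of the kind of audit-trail query Coffee is describing, with an invented log schema and invented names, might look like this in Python:

# Hypothetical sketch: list the records a departing employee viewed in the
# three weeks before their leaving date.
from datetime import date, timedelta

audit_log = [
    # (user, record_viewed, date_viewed) -- invented entries for illustration
    ("j.smith", "Acme Corp",  date(2011, 11, 2)),
    ("j.smith", "Globex Ltd", date(2011, 11, 10)),
    ("j.smith", "Initech",    date(2011, 9, 1)),
]

def records_viewed_before_leaving(log, user, leaving_date, weeks=3):
    window_start = leaving_date - timedelta(weeks=weeks)
    return sorted({record for u, record, d in log
                   if u == user and window_start <= d <= leaving_date})

print(records_viewed_before_leaving(audit_log, "j.smith", date(2011, 11, 15)))
# ['Acme Corp', 'Globex Ltd'] -- 'Initech' falls outside the three-week window
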

Infrastructure clouds preserve all the defects of decade-old approaches to security, and those defects have increased as the threat environment has evolved. The notion of building a trust platform is something that all too often is not adequately emphasised.

