The iPhone security model is broken ... can it be fixed?

Summary: User data is the new gold rush, and it's so easy to find and mine.


I like my iPhone. A lot. But I've now gotten to the point where I feel that the security model Apple chose to implement in iOS is broken, and it's hard to see how it can be fixed in any useful or meaningful way.

How is the security model broken? Well, it's broken because the apps you install on your iDevices can access your address book and send that data back to company servers without your knowledge. Mobile social network Path was caught doing just that, and I'm sure it's not the only one that's been up to this trick. User data is the new gold rush, and it's so easy to find and mine.

Note: Mac OS X offers developers easy access to the address book, and Apple hasn't done anything about this since the issue surfaced in 2006.

Now I'm going to assume for a moment that there are legitimate reasons for accessing users' address books and copying them, but what cannot be justified is doing this without user consent (and by consent I don't mean a small snippet of legalese buried in an ocean of legalese). Harvesting data without clearly informing the user of what's going to be done and what will happen to that data is at best a very bad business practice, and at worst it's malware-like behavior and a massive breach of trust.

So what should happen? Well, I have several ideas, but I must admit that I'm not in love with any of them.

  • Apple could ban apps that access the address book. This would be easy to do, as Apple controls what APIs developers can use and checks for developers breaking the rules. But while Apple could do this easily, it's not an ideal solution because some apps have legitimate reasons for accessing this data.
  • Apple could restrict how much data apps can access. The problem with this is that it doesn't give users much control. It's too black-box and too opaque.
  • Apple could put policies in place to force apps to use encryption when transmitting the data, but personally I'm more concerned about what happens to that data after transmission than during transmission.
  • Apple could put a mechanism in place similar to that for Location Services, where apps would have to ask permission and users could revoke that permission later. Of all the options this seems like the best, but it carries a danger: it could eventually mean that iOS users face endless dialog boxes and a torrent of questions each time they install an app. This sort of security hasn't worked on any platform previously, and I'm not convinced it would work on iOS.
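The fourth option - a Location Services-style grant that users can later revoke - can be sketched as a small permission registry. This is a minimal illustration in Python with invented names, not how iOS actually implements it:

```python
# Sketch of a Location Services-style permission registry: an app must ask
# once, the user's answer is remembered, and the user can withdraw the grant
# later from a settings screen. All names here are hypothetical.

class PermissionRegistry:
    def __init__(self, prompt):
        self.prompt = prompt      # callable(app, resource) -> True/False, i.e. the user dialog
        self.grants = {}          # (app, resource) -> bool

    def request(self, app, resource):
        """Called by the OS when an app first touches a protected resource."""
        key = (app, resource)
        if key not in self.grants:                    # ask only once, not on every access
            self.grants[key] = self.prompt(app, resource)
        return self.grants[key]

    def revoke(self, app, resource):
        """User withdraws access from a settings screen; later requests are denied."""
        self.grants[(app, resource)] = False
```

The key property is that the user is prompted once per app/resource pair, and a later revocation silently denies access rather than re-prompting - which is how Location Services behaves.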

As I said, none of these solutions are ideal, but in light of recent developments, it's clear that Apple can't just allow apps to have unfettered access to data stored on iOS devices. We're already sliding down a very slippery slope.

The best option in my opinion is to put the users in charge, but I see there being a giant gulf between giving the users choice, and the users making an informed choice. On platforms like Windows (and even on Mac) throwing endless dialog boxes at users quickly creates a fatigue where people don't really read (or even pay attention to) the information being put in front of them. Security turns from being a useful feature into something that's just standing between them and doing what they want.

These aren't new problems, but they're exaggerated by post-PC devices. In the PC world there's a huge amount of diversity when it comes to software. People can have their contacts in one (or many) of dozens of places (Outlook, Thunderbird, in the cloud, in a Notepad file ...), but on a device like the iPhone there's one place ... the Contacts app. Also, as our devices become more personal, they will contain more and more personal data (names, email addresses, phone numbers, addresses, and so on).

All this makes the data easy pickings ... and this data is valuable stuff, so there are people who will grab it.

The move to the post-PC world is putting our personal data at risk, and no one has come up with a solution that protects us from the bad guys.

Topics: Mobility, Apple, Hardware, iPhone, Mobile OS, Security, Smartphones

  • RE: The iPhone security model is broken ... can it be fixed?

    I much prefer Apple's security model to my Android's!
    • RE: The iPhone security model is broken ... can it be fixed?


      I like Android's model in concept, but in practice I prefer Apple's. The problem with Android's model is that it requires up-front permissions. This is great in concept, as it gives the user an opportunity to avoid installing software whose permissions they're not comfortable with, but in practice it's sometimes difficult to understand why an app needs certain permissions before you've actually used it. This is partly because app descriptions are sometimes lacking, or the app is simply complex enough that the user doesn't understand why a permission is needed or see its benefit. The end result is either users walking away from a good app or users blindly accepting apps regardless of the permissions required. Either way, it's a problem.
      • RE: The iPhone security model is broken ... can it be fixed?

        @piousmonk Exactly right - and I just did that the other day (cancelled installing an app because I couldn't see why it needed such broad access).

        On the one hand, I guess it's up to me to do a little work and figure it out - but on the other, I didn't feel like it just then, so I cancelled the install.

        But I'd rather be told and have the option to exercise it when I can, than not know at all.
      • Developer's good practices for iOS and Android

        I think that's the developer's job when publishing the app: the developer should state in the app description what permissions the app needs and why.
      • RE: The iPhone security model is broken ... can it be fixed?


        I don't disagree, but in practice, descriptions of what an app does, let alone why it needs certain permissions, are too often vague at best. In the case of malicious apps, I'm sure this is intentional. In the case of legit apps, it could simply be that the developer isn't well suited to conveying technical details to a non-technical user. Either way, the end result is the same, and until it's addressed, many users will fall into one of the two camps I mentioned.
  • RE: The iPhone security model is broken ... can it be fixed?

    Spell check next time please - "Of all the options this seems like he best" should be "Of all the options this seems like THE best."

    Unfortunately, the best solution is one policy that can be configured easily like *gasp* Facebook's privacy policy. It's far more scrutinized than Apple is, as nobody cares as long as their iPhone works, but privacy advocates should nonetheless be upset about this ongoing problem of ease of access vs. complete control of your privacy.

    For far too long users have put up with companies providing them services and giving the companies free data - and will continue to do so unless we stand up and make a big enough financial impact on the companies doing the harvesting that they have no choice but to listen.
    • RE: The iPhone security model is broken ... can it be fixed?


      Pedantically, I'd just like to say that a spell check won't find any error with that sentence.
      • RE: The iPhone security model is broken ... can it be fixed?

        I thought that spell checks were used by Harry Potter. Perhaps you mean a spelling checker?
  • RE: The iPhone security model is broken ... can it be fixed?

    I think there needs to be a 5th option. For example, I can think of apps that might need to access the email addresses of my contacts - but not their phone numbers. Or need their Twitter details but nothing else... You get the idea.

    What I think needs to happen is that the first time the app makes such a request, the OS shows "This is what the app wants to see: is this OK? [always] [this time] [never]"

    App developers would soon start to curtail their requests if such dialogs started prompting users to uninstall the app.

    I don't see this as a "broken" security model - there is isolation, it just isn't doing much at present. It seems clear that running through the API is the right way to do this, as it allows for modification of the behaviour.

    Don't think my "splitting of hairs" is any justification for inaction on Apple's part - it's actually quite the reverse: Apple CAN fix this problem, and now is the time to do so.
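    The prompt flow this comment describes can be sketched in a few lines of Python (hypothetical names throughout): "[always]" and "[never]" answers are remembered, while "[this time]" grants a single access and asks again later.

```python
# Sketch of a per-request, per-field permission prompt with three answers.
# "always"/"never" are persisted; "this time" allows once and re-asks next time.
# All names are invented for illustration.

ALWAYS, THIS_TIME, NEVER = "always", "this time", "never"

class FieldGate:
    def __init__(self, ask):
        self.ask = ask        # callable(app, fields) -> one of the three answers
        self.remembered = {}  # (app, fields) -> bool, set only for "always"/"never"

    def allow(self, app, fields):
        key = (app, tuple(sorted(fields)))    # permission is scoped to specific fields
        if key in self.remembered:
            return self.remembered[key]       # no dialog at all once decided
        answer = self.ask(app, fields)        # "This is what the app wants to see: OK?"
        if answer in (ALWAYS, NEVER):
            self.remembered[key] = (answer == ALWAYS)
            return self.remembered[key]
        return True                           # "this time": allow once, ask again later
```

    Scoping the key to the requested fields is what makes the commenter's "email but not phone numbers" distinction possible: each combination of fields is a separate grant.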
  • RE: The iPhone security model is broken ... can it be fixed?

    You need to correct this typo:
    "But I've not gotten to the point" which I believe should have said "now". Sort of changes the premise of the whole piece at the start.
  • RE: The iPhone security model is broken ... can it be fixed?

    Well, Android is no better! Since when does a clock app need access to your address book?
    • RE: The iPhone security model is broken ... can it be fixed?

      @phoenix144

      Good point. The security model ought to include these buttons:

      Yes, No, AND

      Permission Flag - as in "What the hell is going on here with this clock app?"

      The flag would report the app and the weird permission request back to Apple for review and possible app removal.
      pk de cville
      • Permission Flag

        @pk de cville

        That's a great idea! When large numbers of users start flagging an app, it would force honest developers to explain why the app accesses what it does.
  • RE: The iPhone security model is broken ... can it be fixed?

    Put APIs in place for the most common actions you might want to perform with that data - e.g. phone, SMS, iMessage, send mail to, etc. - that don't expose the actual data.
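    One way to picture such an API: the app holds only opaque handles, and any action that needs the real data (like sending an SMS) is resolved on the OS side. A Python sketch with invented names - no such iOS API existed at the time:

```python
# Sketch of a "don't expose the data" contact API: apps receive opaque
# handles and ask the OS to act on them; phone numbers and addresses
# never cross into the app's process. All names are invented.

import uuid

class ContactService:                    # conceptually lives in the OS, not the app
    def __init__(self, contacts):
        self._by_handle = {}             # private: handle -> full contact record
        self.handles = []                # what apps get to see and hold
        for contact in contacts:
            handle = uuid.uuid4().hex    # opaque token, reveals nothing
            self._by_handle[handle] = contact
            self.handles.append(handle)

    def display_name(self, handle):
        """The one field apps may read, for building a picker UI."""
        return self._by_handle[handle]["name"]

    def send_sms(self, handle, text):
        """The number is resolved OS-side only; the app never sees it."""
        number = self._by_handle[handle]["phone"]
        return f"SMS to {number}: {text}"   # stand-in for handing off to the radio
```

    An app built against this can list names and send messages, but scraping phone numbers is simply not expressible in the API.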

    There IS some value in letting apps scan all my contacts (i.e. matching friends on Game Centre, LinkedIn and equivalents) - but this should be something that always prompts the user, and I'd say that accessing this API should require a higher level of certificate (which would trigger Apple into looking closely at the firm / what it does with the data).

    Even better would be all the major vendors collaborating on some kind of standard for anonymised contact matching that worked by transmitting hashed contact data only.
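    A sketch of that anonymised matching in Python: both sides hash normalised identifiers and compare digests, so raw addresses never leave the device. Note that an unsalted hash of a phone number is trivially brute-forceable (the keyspace is tiny), so a real scheme would need salting, rate limiting, or private set intersection.

```python
# Sketch of hashed contact matching: the client uploads SHA-256 digests of
# normalised email addresses instead of the addresses themselves; the server
# compares them against digests of its registered users and returns matches.
# Shown unsalted for brevity; see the caveat above.

import hashlib

def digest(identifier: str) -> str:
    normalised = identifier.strip().lower()          # "Bob@X.com" and "bob@x.com" must collide
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def client_payload(address_book):
    """What actually leaves the device: digests only."""
    return [digest(email) for email in address_book]

def server_match(uploaded_digests, registered_users):
    """Server side: return only the registered users found in the upload."""
    registered = {digest(email): email for email in registered_users}
    return [registered[d] for d in uploaded_digests if d in registered]
```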

    I can't think of any legitimate situation where a server should have the email address and telephone numbers of my contacts.
    • but this should be something that always prompts the user

      You mean like User Access Control (UAC) in Windows Vista? That did not get a great reception in the marketplace.
      Marc Jellinek
  • RE: The iPhone security model is broken ... can it be fixed?

    While the gist of what you are trying to say is accurate, this is hardly an "Apple security model" problem. It is a core dilemma in securing any networked device. The problem is magnified with the proliferation of smart phones, tablets and other highly portable gadgets. The conflict will always be between making device security "user proof" at the expense of user choice and control versus placing the onus on the user (assuming transparency and configurability) at the expense of lots of unsecured devices due to user ignorance or apathy.

    I tend to come down on the side of user responsibility but have to admit that there are problems with this approach, not the least of which is the fact that insecure devices affect more than just the owner of that device. Zombies and botnets leap to mind. It is almost a public health metaphor. If you are allowed the freedom to forgo vaccinations, then to what extent are you responsible for communicable diseases that you might unknowingly spread?

    It is a complex issue and way beyond a single vendor's control.
  • RE: The iPhone security model is broken ... can it be fixed?

    Jailbreak your iPhone, then you can add various methods to protect your data and know exactly what is being sent and either allow or not allow it to be sent.
    • RE: The iPhone security model is broken ... can it be fixed?

      @cmwade1977 Please explain.
  • RE: The iPhone security model is broken ... can it be fixed?

    Since when do Apple fanbois concern themselves with such mundane topics as security or privacy? Apple says they are the best, and there are no problems, so that is all they need to hear.
  • RE: The iPhone security model is broken ... can it be fixed?

    Once your data is leeched, it's out there. They need clear opt-ins and an expiry time on personal data.
    I believe the EU is looking at this structure.