Rebuilding Windows for the 21st century


Summary: Microsoft isn't backing away from the challenges of cloud computing, either in their data centers or, more importantly, in their flagship Windows operating system. In a data-driven world they're getting Windows ready for the next 20 years.

TOPICS: Storage

I'm at the Storage Developer Conference in Silicon Valley, where more than 400 storage developers have gathered to learn about the latest in data storage engineering.

This is a yeasty time in data storage and the breadth and depth of the presentations reflect that. Microsoft is here in force, which may surprise those who don't consider them a player in storage. But with all of the improvements in SMB 3.0 and the investments they're making in ReFS and Storage Spaces, they may be the most significant storage company at the conference.

I spoke to one of the core storage teams after one of their presentations yesterday. Here is some of what I learned.

  • This team is focused on data center and higher-end storage. Think cloud scale.
  • They've adopted the Google model of assuming unreliable devices and placing software-based resiliency on top of them.
  • At data center scale, management is the major cost driver because things are always failing, so self-management and healing is key.
  • The worst case - non-recoverable data loss - will happen, so it's critical to minimize impact on the rest of the data center. Recovery operations work in the background and the volume stays online.
  • Software testing that incorporates work highlighted in Storage Bits five years ago (see How Microsoft puts your data at risk) means a more reliable and consistent file system.
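
The resiliency model the team describes - checksummed copies, self-healing, data staying online during repair - can be illustrated with a toy sketch. This is illustrative Python only, not Microsoft's design or code; the `ReplicatedBlob` class and all its names are invented for the example:

```python
import hashlib

class ReplicatedBlob:
    """Toy model of software-based resiliency over unreliable storage:
    keep several checksummed copies and self-heal corrupt ones on read."""

    def __init__(self, data: bytes, copies: int = 3):
        self.checksum = hashlib.sha256(data).hexdigest()
        # Simulate replicas living on independent (unreliable) devices.
        self.replicas = [bytearray(data) for _ in range(copies)]

    def _ok(self, replica) -> bool:
        return hashlib.sha256(bytes(replica)).hexdigest() == self.checksum

    def read(self) -> bytes:
        # Find any intact copy; the "volume" stays online while one survives.
        good = next((bytes(r) for r in self.replicas if self._ok(r)), None)
        if good is None:
            # The worst case: non-recoverable data loss.
            raise OSError("all replicas corrupt")
        # Repair corrupt replicas as a side effect of the read path.
        for i, r in enumerate(self.replicas):
            if not self._ok(r):
                self.replicas[i] = bytearray(good)
        return good
```

A read that finds one corrupt copy still succeeds, and afterwards all copies agree again - the same shape of behavior, at toy scale, as the "recovery works in the background and the volume stays online" bullet above.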

The Storage Bits take

ReFS and Storage Spaces are impressive technical achievements. They've installed state-of-the-art plumbing while retaining application compatibility.

Almost as impressive is Microsoft's openness. They're here in force at SDC. They're working closely with third parties on SMB 3.0 compatibility. There's an API that enables direct inspection of all the data copies in ReFS.

This is a much friendlier Microsoft.

Most important though - and I hope this is filtering through to the rest of the company - is that they're going back to being a technology company: building great products that win in the marketplace because they're better, instead of riding on Microsoft's huge market power. That's good for all of us.

Comments welcome, of course.




  • Who really thinks Windows will be a player in 2032?

    Really? We've been through many OS changes over the years and the trend shows Windows decreasing marketshare in recent times....

    So I seriously doubt we will see Windows' monopoly in 2032. It may still be around but will be a niche player.
    • Windows OS

      omg it decreased by about a percent :P
      • 1%

        A 1% drop in the datacentre would be a huge drop for MS. I assume you think that the number of installed PCs under people's desks and in the corner by their TV = datacentre, but when it comes to server stuff, Microsoft has more of an 'aspirational' market share.

        I mean, they finally have Storage Spaces and ReFS... stuff that's been on big-iron BSD and Unix systems for many years - ZFS came out of Sun in 2004, and Microsoft is just starting to think about playing catch-up. Still, it'll be good if MS reverts to being a technology company again, though I reckon that'll just be hype and talk until Ballmer and some other execs leave.
        • No actually Windows has a nice large server market share and

          guess what, it's steadily growing. And they're well beyond thinking about or catching up.
          Johnny Vegas
          • ehrm...

            Actually, MS's market share is growing, but much slower than Linux's. Linux was at 17% two years ago and is over 22% now; MS is at 47%, which is of course still much larger.

            Note that these numbers are revenue-based: Linux tends to be cheaper, so the comparison is a bit biased. It also depends a lot on the market. Supercomputers and really heavy use cases rarely run MS - nobody is waiting for debacles like the London Stock Exchange (where MS solutions simply proved incapable of the required performance and reliability, and SUSE Linux saved the day).
    • A big player

      IBM has not gone away, HP has not gone away, and I don't think Microsoft will go away. They may not be as dominant as they are now, but I am pretty sure they will be more than a bit player, barring major changes like quantum computing changing the game completely and wiping out all the current players.
    • You'll have to wait for Loverock Davidson to give you the answer

      now won't you? ........ and yet again he may even be gone by then.............
      Over and Out
    • LOL

      Yeah, right.
      Hallowed are the Ori
    • Who really thinks Windows will be a player in 2032?

      wow! Really? I am lost for words!
    • Never underestimate inertia

      Inertia is a powerful thing. Unless a version of Windows legitimately fails (as opposed to Vista, which was merely a disappointment), or some other product completely knocks it out of the park in every way possible, Microsoft will still be around for a long time.

      In light of that, I do find it funny how many Windows fanboys seem to vigorously defend Microsoft from any perceived threat, despite Microsoft's nigh-invulnerability.
      Third of Five
  • Time to bring up the obvious

    Cloud computing is nothing but 1950s and 1960s mainframe and terminal computing. Sure, the hardware has changed some, and the connections are wireless, but it's topologically the same. RIP, PC revolution. Welcome, big iron.
    • Not really. The 50s and 60s didn't really do any local processing

      And what happened on the mainframe stayed on the mainframe. Cloud is nothing like that. There's still tons of local processing that's not going anywhere. And what is going to the cloud is in many cases distributed over many machines working in parallel on subsets of the computation, and sometimes with subsets of the data, with each machine generating a portion of the result. Mainframes didn't really do geo-diverse failover well either.
      Johnny Vegas
      • Nope

        Mainframes in the '50s and '60s did everything centrally, but in the '60s and '70s, when Unix arrived, processing happened on local computers as well, as the era of personal computing began.

        IBM introduced the PC in 1981, and that changed things: every personal computer processed its own local data, so you networked them and transferred only data between them.

        On the Unix side, mainframes held the data and provided background processing when a user needed a long-running job. But the user could bring the processing down to a local computer and back to the mainframe. The user didn't need to know which computers did the work, only that they were allocated X amount of CPU time; the mainframe would farm the process out to personal computers if that was needed to speed things up and the mainframe couldn't do it alone.

        You clearly haven't used mainframes in the '70s. At universities, if your own university's processing capacity wasn't enough, you could allocate free time at another university. The mainframes distributed the task, computed it, then brought the results back and joined them, all without the user having to know, if they so wished.

        The marketing for "cloud" now is that users don't know where their files are, or in what country. THAT is the cloud: a normal WWW user can register and get 5GB of storage from service X.

        In the '70s, you contacted the mainframe you needed, got an account, and then you had space and processing time, even if you were behind a 2800 baud modem somewhere on the move.

        Instead of having a telnet connection and so on, we now have a WWW site in a browser.
    • Exactly

      Yep, it is just mainframes being sold again with new marketing. And instead of selling them just to ISPs' customers and the staff of bigger companies, they are now sold to smaller companies, which can then easily let others create new accounts for their services.

      Microsoft is in big trouble: it is now under 40% share for servers, on supercomputers it has less than 4%, and on mainframes it is non-existent.
  • At Some Point They Will Need More Than 26 Drive Letters...

    ...but Microsoft has yet to articulate a technology roadmap that will lead its customers to that point. They don't even have a plan for reusing the currently-unusable A: and B: letters, so you really only have 24 letters, not even 26.

    As for the idea of using actual names rather than cryptic letters--with Microsoft, that's not for this century, that's for the next one.
    • You are wrong

      Windows supports mounting volumes anywhere in the file system. You could add 27 drives to your system and mount them all under C:\mydrivesgohere\.
      • Re: Windows supports mounting volumes anywhere in the file system....

        But it doesn’t work right. Say a volume is mounted under a directory on C:. Then you try and install something which will go into a subdirectory on that volume. The installer checks for free space on C:, and if there isn’t enough there, the installation fails, even if there is enough space where it’s actually going to do the installation.
        • If you can't install in C:

          And I mean C:, in the Program Files folder where Windows wants it to be, then you really should get a bigger hard drive.

          I would personally never think of installing software on any other partition than the C:, not even on mounted folders.

          Anyway, easier than mounting folders (a rather unadvertised feature; unless you're a geek you've never heard of it), I much prefer going through folder sharing and accessing through \\PC-NAME\SHARE-NAME. This way you can have shortcuts, playlists and stuff like that that works not only on the local machine but on all machines across the network.

          By the way, is there a way to make Linux understand "\\PC-NAME\SHARE-NAME" directly instead of it having to use the smb:// prefix? That way I could share my playlists between Windows and Linux machines I have at home (or the other way around and make Windows understand the smb:// prefix).
          • Re: If you can't install in C:

            I rest my case.
    • You've been under a rock

      for 12 years?
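
A couple of the comments above concern volumes mounted under a directory and installers that check free space on C: rather than on the mount target. As a hedged sketch of the right behavior (the function name free_bytes_at is invented for this example; Python's shutil.disk_usage reports on whatever filesystem contains the path it is given):

```python
import os
import shutil

def free_bytes_at(path: str) -> int:
    """Free space on the filesystem that actually contains `path`.

    Checking the install target itself, rather than the root of the
    drive letter, gives the right answer when a separate volume is
    mounted under a directory of another drive."""
    path = os.path.abspath(path)
    # For a directory that does not exist yet, walk up to the
    # nearest existing ancestor - that is where the files would land.
    while not os.path.exists(path):
        parent = os.path.dirname(path)
        if parent == path:
            break
        path = parent
    return shutil.disk_usage(path).free
```

The installer failure described in the thread is exactly the bug this avoids: measuring the root of C: instead of the directory the files will actually be written to.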