I'm taking a couple of weeks off before the busiest part of Microsoft's 2012 kicks into full gear. But never fear: the Microsoft watching will go on while I'm gone. I've asked a few illustrious members of the Microsoft community to share their insights via guest posts on a variety of topics -- from Windows Phone to Hyper-V. Today's entry is all about OData and is authored by Chris Woodruff.

The secret to data in the future will lie not in the repositories that contain the data (like Microsoft's SQL Server, Oracle, or even the popular NoSQL databases) but in the way we transport that data back and forth between applications, online services, and the cloud.
With the newest version of the Open Data Protocol (OData), Microsoft is bringing a richer data experience to developers, information workers, and data journalists who consume and analyze data from any source published with the OData protocol. The goal is not to hide your data and keep it locked away, but to curate the data you provide to your partners, customers, and the general public. A curated data experience generates more revenue and drives wider adoption of your data.
To gain a clearer picture of how this works, it's key to understand what the Open Data Protocol is and where it originated. There's more information in my 31 Days of OData blog series, but the official definition of the Open Data Protocol (OData) is a web protocol for querying and updating data that provides a way to unlock your data and free it from the silos that exist in applications today. In practice, that means we can select, save, delete, and update data from our applications just as we have done against relational SQL databases for years. The benefit is the ease of setting up OData feeds that can be consumed through the libraries Microsoft has created for developers.
An additional benefit is that OData is a standard: the metadata each feed exposes gives consumers a clear understanding of the data. Behind the scenes, we send OData requests to the web server hosting the feed as plain HTTP calls that follow the protocol's conventions.
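As a sketch of what those HTTP calls look like, the snippet below builds OData query URLs from the protocol's system query options ($filter, $orderby, $top) and points at the $metadata document that describes a feed's entity model. The service root, entity set, and property names here are hypothetical placeholders, not a real feed.

```python
from urllib.parse import quote, urlencode

# Hypothetical OData service root -- substitute a real feed's URL.
SERVICE_ROOT = "https://example.com/odata/Stats.svc"

def odata_url(entity_set, **options):
    """Build an OData query URL from system query options.

    Keyword names map onto OData options: filter -> $filter, etc.
    Spaces are percent-encoded; '$' is kept literal for readability.
    """
    query = urlencode({f"${k}": v for k, v in options.items()},
                      safe="$", quote_via=quote)
    return f"{SERVICE_ROOT}/{entity_set}" + (f"?{query}" if query else "")

# Top 5 rows by home runs for the 2010 season (hypothetical properties).
url = odata_url("Batting", filter="yearID eq 2010", orderby="HR desc", top=5)

# Every feed also exposes a metadata document describing its entity model.
metadata_url = f"{SERVICE_ROOT}/$metadata"
```

Issuing an HTTP GET against such a URL returns the matching entities as Atom or JSON, depending on the Accept header the client sends.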
OData started back in 2007 at the second Microsoft Mix conference, with the announcement of an incubation project codenamed Astoria. The purpose of Project Astoria was to find a way to transport data across HTTP in order to architect and develop web-based solutions more efficiently. Only after the project had time to incubate did the team see patterns emerging that led to the vision of the Open Data Protocol. The next big milestone was the 2010 Microsoft Mix conference, where OData was officially announced to the world as a new way to handle data. The rest is history.
Recently, a third version of the OData protocol was announced. It allows developers to produce and consume data not only in their own desktop applications, web sites, and mobile apps, but also to open their data up for solutions they may never have anticipated when creating the OData service, better known as a feed. The new version includes a number of feature additions for both the server side, which hosts OData feeds, and the client side, which developers use to consume the data in their solutions.
Here are just a few of the new features:
- Vocabularies that convey more meaning and extra information to enable richer client experiences.
- Actions that provide a way to inject behaviors into an otherwise data-centric model without confusing the data aspects of the model.
- Geospatial data support, with 16 new spatial primitives and corresponding operations.
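As one hypothetical illustration of the spatial support, the filter below uses OData v3's geo.distance canonical function with a geography point literal; the Stadiums entity set and Location property are invented for the example.

```python
from urllib.parse import quote

# Hypothetical service root and entity set.
SERVICE_ROOT = "https://example.com/odata/Stats.svc"

# A geography literal: well-known-text POINT (longitude latitude),
# tagged with SRID 4326 (WGS 84).
point = "geography'SRID=4326;POINT(-122.33 47.61)'"

# Stadiums within 50 km (50,000 m) of the point.
filter_expr = f"geo.distance(Location, {point}) lt 50000.0"
url = f"{SERVICE_ROOT}/Stadiums?$filter=" + quote(filter_expr, safe="$'(),;=")
```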
An example is my own Baseball Statistics OData feed, located here and publicly open to anyone who wants to consume the data. The feed contains all 138 years of Major League Baseball statistics, including team, player, and post-season stats. It will be updated to OData v3 very soon and will use many of the newly announced features.
There are many libraries to consume and understand OData for developers to use in their solutions. You can find many of the libraries for your mobile, web and CMS solutions at the OData home site here.
What about the business aspects of OData for organizations that have valuable data they wish to share and generate revenue from? By making data easy to consume and understand, organizations can let their customers and partners (via the developers who build solutions using one or more of the available OData libraries) leverage the value of the curated data the organization owns. They can either host the data themselves and control the consumer experience and the resulting revenue, or set up their OData feed inside Microsoft's Windows Azure Marketplace and have Microsoft do the heavy lifting: offering subscriptions to their data and collecting the subscription fees.
Think of the Windows Azure DataMarket as an app store for data. It's a great place to generate that needed revenue without having to build any infrastructure beyond the OData feed that surfaces your proprietary data.
In the end, maintaining valuable data should not mean keeping databases hidden behind corporate walls. The data should be curated, opened up for consumption, and even allowed to generate revenue for the organization. If you are a developer looking either to produce a way to get data into your applications or to consume the rich data you see others using, dig into OData; you will find it is a great way to become an expert in data experience. And if you are a manager looking for new ways to get your data to the public, whether for free or to generate additional revenue for your company, explore the exciting world of OData. You just might find some unexpected benefits waiting for you.

Chris Woodruff (or Woody, as he is commonly known) holds a degree in Computer Science from Michigan State University's College of Engineering. Woody has been developing and architecting software solutions for almost 15 years and has worked on many different platforms and tools. As a speaker and podcaster, he has covered a variety of topics, including database design and open source. He is a Microsoft Most Valuable Professional (MVP) in Data Platform Development and works at Perficient, Inc. Woody is the co-host of the popular podcast "Deep Fried Bytes" and blogs at www.chriswoodruff.com.