When Michael Tomkins started as chief technology officer of Fox Sports at the end of 2010, the company's disk hardware hadn't been refreshed in eight years.
A typical live, high-definition broadcast produced and broadcast by Fox Sports requires 50GB of available storage per hour. Because there are a number of different sports feeds, and the footage needs to be transcoded to create different versions (for mobile platforms, for example), this can mean that up to 30TB of storage will be needed each weekend.
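The back-of-envelope arithmetic behind that weekend figure can be sketched as follows. Only the 50GB/hour rate comes from the article; the feed count, live hours per feed and number of transcoded versions are illustrative assumptions chosen to reproduce the 30TB total, not figures from Fox Sports.

```python
# Rough weekend storage estimate for live HD sport.
GB_PER_HOUR = 50       # raw HD ingest rate (from the article)
FEEDS = 10             # assumed number of concurrent sports feeds
HOURS_PER_FEED = 15    # assumed live hours per feed over a weekend
VERSIONS = 4           # assumed transcoded versions (web, mobile, etc.)

total_gb = GB_PER_HOUR * FEEDS * HOURS_PER_FEED * VERSIONS
total_tb = total_gb / 1000  # decimal units, as storage vendors quote them

print(f"Weekend storage: {total_tb:.0f}TB")  # → Weekend storage: 30TB
```

With these assumed inputs the estimate lands on the article's 30TB; changing any one of them scales the total linearly, which is why a busy sporting weekend can swing the requirement so sharply.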
Fox Sports' existing spinning disk was years old, Tomkins said, and didn't have the capacity to deal with the company's requirements. This meant that as soon as content was transferred to the storage system ("ingested", in broadcast speak), it had to be moved to an LTO (linear tape-open) robot, and retrieved when needed. But moving the data between this archive and online production systems created an inefficient layer of network traffic. The timeframe from ingesting a football game to having footage available on the website was 38 minutes, which was too long, in Tomkins' opinion.
Sports are all about a real-time experience, Tomkins said. Users who are out and about, but still connected via tablets or phones, want to see game highlights while the game is still on.
"Nobody wants to watch an old game," he said. "If you're getting [the highlights] in 38 minutes, the game's probably over."
He also wanted it to be easy for the company's staff to be able to access historical footage for their coverage, and to reduce the number of system failures, which had been rising due to the age of the equipment.
(Screenshot by ZDNet Australia)
To tackle these problems, Tomkins started a refresh program.
"I formed a thing named Project Kate to revitalise the digital core," he said.
He came up with a series of 40 to 50 workflows that he thought he could optimise, and took those to Vizrt, which provided the Viz Ardome storage core that the company was using. However, Tomkins had the impression that Vizrt wasn't keen on changing the way it did things, so he looked at IBM's General Parallel File System (GPFS), which he considered to be a strong, but very complex, product. He also looked at SGI's and Hitachi Data Systems' storage systems.
In the end, however, he was won over by EMC's Isilon because of its simplicity. He also asked Grass Valley, a systems integrator, to look into what the company should purchase, and it arrived at the same conclusion.
"The beauty of the Isilon is it's like Lego blocks," Tomkins said. Adding capacity to a traditional storage area network (SAN) is difficult, but expanding the Isilon simply means adding another node.
Once the decision was made, he placed initial orders for Isilon NL and X400 nodes during Christmas last year.
At first, when Tomkins implemented the new equipment, he experienced some throughput issues with the EVS system that the company was using to create the video files. However, Isilon told Tomkins that the throughput problems were a Windows issue, and gave him some options to help with the problem until Tomkins' team was able to fix it. This impressed Tomkins, because he had expected the vendor to tell him that it wasn't an Isilon problem, and to leave the Fox Sports team to grapple with the issue. Aside from this hiccup, implementation went smoothly, Tomkins said: the time from ingesting a game to its being available on digital sites was cut from 38 minutes to three.
With the new Isilon nodes, Fox Sports currently has about 1PB of disk storage. Tomkins would like to increase this to 2.4PB, because then he would be able to store a whole year's worth of games for each sport. At the beginning of the season, commentators would be able to access content from last season's games, and, as the season progressed, they would be able to use highlights from previous games.
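As a rough check of what the 2.4PB target buys at the article's 50GB/hour ingest rate, the capacity can be converted to hours of raw footage. This is a sketch only: it assumes decimal units throughout and ignores transcoded copies, redundancy and filesystem overhead, all of which would reduce the usable figure.

```python
# Hours of raw HD footage that fit in 2.4PB at 50GB/hour.
# Decimal units assumed; transcoded copies and overhead ignored.
GB_PER_HOUR = 50
capacity_gb = 2.4 * 1000 * 1000   # 2.4PB expressed in GB

hours = capacity_gb / GB_PER_HOUR
print(f"{hours:,.0f} hours of raw footage")  # → 48,000 hours of raw footage
```

Spread across many sports, multiple feeds and several transcoded versions per game, that headroom is what would let Fox Sports keep a full season per sport online rather than shuttling it to tape.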
Future storage strategy
Fox Sports is also opening a new site at St Leonards, in Sydney, giving Tomkins a chance to lay out the company's storage network exactly as he wants. Fox Sports had been using HP, SGI and IBM storage for different sections of the business. The Isilon implementation described above replaced the SGI gear and some of the HP gear. Tomkins had to decide what gear he wanted for the new site.
He said that if he decided to stick with HP, he would have to source each component of his storage requirements separately: the blade servers, the switches, the storage area network. It was too much work, especially considering what the engineers would have to do to set up the components when they arrived. For this reason, he decided to go with VCE's VBlock.
VCE is a partnership between VMware, EMC and Cisco. Together, they have created VBlock, a platform comprising EMC storage, Cisco blades and switches, and VMware vSphere for virtualisation. Customers buy the platform as a whole, and don't need to worry about figuring out requirements for each component, or fitting them together and configuring them once purchased.
"It's basically tick and flick," Tomkins said, adding that not having to worry about building storage infrastructure himself meant a big change for his team of 40.
He experienced some resistance from his team when adopting the plug-and-play product, he said, with his engineers worried that they would no longer be experts in their field.
"If it's got lots of flashing lights and they get to assemble it, engineers think it's great," he said.
"We stay with systems ... partially because it's what we know, and partially because we're worried about our jobs."
Tomkins also said that he had to change his employees' mindsets: to get them looking at the system from end to end to find improvements, instead of just keeping an eye on the one device that they are in charge of.
"I'd rather have my engineers working on how they can get picture better to screen," he said. "We're about moving pictures and customer experience."