How will Microsoft spend its multicore millions?

In March, Microsoft's External Research Team put out a request for proposal (RFP) for three-year research projects in multicore computing. On July 28, the opening day of its annual Research Faculty Summit, Microsoft announced how and where it will be spending its grant money.


Seven academic research projects will share the $1.5 million Microsoft allocated for the Safe and Scalable Multicore Computing RFP. According to Microsoft, this RFP is designed to "stimulate and enable bold, substantial research in multicore software that rethinks the relationships among computer architecture, operating systems, runtimes, compilers and applications."

Microsoft, like many tech leaders, is investing substantial time and money of its own to ease the transition to multicore/manycore computing through various parallel-processing advances. At this week's Research Faculty Summit, Microsoft's Parallel Computing Platform team is set to present some of this work, including the Parallel Extensions to the .Net Framework and Parallel Language Integrated Query (PLINQ). Representatives from the Microsoft-Intel Universal Parallel Computing Research Centers also are set to present their research agendas at the conference.
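PLINQ's core idea, running a declarative filter-and-transform query over a collection on multiple cores, can be sketched roughly in Python. The function name and chunking strategy below are my own illustration, not Microsoft's API:

```python
# Rough sketch of a data-parallel query (not PLINQ itself): split the input
# into chunks, run the filter/transform pipeline on each chunk in parallel,
# then concatenate the results in order.
from concurrent.futures import ThreadPoolExecutor

def parallel_query(data, predicate, transform, workers=4):
    """Apply transform to every element passing predicate, chunk by chunk."""
    chunk = max(1, len(data) // workers)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves chunk order, so results come back deterministic.
        parts = pool.map(
            lambda c: [transform(x) for x in c if predicate(x)], chunks)
    return [y for part in parts for y in part]

# Example: squares of the even numbers in 0..9
result = parallel_query(list(range(10)),
                        lambda x: x % 2 == 0,
                        lambda x: x * x)
```

The real PLINQ operates on .NET enumerables and handles partitioning and merging automatically; this sketch only shows the shape of the idea.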

Where is Microsoft investing outside the Redmond walls on the multicore front? Here are the projects that are being funded under the aforementioned multicore RFP:

Sensible Transactional Memory via Dynamic Public or Private Memory, Dan Grossman, University of Washington: "Integrating transactions into the design and implementation of modern programming languages is surprisingly difficult. The broad goal of this research is to remove such difficulties via work in language semantics, compilers, runtime systems and performance evaluation."
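The programming model such transactional-memory work targets can be illustrated with a toy sketch: code inside an atomic block should appear to execute all at once, with no interleaving visible to other threads. The names below are my own, not the research prototype's API, and a real STM uses conflict detection and retry rather than the single global lock faked here:

```python
# Toy illustration of the transactional programming model only: one global
# re-entrant lock stands in for real software transactional memory.
import threading
from contextlib import contextmanager

_global_txn_lock = threading.RLock()

@contextmanager
def atomic():
    """Pretend-transaction: serialize all atomic blocks behind one lock."""
    with _global_txn_lock:
        yield

balance = {"a": 100, "b": 0}

def transfer(src, dst, amount):
    # Inside atomic(), other threads never observe a half-done transfer.
    with atomic():
        balance[src] -= amount
        balance[dst] += amount

threads = [threading.Thread(target=transfer, args=("a", "b", 10))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Money is conserved: balance is {"a": 0, "b": 100}
```

The difficulty the Washington project describes arises precisely because real implementations cannot afford a single global lock, and must instead make speculative, concurrent transactions behave as if they had one.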

Supporting Scalable Multicore Systems Through Runtime Adaptation, Kim Hazelwood, University of Virginia: "The Paradox Compiler Project aims to develop the means to build scalable software that executes efficiently on multicore and manycore systems via a unique combination of static analyses and compiler-inserted hints and speculation, combined with dynamic, runtime adaptation. This research will focus on the Runtime Adaptation portion of the Paradox system."

Language and Runtime Support for Safe and Scalable Programs, Antony Hosking, Jan Vitek, Suresh Jagannathan and Ananth Grama, Purdue University: "Expressing and managing concurrency at each layer of the software stack, with support across layers, as necessary, to reduce programmer effort in developing safe applications while ensuring scalable performance is a critical challenge. This team will develop novel constructs that fundamentally enhance the performance and programmability of applications using transaction-based approaches."

Geospatial-based Resource Modeling and Management in Multi- and Manycore Era, Tao Li, University of Florida: "To ensure that multicore performance will scale with the increasing number of cores, innovative processor architectures (e.g., distributed shared caches, on-chip networks) are increasingly being deployed in the hardware design. This team will explore novel techniques for geospatial-based on-chip resource utilization analysis, management and optimization."

Reliable and Efficient Concurrent Object-Oriented Programs (RECOOP), Bertrand Meyer, ETH Zurich, Switzerland: "The goal of this project, starting with the simple concurrent object-oriented programming (SCOOP) model of concurrent computation, is to develop a practical formal semantics and proof mechanism, enabling programmers to reason abstractly about concurrent programs and allowing proofs of formal properties of these programs."

Runtime Packaging of Fine-Grained Parallelism and Locality, David Penry, Brigham Young University: "Scalable multicore environments will require the exploitation of fine-grained parallelism to achieve superior performance.... Current packaging algorithms suffer from a number of limitations. These researchers will develop new packaging algorithms that can take into account both parallelism and locality, are aware of critical sections, can be rerun as the runtime environment changes, can incorporate runtime feedback, and are highly scalable."

Multicore-Optimal Divide-and-Conquer Programming, Paul Hudak, Yale University: "Divide and conquer is a natural, expressive and efficient model for specifying parallel algorithms. This team cast divide and conquer as an algebraic functional form, called DC, much like the more popular map, reduce and scan functional forms. As such, DC subsumes the more popular forms, and its modularity permits application to a variety of problems and architectural details."
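A generic divide-and-conquer combinator in the spirit of the DC form described above can be sketched as follows. The signature and mergesort instance are my own illustration, not the Yale team's algebraic definition:

```python
# Sketch of a divide-and-conquer higher-order function: divide a problem,
# recursively solve the pieces, then combine. In a parallel runtime the
# recursive calls could run on separate cores; here they run sequentially.
import heapq

def dc(is_base, solve_base, divide, combine, problem):
    """Generic divide-and-conquer skeleton."""
    if is_base(problem):
        return solve_base(problem)
    return combine([dc(is_base, solve_base, divide, combine, p)
                    for p in divide(problem)])

# Mergesort expressed as one instance of the skeleton:
def mergesort(xs):
    return dc(
        is_base=lambda p: len(p) <= 1,
        solve_base=lambda p: list(p),
        divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
        combine=lambda parts: list(heapq.merge(*parts)),
        problem=xs,
    )

sorted_list = mergesort([5, 3, 8, 1, 9, 2])
```

Because map and reduce are special cases of this pattern (divide into singletons, combine by concatenation or folding), a DC form that the runtime can schedule across cores subsumes them, as the project description notes.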
