The 14th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2009) and the 15th International Symposium on High-Performance Computer Architecture (HPCA-15) opened Monday in Raleigh, NC. Reflecting the growing interdependence of software and hardware, the two conferences are being held at the same time in the same place, and attendees are encouraged to mingle and attend each other's sessions.
Several colleagues and I are attending PPoPP because we're interested in applying parallel techniques to enterprise programming. As I've written before, the days of sequential programming as they (still) teach it in school are numbered. You may be using a dual-core CPU right now, and you may be reading this with the help of a graphics card containing 100 or more special-purpose processors. That's just the tip of the iceberg.
In customer engagements, we're seeing that 4- and 8-core servers and workstations are common, with some big companies using 144-core SMP server machines or clusters containing hundreds if not thousands of cores. In the coming years those numbers are only going to grow, with 10K-, 100K-, and eventually 1-million-core computers on the horizon. These two conferences are part of a continuing attempt by industry and academia to wrap our heads around this new reality.
Day 1 of PPoPP 2009 started with a keynote presentation by Guy Blelloch called "Parallel Thinking," which really set the tone for the whole day. His focus was on what we're teaching new developers and architects in school. Right now, the focus is on sequential von Neumann designs, which is a bit like teaching your kids how to make vinyl records in an era of CDs and digital downloads. Instead, he proposes teaching parallel programming first and foremost, with sequential programming as a special case of it. This doesn't mean it has to be hard; Dr. Blelloch pointed out that introductory algorithms like Quicksort are actually easier to explain if you think of them in terms of high-level parallel and recursive operations.
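To illustrate the Quicksort point, here is a minimal sketch (my own, not code from the talk): expressed as a filter-style partition plus two recursive calls, the algorithm's parallelism is plain to see, because the two sub-sorts are completely independent of each other.

```python
# Quicksort expressed as high-level operations: partition the input
# around a pivot, then recursively sort each side. The two recursive
# calls touch disjoint data, so a parallel runtime could execute them
# concurrently (e.g. as two spawned tasks) -- sequential execution is
# just one particular schedule of the same algorithm.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot = xs[0]
    # Filter-style partition: conceptually a parallel operation over xs.
    less = [x for x in xs[1:] if x < pivot]
    greater = [x for x in xs[1:] if x >= pivot]
    # These two calls are independent and could run in parallel.
    return quicksort(less) + [pivot] + quicksort(greater)

print(quicksort([5, 2, 8, 1, 9, 3]))  # → [1, 2, 3, 5, 8, 9]
```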
Other papers presented during the day included:
- How Much Parallelism is There in Irregular Applications?, by Milind Kulkarni, Martin Burtscher, R. Inkulu, Keshav Pingali, Calin Cascaval
- An Efficient Transactional Memory Algorithm for Computing Minimum Spanning Forest of Sparse Graphs, by Seunghwa Kang, David Bader
- Atomic Quake: Using Transactional Memory in an Interactive Multiplayer Game Server, by Ferad Zyulkyarov, Vladimir Gajinov, Osman Unsal, Adrian Cristal, Eduard Ayguade, Tim Harris, Mateo Valero
- Application-Aware Management of Parallel Simulation Collections, by Siu Man Yau, Kostadin Damevski, Vijay Karamcheti, Steven G. Parker, Denis Zorin
- Idempotent Work Stealing, by Maged Michael, Martin Vechev, Vijay Saraswat
- Backtracking-based Load Balancing, by Tasuku Hiraishi, Masahiro Yasugi, Seiji Umatani, Taiichi Yuasa
- Efficient and Scalable Multiprocessor Fair Scheduling Using Distributed Weighted Round-Robin, by Tong Li, Dan Baumberger, Scott Hahn
- Mapping Parallelism to Multi-cores: A Machine Learning Based Approach, by Zheng Wang, Michael F.P. O'Boyle
- Serialization Sets: A Dynamic Dependence-Based Parallel Execution Model, by Matthew Allen, Srinath Sridharan, Gurindar Sohi
Unfortunately, several of the authors of the papers presented could not give their talks in person because the US government was keeping them out of the country due to visa issues. One engineer who was denied entry related that his experience "had made the decision whether or not to stay in Europe easy".
For me, the highlight of the day was the panel at the end, called Opportunities Beyond Single-Core Microprocessors, moderated by Mark Hill. Panelists from hardware and software disciplines each had 4 minutes to present their views, and then the rest of the session was Q&A from the audience. It's clear that everybody in the room recognized the challenges ahead, but there were many different ideas about what to do about them. They will be solved, though, because they have to be. As one panelist put it, "We'll know we've succeeded when we don't call them parallel computers anymore, we just call them computers".