The second day of the PPoPP 2009 conference started with a controversial keynote by Yale Patt from the University of Texas at Austin. It was so controversial, for me at least, that I decided to wait a few days to calm down before writing about it. He had three main points: multi-core is not the holy grail, most programmers are stupid, and there should be lots of low-level interfaces for the non-stupid ones to use.
Ok, he didn't actually use the word "stupid" but he did suggest that the purpose of higher level abstractions and parallel languages is to protect dumb programmers from themselves. He wants to see most of the effort going into direct programming hooks exposed for each layer of the hardware and software, and he believes that everyone who is "capable" of programming at that level should be doing so.
After the session I went up to Dr. Patt to challenge some of his assertions...
I argued that enterprise developers aren't stupid just because we want to use a few high level languages -- that it's just a matter of practicality. We have to target the hardware our customers have, I said, and we can't afford to write something 40 different ways to handle 40 different hardware configurations. His response: "I don't care about that".
How can you not care about the real world problems that millions of people face? About the software that runs our banking, retail, health care, government, and other important sectors? As I looked around the conference, I noticed there was almost no ISV presence there at all. Lots of universities, a couple of research labs, a couple of national labs, and that was about it. Where was the practicality, the applications, the pragmatism? As a long-time ACM member I was expecting more.
Despite all this, Patt did have some interesting things to say. Is thinking in parallel hard? "Perhaps thinking is hard", he responded. He's also a big fan of trying to increase IPC (instructions per clock) and keeping a few big cores around for sequential or critical sections. Beefy, special purpose units should be powered up and used when needed, then kept on the sidelines and turned off when not needed.
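Patt's case for keeping a few big cores follows from Amdahl's law: the serial fraction of a program caps its speedup no matter how many small cores you throw at it, so speeding up the sequential parts pays off disproportionately. A quick back-of-the-envelope sketch (the fractions here are made-up illustrative numbers, not figures from the talk):

```python
def speedup(serial_fraction, n_cores):
    """Amdahl's law: speedup = 1 / (serial + parallel / n)."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

# With 10% serial code, even 64 cores deliver under 9x:
print(round(speedup(0.10, 64), 2))   # 8.77
# Halving the serial fraction (e.g. via one beefy core) helps
# more than doubling the core count to 128:
print(round(speedup(0.05, 64), 2))   # 15.42
print(round(speedup(0.10, 128), 2))  # 9.34
```

This is the standard argument for heterogeneous designs like the ones Patt described: a few large cores to chew through serial and critical sections, many small ones for the parallel bulk.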
After the keynote there was a session on accelerator software chaired by Vivek Sarkar. Papers included:
- OpenMP to GPGPU: A Compiler Framework for Automatic Translation and Optimization, Seyong Lee, Seung-Jai Min, Rudolf Eigenmann
- Comparability Graph Coloring for Optimizing Utilization of Stream Register Files in Stream Processors, Xuejun Yang, Li Wang, Jingling Xue, Yu Deng, Ying Zhang
- Solving dense linear systems on platforms with multiple hardware accelerators, Gregorio Quintana-Orti, Francisco D. Igual, Enrique S. Quintana-Orti, Robert A. van de Geijn
- A Comparison of Programming Models for Multiprocessors with Explicitly Managed Memory Hierarchies, Scott Schneider, Jae-Seung Yeom, Benjamin Rose, John C. Linford, Adrian Sandu, Dimitrios S. Nikolopoulos
This was followed by a session on Atomicity and Races, chaired by Tatiana Shpeisman. It was all about software transactional memory, which seemed a little too far "out there" even for me, so I went next door to the HPCA-15 sessions on new memory architectures. Lots of interesting work is being done by Intel, IBM, and other names big and small to boost memory capacity, decrease latency, make non-uniform cache architecture (NUCA) work better, and reduce power with technologies like MRAM, PCM, cache migration, 3D stacking, and on-chip tree networks.
The day ended with a joint PPoPP/HPCA panel called "Industrial Perspectives". Unfortunately this turned out to be more of a marketing presentation by NVIDIA, Sun, IBM, and Microsoft than anything else. I much preferred the format of the first day's panel, where most of the time was spent answering thought-provoking questions from the audience.