Architecture, Compilers, Parallel Computing and Systems PhD Qualifying Examination | Siebel School of Computing and Data Science

Last updated: September 17, 2021
Previous ACPC&S Qual Website

All students taking the qualifying exam in this area should subscribe to the architecture@cs.illinois.edu mailing list to ensure they receive all qual-related announcements from the area.

This area has three different tracks: architecture, compilers, and parallel programming. Each track has a core and a specialization reading list.


Students in this area may also choose one of the tracks in the Programming Languages, Formal Systems and Software Engineering area for a non-specialization track.

Preparation for the exam

You will be asked to:

  • Read the following literature: 1) the core and specialization lists of your chosen track, and 2) the core lists for two additional tracks (one of these tracks may be from the Programming Languages, Formal Methods, and Software Engineering area).
  • Prepare a presentation of one paper (chosen by area faculty) from the core or specialization list of your chosen track. You can find the chosen paper on the corresponding track page.

Papers chosen for Qual presentations

Architecture track: Norman P. Jouppi, et al. 2017. In-Datacenter Performance Analysis of a Tensor Processing Unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA '17). 1–12.

Compilers track: "Trace-based Just-in-Time Type Specialization for Dynamic Languages", In Proceedings of the ACM SIGPLAN 2009 Conference on Programming Language Design and Implementation (PLDI), June 2009.

Parallel Programming track: Milind Kulkarni, Martin Burtscher, Rajasekhar Inkulu, Keshav Pingali and Calin Cascaval. "How Much Parallelism is There in Irregular Applications?" Principles and Practices of Parallel Programming (PPoPP), February 2009.

Exam process

The exam is oral and lasts 90 minutes. You will begin by presenting the paper that was chosen. The faculty on your committee will ask you about the entire reading list, not necessarily specifically about the paper you present.

Architecture Track

PhD qualifying exam reading list

CORE (5):

  1. B. Sinharoy, R. Kalla, W. J. Starke, H. Q. Le, R. Cargnoni, J. A. Van Norstrand, B. J. Ronchetti, J. Stuecheli, J. Leenstra, G. L. Guthrie, D. Q. Nguyen, B. Blaner, C. F. Marino, E. Retter, and P. Williams. 2011. IBM POWER7 multicore server processor. IBM J. Res. Dev. 55, 3 (May 2011), 191-219.
  2. Brightwell, R.; Pedretti, K.T.; Underwood, K.D.; Hudson, T. "SeaStar Interconnect: Balanced Bandwidth for Scalable Performance," IEEE Micro, vol. 26, no. 3, pp. 41-57, May-June 2006.
  3. Stephen W. Keckler, William J. Dally, Brucek Khailany, Michael Garland, David Glasco. GPUs and the future of parallel computing. IEEE Micro, Volume 31, Issue 5, Pages 7-17.
  4. Norman P. Jouppi, et al. 2017. In-Datacenter Performance Analysis of a Tensor Processing Unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA '17). 1–12.
  5. Daniel Lenoski, James Laudon, Kourosh Gharachorloo, Anoop Gupta, John Hennessy. The directory-based cache coherence protocol for the DASH multiprocessor. International Symposium on Computer Architecture (ISCA), 1990.

SPECIALIZATION (15):

  1. Ikuo Magaki, Moein Khazraee, Luis Vega Gutierrez, and Michael Bedford Taylor. "ASIC Clouds: Specializing the Datacenter," ISCA '16.
  2. Yu-Hsin Chen, Joel Emer, and Vivienne Sze. "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks," ISCA '16.
  3. Arthur Perais, André Seznec. "EOLE: Paving the way for an effective implementation of value prediction," 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA), Minneapolis, MN, 2014, pp. 481-492.
  4. J. Gregory Steffan and Todd C. Mowry. The Potential for Using Thread-Level Data Speculation to Facilitate Automatic Parallelization. In Proceedings of the Fourth International Symposium on High-Performance Computer Architecture, pages 2-13, February 1998.
  5. Ling Ren, Xiangyao Yu, Christopher W. Fletcher, Marten van Dijk, Srinivas Devadas. "Design Space Exploration and Optimization of Path Oblivious RAM in Secure Processors," ISCA '13.
  6. Hadi Esmaeilzadeh, Emily Blem, Renée St. Amant, Karthikeyan Sankaralingam, Doug Burger. "Dark Silicon and the End of Multicore Scaling," ISCA '11.
  7. Nathan Binkert, Al Davis, Norman P. Jouppi, Moray McLaren, Naveen Muralimanohar, Robert Schreiber, and Jung Ho Ahn. 2011. "The role of optics in future high radix switch design." In Proceedings of the 38th Annual International Symposium on Computer Architecture (ISCA '11). ACM, New York, NY, USA, 437-448.
  8. Mohit Tiwari, Hassan M. G. Wassel, Bita Mazloom, Shashidhar Mysore, Frederic T. Chong, Timothy Sherwood. "Complete Information Flow Tracking from the Gates Up," ASPLOS '09.
  9. Kevin Lim, Parthasarathy Ranganathan, Jichuan Chang, Chandrakant Patel, Trevor Mudge, Steven Reinhardt. "Understanding and Designing New Server Architectures for Emerging Warehouse-Computing Environments," ISCA '08.
  10. Gabriel H. Loh. "3D-Stacked Memory Architectures for Multi-core Processors," ISCA '08.
  11. Luis Ceze, James M. Tuck, Pablo Montesinos, and Josep Torrellas. "BulkSC: Bulk Enforcement of Sequential Consistency," ISCA '07.
  12. Shekhar Borkar. "Designing reliable systems from unreliable components: the challenges of transistor variability and degradation," IEEE Micro '05.
  13. S. Adve and K. Gharachorloo. "Shared Memory Consistency Models: A Tutorial," IEEE Computer, 1996.
  14. Moinuddin Qureshi, Viji Srinivasan, and Jude A. Rivers. "Scalable High-Performance Main Memory System Using Phase-Change Memory Technology," International Symposium on Computer Architecture (ISCA), 2009.
  15. D. Ernst, N. S. Kim, S. Das, S. Pant, R. Rao, T. Pham, C. Zeisler, D. Blaauw, T. Austin, K. Flautner, and T. Mudge. "Razor: A low-power pipeline based on circuit-level timing speculation," In International Symposium on Microarchitecture, December 2003.

Compilers Track

PhD qualifying exam

Below is the reading list for the compiler track. Students taking the Breadth exam in Compilers are required to take CS 426 (or equivalent). Students taking the Depth exam in Compilers are required to take CS 426 (or equivalent) and CS 526.

All ACM conference and journal papers are available free to UIUC students and staff from the ACM Digital Library. To access the ACM Digital Library from an off-campus computer, please use the University's library proxy service:

(http://dl.acm.org.proxy2.library.illinois.edu/dl.cfm?coll=portal&dl=ACM)

For some of the topics, we recommend reading one or several book chapters instead of a paper; for others, we suggest that in addition to the paper in this list you refer to a book chapter or section, where you may find a clearer explanation.

Below is the list of books that you might need.

ON RESERVE at GRAINGER LIBRARY: The Dragon Book and Keith Cooper's book "Engineering a Compiler" are on reserve in Grainger Library for CS 426 and CS 526.

Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman. "Compilers: Principles, Techniques, and Tools". (The Dragon Book!) Published by Pearson, Addison-Wesley. Second Edition, 2007.

Keith Cooper and Linda Torczon. "Engineering a Compiler". Published by Morgan Kaufmann, Second Edition, 2011.

 

CORE
  1. Aho, Lam, Sethi and Ullman – Chapters 8.4, 9
  2. Michael E. Wolf and Monica S. Lam. “A Data Locality Optimizing Algorithm,” ACM SIGPLAN Conference on Programming Languages Design and Implementation, June 1991, pp. 30-44.
  3. A. Milanova, A. Rountev, and B. Ryder. “Parameterized Object Sensitivity for Points-To Analysis for Java”, ACM Transactions on Software Engineering Methodology, 14(1), pp. 1–41, 2005.
  4. B. Ramakrishna Rau, Michael S. Schlansker, P. P. Tirumalai. "Code Generation Schema for Modulo Scheduled Loops", In Proceedings of the 25th Annual International Symposium on Microarchitecture (MICRO 25), 1992.
SPECIALIZATION
Internal Organization
  1. Marc Auslander and Martin Hopkins. “An Overview of the PL.8 Compiler,” Proceedings of the 1982 SIGPLAN Symposium on Compiler Construction, Boston, MA pp. 22-31, 1982.
  2. Chris Lattner and Vikram Adve. "LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation", In Proceedings of the International Symposium on Code Generation and Optimization (CGO '04), 2004.
Data Flow Analysis
  1. J. B. Kam and J. D. Ullman. "Global Data Flow Analysis and Iterative Algorithms", Journal of the ACM, 23(1), pp. 158-171, Jan. 1976. This paper covers basic material in the area. For additional clarification of this material, refer to the Dragon Book: Aho, Lam, Sethi and Ullman – Chapter 9 (see reserve list above)
  2. Lazy Code Motion. Found in the Dragon Book, Section 9.5, or Keith Cooper and Linda Torczon. "Engineering a Compiler", Section 10.3.2, Published by Morgan Kaufmann, Second Edition, 2011. (see reserve list above)
SSA
  1. Ron Cytron, Jeanne Ferrante, Barry K. Rosen, Mark N. Wegman, and F. Kenneth Zadeck. "Efficiently Computing Static Single Assignment Form and the Control Dependence Graph", ACM Transactions on Programming Languages and Systems, 13 (4), pp. 451-490, October 1991.
Interprocedural Analysis
  1. Thomas Reps. "Program Analysis via Graph Reachability", invited paper, In Proceedings of the 1997 International Symposium on Logic Programming, Oct. 1997.
Pointer Analysis
  1. Ben Hardekopf, Calvin Lin. "The Ant and the Grasshopper: Fast and Accurate Pointer Analysis for Millions of Lines of Code", In Proceedings of the ACM SIGPLAN 2007 Conference on Programming Language Design and Implementation (PLDI), June 2007.
Vectorization
  1. Sam Larsen and Saman Amarasinghe. "Exploiting Superword Level Parallelism with Multimedia Instruction Sets", In Proceedings of the ACM SIGPLAN 2000 Conference on Programming Language Design and Implementation (PLDI), June 2000.
Program Synthesis
  1. Matteo Frigo. "A Fast Fourier Transform Compiler", In Proceedings of the ACM SIGPLAN 1999 Conference on Programming Language Design and Implementation (PLDI), June 1999.
  2. Kamen Yotov, Xiaoming Li, Gang Ren, Michael Cibulskis, Gerald DeJong, María Jesús Garzarán, David Padua, Keshav Pingali, Paul Stodghill, and Peng Wu. "A Comparison of Empirical and Model-Driven Optimization", In Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation (PLDI), June 2003.
Dynamic Analysis
  1. Chi-Keung Luk, Robert Cohn, Robert Muth, Harish Patil, Artur Klauser, Geoff Lowney, Steven Wallace, Vijay Janapa Reddi, and Kim Hazelwood. "Pin: Building Customized Program Analysis Tools with Dynamic Instrumentation", In Proceedings of the ACM SIGPLAN 2005 Conference on Programming Language Design and Implementation (PLDI), June 2005.
  2. Andreas Gal, Brendan Eich, Mike Shaver, David Anderson, David Mandelin, Mohammad R. Haghighat, Blake Kaplan, Graydon Hoare, Boris Zbarsky, Jason Orendorff, Jesse Ruderman, Edwin Smith, Rick Reitmaier, Michael Bebenita, Mason Chang, Michael Franz. "Trace-based Just-in-Time Type Specialization for Dynamic Languages", In Proceedings of the ACM SIGPLAN 2009 Conference on Programming Language Design and Implementation (PLDI), June 2009.
Native Code Generation
  1. Preston Briggs, Keith Cooper, and Linda Torczon. “Improvements to Graph Coloring Register Allocation,” ACM Transactions on Programming Languages and Systems, 16(3), pp. 428-455, May 1994.
  2. Sorav Bansal, Alex Aiken. "Automatic Generation of Peephole Superoptimizers", In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), October 2006.
Correctness
  1. Thomas Ball, Rupak Majumdar, Todd Millstein, and Sriram K. Rajamani. "Automatic Predicate Abstraction of C Programs", In Proceedings of the ACM SIGPLAN 2001 Conference on Programming Language Design and Implementation (PLDI), June 2001.
  2. Yichen Xie and Alex Aiken. "Saturn: A Scalable Framework for Error Detection Using Boolean Satisfiability", ACM Transactions on Programming Languages and Systems, 29 (3), May 2007.
  3. Rastislav Bodik, Rajiv Gupta and Vivek Sarkar. "ABCD: Eliminating Array Bounds Checks on Demand", In Proceedings of the ACM SIGPLAN 2000 Conference on Programming Language Design and Implementation (PLDI), June 2000.

Parallel Programming Track

PhD qualifying exam

Core
  1. A.Y. Grama, A. Gupta, and V. Kumar. “Isoefficiency: Measuring the Scalability of Parallel Algorithms and Architectures,” Parallel & Distributed Technology: Systems & Applications, IEEE, see also IEEE Concurrency, Volume 1, Issue 3, Aug. 1993, pp. 12-21.
  2. M.S. Warren and J.K. Salmon. “A Parallel Hashed Oct-Tree N-body Algorithm,” Proceedings of the 1993 ACM/IEEE conference on Supercomputing, pp. 12-21.
  3. W. Daniel Hillis and Guy L. Steele. "Data Parallel Algorithms," Communications of the ACM, December 1986, pp. 1170-1183.

Specialization

Algorithms: Key Ideas
  1. David E. Culler, Richard M. Karp, David Patterson, Abhijit Sahay, Eunice E. Santos, Klaus Erik Schauser, Ramesh Subramonian, Thorsten von Eicken. “LogP: A Practical Model of Parallel Computation,” Communications of the ACM, Volume 39, Issue 11, November 1996, pp. 78 – 85.
  2. J.L. Gustafson, G.R. Montry, R.E. Benner. “Development of Parallel Methods for a 1024-processor Hypercube,” SIAM J. Sci. Stat. Comput, 9(4), pp. 609-638.
  3. R.M. Karp and Y. Zhang. “Randomized Parallel Algorithms for Backtrack Search and Branch-and-bound Computation,” Journal of the ACM, 40(3), July 1993, pp. 765-789.
Particular Algorithms
  1. A. Gupta, G. Karypis, and V. Kumar. “Highly Scalable Parallel Algorithms for Sparse Matrix Factorization,” Parallel and Distributed Systems, IEEE Transactions on Volume 8, Issue 5, May 1997, pp. 502-520.
  2. K. Thearling and S. Smith. “An Improved Supercomputer Sorting Benchmark,” Conference on High Performance Networking and Computing, Proceedings of the 1992 ACM/IEEE conference on Supercomputing, pp. 14-19, 1992.
  3. L.V. Kale and Sanjeev Krishnan. “A Comparison Based Parallel Sorting Algorithm,” International Conference on Parallel Processing, August 1993, pp. 196-200.
  4. R. C. Agarwal, S. M. Balle, F. G. Gustavson, M. Joshi, and P. Palkar. “A Three-dimensional Approach to Parallel Matrix Multiplication,” IBM Journal of Research and Development, Volume 39, Number 5, 1995.
State-space Search and Discrete Event Simulation
  1. V. Nageshwara Rao and Vipin Kumar. “Superlinear Speedup in Parallel State-Space Search,” Lecture Notes In Computer Science, Vol. 338, Proceedings of the Eighth Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 161-174, 1988.
  2. L.V. Kale, B. Ramkumar, V. Saletore, and A.B. Sinha. “Prioritization in Parallel Symbolic Computing,” Lecture Notes in Computer Science, Vol. 748, pp. 12-41, 1993.
  3. Richard M. Fujimoto. “Parallel Discrete Event Simulation,” Communications of the ACM, Vol. 33, Issue 10, pp. 30-53, 1990.
Benchmarks and Performance
  1. Fabrizio Petrini, Darren J. Kerbyson, Scott Pakin. “The Case of the Missing Supercomputer Performance: Achieving Optimal Performance on the 8,192 Processors of ASCI Q,” Conference on High Performance Networking and Computing, Proceedings of the 2003 ACM/IEEE Conference on Supercomputing, 2003.
Runtime Issues
  1. V. Bala, J. Bruck, R. Cypher, P. Elustondo, A. Ho, Ching-Tien Ho, S. Kipnis, and M. Snir. “CCL: A Portable and Tunable Collective Communication Library for Scalable Parallel Computers,” IEEE Transactions on Parallel and Distributed Systems, Volume 6, Issue 2, Feb. 1995, pp. 154 – 164.
Parallel Architectures
  1. A. Gara et al. "Overview of the Blue Gene/L System Architecture," IBM Journal of Research and Development, Vol. 49, Number 2/3, March/May 2005, pp. 195-212.
  2. William J. Dally and Hiromichi Aoki. “Deadlock-free Adaptive Routing in Multicomputer Networks Using Virtual Channels,” IEEE Transactions on Parallel and Distributed Systems, 4(4), pp. 466-475, April 1993.
  3. Charles E. Leiserson. "Fat-trees: Universal Networks for Hardware-efficient Supercomputing," IEEE Transactions on Computers, Vol. 34, No. 10, pp. 892-901, Oct. 1985.
New Papers
  1. Milind Kulkarni, Martin Burtscher, Rajasekhar Inkulu, Keshav Pingali and Calin Cascaval. “How Much Parallelism is There in Irregular Applications?” Principles and Practices of Parallel Programming (PPoPP), February, 2009.
  2. N. S. Arora, R. D. Blumofe and C. G. Plaxton. “Thread Scheduling for Multiprogrammed Processors,” Theory of Computing Systems 34, 115-144 (2001).
