.. _ch07-computer-architecture:

============================================================================================
Computer Architecture
============================================================================================

A lecture note on **Computer Architecture** by Prof. R. LeVeque (with permission):

* :download:`pdf note <./am583lecture11.pdf>`
* `Prof. LeVeque's class note `_

.. _ch07-scientific-computings-hpc-flash:

============================================================================================
Introduction to High-performance Computing (HPC) with the FLASH code
============================================================================================

A presentation on **High-performance computing: massively parallel simulations using the FLASH code** by Prof. Dongwook Lee:

* :download:`pdf presentation <./slides/DongwookLee_ams209.pdf>`
* `FLASH code website `_

.. _ch07-top500:

============================================================================================
Extreme Computing with the Top 500 Supercomputers
============================================================================================

* `Top 500 `_

.. _ch07-moores-law:

Moore's Law
-----------------------------------

For many years computers kept getting faster primarily because the clock cycle
was reduced, and so the CPU was made faster. In 1965, Gordon Moore (co-founder
of Intel) predicted that the transistor density (and hence the speed) of chips
would double every 18 months for the foreseeable future. This is known as
**Moore's law**. It proved remarkably accurate for more than 40 years; see the
graphs at `[wikipedia-moores-law] `_. Note that doubling every 18 months means
an increase by a factor of 4096 every 18 years.

Unfortunately, the days of simply waiting for a faster computer in order to do
larger calculations have come to an end. Two primary considerations are:

* The limit has nearly been reached of how densely transistors can be packed
  and how fast a single processor can be made.

* Even current processors can generally do computations much more quickly than
  sufficient quantities of data can be moved between memory and the CPU. If
  you are doing 1 billion meaningful multiplies per second, you need to move
  lots of data around.

There is a hard limit imposed by the speed of light. A 2 GHz processor has a
clock cycle of 0.5e-9 seconds. The speed of light is about 3e8 meters per
second, so in one clock cycle information cannot possibly travel more than
0.15 meters. (A light year is a long distance, but a light nanosecond is only
about 1 foot.) If you are trying to move billions of bits of information each
second, then you have a problem.

Another major problem is power consumption. Doubling the clock speed of a
processor takes roughly 8 times as much power and also produces much more
heat. By contrast, doubling the number of processors only takes twice as much
power. (The short sketch at the end of this section reproduces these
back-of-the-envelope numbers.)

There are ways to continue improving computing power in the future, but they
must include two things:

* Increasing the number of cores or CPUs that are being used simultaneously
  (i.e., parallel computing)

* Using memory hierarchies to improve the ability to have large amounts of
  data available to the CPU when it needs it.
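The arithmetic above is easy to verify directly. Below is a minimal Python
sketch (not part of the original notes) that reproduces the clock-cycle,
light-travel, Moore's-law, and power-scaling numbers; the 2 GHz clock rate and
the 18-month doubling period are taken from the text, and the cubic power law
is inferred from the "8 times as much power" statement.

.. code-block:: python

   # A minimal sketch of the back-of-the-envelope numbers above.
   # All values are rough, order-of-magnitude estimates.

   clock_rate = 2.0e9        # 2 GHz processor [Hz]
   speed_of_light = 3.0e8    # approximate speed of light [m/s]

   cycle_time = 1.0 / clock_rate                  # 0.5e-9 s per clock cycle
   distance_per_cycle = speed_of_light * cycle_time

   print(f"Clock cycle:            {cycle_time:.1e} s")
   print(f"Light travel per cycle: {distance_per_cycle:.2f} m")   # 0.15 m

   # Moore's law: one doubling every 18 months.
   months = 18 * 12                  # 18 years expressed in months
   doublings = months / 18.0         # 12 doublings
   print(f"Growth over 18 years:   {2.0 ** doublings:.0f}x")      # 4096x

   # Power scaling: power grows roughly with the cube of the clock
   # frequency, so doubling the clock speed costs about 8x the power,
   # whereas doubling the number of processors costs only about 2x.
   print(f"Power for 2x clock speed: ~{2 ** 3}x")
   print(f"Power for 2x processors:  ~2x")

The last two lines summarize why the field has moved toward parallelism:
adding processors is a far cheaper way to buy performance, in energy terms,
than raising the clock rate.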
.. _ch07-other-talks:

============================================================================================
Other Resources and Presentations on HPC
============================================================================================

A list of talks from the `ANL Training Program on Extreme Scale Computing `_:

* Marius Stan, `Argonne National Lab `_:

  * `Computational science and cinema `_

* Anshu Dubey, `Lawrence Berkeley National Lab `_, now at `Argonne National Lab `_:

  * `Software engineering `_
  * `Community code `_

* Sean Couch, `Physics and Astronomy, MSU `_:

  * `Impact of community codes on astrophysics `_

* Brian Van Straalen, `ANAG `_, `Lawrence Berkeley National Lab `_:

  * `Block structured AMR `_

* Peter Beckman, `Argonne National Lab `_:

  * `Exascale architecture trends `_

* Katherine Riley, `Argonne National Lab `_:

  * `Why are supercomputers hard to use? `_
  * `Community codes and good software techniques `_

* Joe Insley, `Argonne National Lab `_:

  * `Visualization introduction `_

* William Scullin, `Argonne National Lab `_:

  * `Python for HPC `_

* David Lecomber, `Allinea Software `_:

  * `Debugging and profiling `_

* Cyrus Harrison, `Lawrence Livermore National Lab `_:

  * `Visualization and analysis of massive data with VisIt `_

* David Keyes, `Applied Math and Computational Science Program, KAUST `_:

  * `Algorithm adaptations to extreme scale `_