High Performance Computing
HPC (High Performance Computing) is a technology that uses parallel processing to complete jobs requiring massive computational performance efficiently, accurately, and quickly. The massive workload is completed in parallel across all processor cores, each executing the instruction stream of the algorithm in question.
According to the HPC Telkom computational science journal, high performance computing is a high-level computational technique that can carry out very heavy jobs using machines with very high specifications.
Computing Burden
At present, heavy computation classed as grand challenge problems includes:
Macro-scale environmental simulation
Biomedical and biomechanical imaging
Fluid dynamics simulation
Simulation of structural deformation of rigid bodies
Molecular design and simulation
Design, simulation, and optimization of instrumentation and control systems
Artificial intelligence
Table 1.1 Time scales of atomic motions
MOTION                             TIME SCALE (SEC)
Bond stretching                    10^-14 to 10^-13
Elastic vibrations                 10^-12 to 10^-11
Rotations of surface side chains   10^-11 to 10^-10
Hinge bending                      10^-11 to 10^-7
Rotation of buried side chains     10^-4 to 1
Allosteric transitions             10^-5 to 1
Local denaturations                10^-5 to 10
It appears that the shortest motion is bond stretching, with a time scale of 10^-14 seconds. To simulate this motion alone for 1 microsecond (10^-6 seconds), 10^8 iterations are needed. Meanwhile, a fairly complete simulation could involve around 100,000 atoms. For each atom, its motion is calculated by considering its interactions with the other 99,999 atoms; in practice there is usually an effective interaction radius, so only about 10,000 neighboring atoms are taken into account. With the detailed quantum mechanical formulas used, each interaction between atoms requires about 10 floating point operations. Since today's computers can reach about 2 gigaflops (5 x 10^-10 seconds per operation), the motion of 1 atom can be calculated in about 10,000 x 10 x 5 x 10^-10 = 5 x 10^-5 seconds. Thus, the total simulation would take 10^8 x 100,000 x 5 x 10^-5 = 5 x 10^8 seconds, almost 16 years!
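This estimate can be checked with a short calculation; every figure below is taken directly from the text:

```python
# Reproduce the molecular-simulation time estimate from the text.
iterations = 10**8        # 1 microsecond at a 10^-14 s time step
atoms = 100_000           # atoms in a fairly complete simulation
neighbors = 10_000        # atoms inside the effective interaction radius
flops_per_pair = 10       # floating point operations per interaction
sec_per_flop = 5e-10      # a 2-gigaflop computer

per_atom = neighbors * flops_per_pair * sec_per_flop   # 5 x 10^-5 s
total = iterations * atoms * per_atom                  # 5 x 10^8 s
years = total / (3600 * 24 * 365)
print(per_atom, total, years)   # roughly 15.85 years
```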
High Performance Computing Techniques
To shorten this calculation time, there are two general approaches:
Make a faster processor.
Perform the calculation in parallel on many processors.
For the first approach, the processor's electronic paths must be shrunk so that signals travel shorter and smaller distances. Unfortunately, semiconductor manufacturing still relies on lithography techniques, which are nearing their limits. For reference, the latest processor chips (Intel i7) have reached the 45 nanometer scale. Shrinking further increases the chance of errors in the manufacturing process, reducing reliability.
Thus, the remaining wide-open hope for accelerating computation is parallelism. In this paradigm, the algorithm must be broken into several streams (threads) that can execute simultaneously. Each thread is handled by one processor, and the final results are collected again. Theoretically, if one processor can complete a workload in time T, then N processors will be able to complete it in time T / N.
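As a sketch of this ideal T / N scaling, applied to the 5 x 10^8 second molecular simulation estimate above (real systems fall short of this ideal because of communication overhead and the serial portions of any program):

```python
def ideal_parallel_time(t_serial: float, n_procs: int) -> float:
    """Ideal linear speedup: N processors finish in T / N seconds."""
    return t_serial / n_procs

t_serial = 5e8   # the serial estimate from the text, in seconds
for n in (1, 64, 1024):
    days = ideal_parallel_time(t_serial, n) / 86400
    print(f"{n:5d} processors: {days:10.1f} days")
```

With 1024 processors the ideal time drops to under a week, which matches the cluster figure quoted later in this article.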
Realizing parallel computing requires hardware that provides many processors, and an operating system that divides the computational load across all of them. Such systems were not easy to build, so at first parallel computing could only be enjoyed on large, expensive systems such as supercomputers. Fortunately, over time, parallel computing has become available on ordinary computers.
Multiprocessor and Multicore
In the PC market, the initial implementation of parallelism was to install multiple CPUs in one computer (multiprocessor), for example a computer with dual Pentium Pros. Now this is also achieved by increasing the number of cores in one CPU (multicore), starting from dual core and now quad core (for example Intel Xeon, Intel i7, AMD Phenom). For multiprocessor or multicore computers, modern operating systems provide multithreading, or symmetric multiprocessing (SMP), facilities. Linux has supported this from the start, while Windows has since Windows NT. For parallel programming on such systems, OpenMP can be used.
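OpenMP itself targets C, C++, and Fortran; the Python sketch below mirrors the same fork-join pattern, splitting a loop into chunks that worker threads process and then combining the partial results. (Note that Python's GIL serializes CPU-bound threads, so only the structure, not the speedup, carries over to this sketch.)

```python
# Fork-join sketch: split a sum over 1,000,000 terms into 4 chunks,
# hand each chunk to a worker, then reduce the partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

n, workers = 1_000_000, 4
step = n // workers
chunks = [(i * step, (i + 1) * step) for i in range(workers)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    total = sum(pool.map(partial_sum, chunks))   # join + reduction
print(total)
```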
Computer cluster
A more colossal way to increase the number of processors is to build a computer cluster. In this system, several computers are connected through a network so that they can work together on a computational load. One model is the Beowulf system, which can be built from ordinary computers coordinated by the Linux operating system. A Beowulf cluster can have up to 1024 nodes, so theoretically the earlier molecular simulation example could be computed in about 1 week. The drawbacks are that such a cluster is large, expensive, and difficult to develop and maintain. For cluster systems, the operating system usually has to be equipped with middleware that provides communication between processes over the network. Well-known middleware includes the Parallel Virtual Machine (PVM) and the Message Passing Interface (MPI).
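MPI provides send/receive primitives between processes on different machines. The in-process sketch below uses threads and queues to show the same master/worker message-passing pattern: the master sends one chunk of work to each worker and collects the partial results as reply messages (the chunk sizes here are arbitrary illustration values, not from the text):

```python
# Master/worker message passing, sketched with in-process queues.
import queue
import threading

def worker(inbox: queue.Queue, outbox: queue.Queue):
    # "Receive" a chunk of work, compute, "send" the partial result back.
    lo, hi = inbox.get()
    outbox.put(sum(range(lo, hi)))

tasks, results = queue.Queue(), queue.Queue()
n_workers = 4
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(n_workers)]
for t in threads:
    t.start()
for rank in range(n_workers):                  # send one message per worker
    tasks.put((rank * 250, (rank + 1) * 250))
total = sum(results.get() for _ in range(n_workers))   # receive the replies
for t in threads:
    t.join()
print(total)   # sum(range(1000)) = 499500
```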
GPGPU
Meanwhile, along another path, Graphics Processing Unit (GPU) technology was developed, originally to help the CPU (as a co-processor) speed up three-dimensional (3D) rendering. The GPU has a special architecture for processing graphics pixels in parallel. Driven by the needs of high performance computing, the GPU has evolved into the General Purpose Graphics Processing Unit (GPGPU), which can also perform general mathematical calculations. Currently, the most advanced GPUs (for example the NVIDIA GTX 285) have 240 cores, and up to 3 GPUs can be combined in one computer (3-way SLI). Clearly, the GPU is a cheaper and more compact way to multiply cores than a computer cluster. The most current software support for GPUs is CUDA, but OpenCL looks set to appear in the near future.
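The GPGPU programming model is data parallelism: the same kernel function runs at every index of an array, conceptually one core per element. The sequential Python sketch below shows only that model, using the classic SAXPY operation (out[i] = a * x[i] + y[i]); on a GPU, every loop iteration is independent and would be launched at once across the cores:

```python
# Data-parallel "kernel" sketch: each index is independent, so on a
# GPU all n invocations could run simultaneously on separate cores.
def saxpy_kernel(i, a, x, y, out):
    # SAXPY: out[i] = a * x[i] + y[i], one element per invocation.
    out[i] = a * x[i] + y[i]

n = 8
x = [float(i) for i in range(n)]
y = [1.0] * n
out = [0.0] * n
for i in range(n):          # a GPU launches all n indices at once
    saxpy_kernel(i, 2.0, x, y, out)
print(out)   # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```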