The Research Institute for Information Technology at Kyushu University in Fukuoka, Japan, has announced it will be receiving a new supercomputer system in October, which will be used by universities around the country to advance research in areas such as artificial intelligence (AI).
The university will use the new system from Fujitsu as a computational resource for the JHPCN Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures, a network of joint-use sites made up of supercomputer facilities at Hokkaido University, Tohoku University, the University of Tokyo, Tokyo Institute of Technology, Nagoya University, Kyoto University, Osaka University, and Kyushu University. The Information Technology Center at the University of Tokyo serves as the network's core location.
It will also be used by the high performance computing infrastructure (HPCI), a computing environment that connects the K computer and major supercomputers located in universities and laboratories across Japan.
Making the new supercomputer available to users both inside and outside of the university is expected to enhance the platform for academic research in Japan and contribute to the development of new academic research in areas such as AI.
The server system of the new supercomputer system from Fujitsu will consist primarily of a back-end subsystem, a front-end subsystem, and a storage subsystem.
In the back-end, the computational nodes will be made up of 2,128 Primergy CX400 systems, equipped with Intel Skylake Xeon processors and boasting 433 terabytes of total memory capacity. 128 of the x86 servers will each be equipped with four Nvidia Tesla P100 GPU computing cards.
The front-end subsystem will comprise 160 basic front-end nodes featuring Intel Skylake Xeon processors and Nvidia Quadro P4000 graphics cards, as well as four high-capacity front-end nodes featuring 12 terabytes of memory each, in addition to other servers.
With a theoretical peak performance of about 10 petaflops, the system will also include a 24-petabyte storage system, a 100Gbps InfiniBand EDR interconnect, and Fujitsu's scalable cluster file system, FEFS.
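The quoted 10-petaflop figure can be roughly sanity-checked from the node counts above. The sketch below is a back-of-the-envelope estimate only: the per-node core count, clock speed, AVX-512 throughput, and per-card P100 rating are assumptions not stated in the article, not confirmed specs of this system.

```python
# Back-of-the-envelope estimate of theoretical peak performance.
# Node counts come from the article; everything else is assumed:
# 36 cores per dual-socket Skylake node at 3.0 GHz, 32 double-precision
# FLOPs/cycle/core (two AVX-512 FMA units x 8 doubles x 2 ops), and
# about 4.7 teraflops double precision per Tesla P100 card.

CPU_NODES = 2128             # from the article
CORES_PER_NODE = 36          # assumed
CLOCK_GHZ = 3.0              # assumed
FLOPS_PER_CYCLE = 32         # assumed AVX-512 FMA throughput

GPU_NODES = 128              # from the article
GPUS_PER_NODE = 4            # from the article
P100_DP_TFLOPS = 4.7         # assumed per-card rating

# cores x GHz x FLOPs/cycle gives gigaflops; divide by 1e6 for petaflops
cpu_pflops = CPU_NODES * CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE / 1e6
gpu_pflops = GPU_NODES * GPUS_PER_NODE * P100_DP_TFLOPS / 1e3

print(f"CPU peak: {cpu_pflops:.1f} PF, GPU peak: {gpu_pflops:.1f} PF")
print(f"Total:    {cpu_pflops + gpu_pflops:.1f} PF")
```

Under these assumptions the CPU and GPU partitions together land in the neighborhood of the roughly 10 petaflops quoted above, which suggests the figure refers to the combined back-end.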
It will also be Japan's first supercomputer system featuring a large-scale private cloud environment constructed on the front-end subsystem, linked with the computational servers of the back-end subsystem through a high-speed file system.
The Riken Center for Advanced Intelligence Project in Japan received its own deep learning supercomputer in April, which will be used to accelerate research and development into the "real-world" application of AI technology.
The system comprises two server architectures, with 24 Nvidia DGX-1 servers -- each including eight of the latest Nvidia Tesla P100 accelerators and integrated deep learning software -- and 32 Fujitsu Server Primergy RX2530 M2 servers, along with a high-performance storage system.
Its file system is also FEFS, running on six Fujitsu Server Primergy RX2540 M2 PC servers, eight Fujitsu Storage Eternus DX200 S3 storage systems, and one Fujitsu Storage Eternus DX100 S3 storage system to provide the I/O processing demanded by deep learning analysis.
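For the Riken system the relevant figure is accelerator throughput at reduced precision, since deep learning training typically runs in half precision rather than the double precision used for traditional HPC. A quick sketch from the server counts above, where the per-card half-precision rating is an assumption on my part (the NVLink-variant Tesla P100 is commonly rated around 21.2 teraflops FP16), not a figure from the article:

```python
# Aggregate GPU throughput of the Riken deep-learning system.
# Server and GPU counts are from the article; the per-card FP16
# rating is an assumed spec for the NVLink-variant Tesla P100.
DGX1_SERVERS = 24            # from the article
GPUS_PER_DGX1 = 8            # from the article
P100_FP16_TFLOPS = 21.2      # assumed per-card half-precision rating

total_gpus = DGX1_SERVERS * GPUS_PER_DGX1
fp16_pflops = total_gpus * P100_FP16_TFLOPS / 1e3

print(f"{total_gpus} GPUs, roughly {fp16_pflops:.1f} petaflops half precision")
```

Under that assumption the 24 DGX-1 servers alone would supply on the order of 4 petaflops of half-precision throughput for training workloads.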
The Czech Hydrometeorological Institute also announced on Thursday that it would be receiving a supercomputer, with NEC to provide the government agency with scale-out LX series compute servers for weather forecasting.
It is expected the NEC cluster will enable the Czech Hydrometeorological Institute to increase the accuracy of numerical weather forecasting and related applications, such as warning systems.
The NEC system will deliver the computational power of more than 300 nodes, connected through a high-speed Mellanox EDR InfiniBand network and containing dual-socket compute nodes based on the Intel Xeon E5-2600 v4 product family, with a total of over 3,500 computational cores.
The new system is more than 80 times faster than the currently used system, and will be operational come early 2018, the company said.
This HPC solution also consists of a high-performance storage solution based on the NEC LXFS-z parallel file-system appliance, with over 1 petabyte of storage capacity and bandwidth of more than 30Gbps, which are required to meet the production needs of the weather institute.