Inspur Shares Innovative Deep Learning Technology at GTC16

Poster: SySAdmin
Posted on April 5, 2016 at 10:42:01 PM

SAN FRANCISCO, April 5, 2016 /PRNewswire/ -- Inspur released Caffe-MPI, an open-source, multi-node parallel version of the Caffe Deep Learning framework, at the 2016 GPU Technology Conference (GTC16), being held April 4-7 in Silicon Valley, California.

Inspur also announced its plan to launch a Deep Learning Speedup Program (DLSP), aimed at facilitating the accelerated development and efficient application of Deep Learning from the perspectives of hardware infrastructure, system optimization and parallel frameworks.

Caffe-MPI to Speed Up Deep Learning

The newly released version of Caffe-MPI features excellent cluster parallel scalability. Testing data shows that in a four-node environment with 16 GPU cards, the new version delivers 13 times the performance of the single-GPU-card version. The new version also supports the cuDNN library, which makes high-performance Deep Learning code development much easier for developers.
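
The announcement does not include code, but multi-node frameworks in the Caffe-MPI mold are commonly built around a synchronous data-parallel pattern: each worker computes gradients on its own shard of the mini-batch, and the gradients are averaged across workers with an MPI all-reduce before every worker applies the same weight update. The sketch below is a minimal illustration of that pattern only, not Caffe-MPI source code; the use of mpi4py and NumPy, the dummy gradient array, and the one-rank-per-GPU mapping are assumptions made for illustration.

```python
# Minimal sketch of synchronous data-parallel gradient averaging over MPI.
# NOT Caffe-MPI code: mpi4py, NumPy, the random "gradient" and the
# one-rank-per-GPU assumption are stand-ins for illustration only.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # one rank per GPU/node
size = comm.Get_size()   # e.g. 16 ranks for 16 GPU cards across 4 nodes

# Each rank computes gradients on its own shard of the mini-batch
# (random values here stand in for a real backward pass).
local_grad = np.random.rand(1_000_000)

# Sum the gradients across all ranks, then average, so every rank
# applies an identical weight update and the replicas stay in sync.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

if rank == 0:
    print(f"Averaged gradients across {size} workers")
```

Such a sketch would be launched with one process per GPU (for example, mpirun -np 16 python sketch.py). For context, the quoted result of 13 times single-card performance on 16 cards corresponds to a parallel scaling efficiency of roughly 13/16, or about 81%.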

DLSP Program to Facilitate Deep Learning Ecosystem Construction

During GTC16, Inspur announced its plan to launch the Deep Learning Speedup Program (DLSP), aimed at the accelerated development and efficient application of Deep Learning from three perspectives: innovation in hardware infrastructure, optimized system design, and an improved parallel framework.

For hardware infrastructure innovation, Inspur plans to focus on the research and development of offline training servers incorporating the latest NVIDIA M40 GPU and the next-generation Pascal GPU. Another focus is online identification applications based on the M4 GPU, aimed at developing a GPU computing platform with better performance per watt.

For optimized system design, Inspur will put together a team specializing in Deep Learning, based in the parallel computing laboratory jointly established with NVIDIA, which will develop customized, optimized solutions according to the application demands for Deep Learning in various industries. This enables balanced design across system computing, storage and network, while fully tapping the potential of the system and ensuring satisfactory manageability.

For the improved parallel framework, Inspur will continue to increase its investment in the open-source Caffe Deep Learning framework project to attract more developers and users to community building. Currently, the open-source Caffe-MPI spearheaded by Inspur has attracted the attention of numerous companies and research institutes in China, India and the U.S.

Innovative Deep Learning: Enabling AI to Serve Society

For Inspur, the three Deep Learning plans announced are largely the result of experience gained from serving world-class internet companies such as Baidu, Alibaba and Tencent, which has enabled Inspur to build strong R&D and innovation capabilities. This experience with internet data center products also gives Inspur greater confidence in creating a Deep Learning computing platform that meets the demands of the internet and other fields.

At present, Inspur's Deep Learning solutions have been adopted by numerous internet companies including Tencent, Baidu, Alibaba, Qihoo, Iflytek and Jingdong, supporting "super brains" of various types and providing intelligent services for society. As the three Deep Learning projects gradually roll out, Inspur's Deep Learning solutions are expected to be adopted by more companies in the future.

Inspur also presented the NX5460M4, a Deep Learning server for industry customers. The NX5460M4 is a high-performance blade server in the Inspur I9000 converged-architecture blade series, specially optimized for Deep Learning applications. It supports up to eight Deep Learning computing nodes and 16 GPU accelerator cards in a 12U space, and the platform also accommodates high-density servers, 4- and 8-socket key business servers, software-defined storage and multiple computing schemes, including heterogeneous computing, to provide commercial corporate customers with highly reliable, high-performance Deep Learning infrastructure.

SOURCE  Inspur Group Co., Ltd.

CONTACT: Yan Panpan, T: +86 (0)10-82581473, M: +86 18710190569, yanpanpan@Inspur.com
 