
TECHNOLOGY

At Deephi, we are known for providing state-of-the-art deep learning technologies. Our neural network compression technology and neural network hardware architecture have deeply influenced the field of AI, shaping the future of deep learning.


DNNDK

With the Deephi Neural Network Development Kit (DNNDK), Deephi provides a lightweight set of programming interfaces with a low learning curve for developers familiar with standard programming languages such as C and C++.
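
To illustrate the flavor of the API, the minimal host-side sketch below assumes the C-style runtime calls shipped with DNNDK (dpuOpen, dpuLoadKernel, dpuCreateTask, dpuRunTask); the kernel name "example_net" is a placeholder, and header paths and exact signatures may differ between DNNDK releases.

    #include <dnndk/dnndk.h>   // DNNDK runtime header; path may vary by release

    int main() {
        dpuOpen();                                          // attach to the DPU device
        DPUKernel *kernel = dpuLoadKernel("example_net");   // load a compiled network kernel (placeholder name)
        DPUTask   *task   = dpuCreateTask(kernel, 0);       // 0 = normal (non-profiling) mode

        // ... copy input data into the task's input tensor, then execute ...
        dpuRunTask(task);

        // Release resources in reverse order of creation.
        dpuDestroyTask(task);
        dpuDestroyKernel(kernel);
        dpuClose();
        return 0;
    }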

Deep Compression

Deephi’s deep compression tool can compress a wide range of neural networks, including CNNs, RNNs, and LSTMs, by a factor of 30x to 90x without compromising accuracy, which in turn significantly reduces the computing power required.
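
Ratios of that order typically come from combining weight pruning with low-bit quantization. The sketch below is purely illustrative and is not Deephi's tool: it zeroes out small-magnitude weights and quantizes the survivors to 8 bits, with an arbitrary threshold and no retraining step.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Illustrative only: prune small weights, then quantize survivors to int8.
    // Real compression pipelines retrain between steps to recover accuracy.
    std::vector<int8_t> compress(const std::vector<float>& w, float prune_threshold) {
        float max_abs = 0.0f;
        for (float v : w) max_abs = std::max(max_abs, std::fabs(v));
        const float scale = max_abs > 0.0f ? 127.0f / max_abs : 1.0f;

        std::vector<int8_t> q(w.size());
        for (size_t i = 0; i < w.size(); ++i) {
            float v = std::fabs(w[i]) < prune_threshold ? 0.0f : w[i];   // pruning
            q[i] = static_cast<int8_t>(std::lround(v * scale));          // 8-bit quantization
        }
        return q;   // pruned + quantized weights: mostly zeros, 4x smaller than float32
    }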

DNNC

Deephi’s DNNDK includes DNNC, a proprietary compiler designed to load neural network models and compile them into instructions within minutes.



Aristotle Architecture

To compute convolutional neural networks (CNNs), Deephi designed the Aristotle Architecture from the ground up. While currently used for video and image recognition tasks, the architecture is flexible and scalable for both servers and portable devices.

Descartes Architecture

Deephi's Descartes Architecture is designed for compressed recurrent neural networks (RNNs), including LSTMs. By taking advantage of sparsity, the Descartes Architecture can achieve over 2.5 TOPS on a KU060 FPGA at 200 MHz, enabling real-time speech recognition, natural language processing, and many other recognition tasks.
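
The benefit of sparsity is easiest to see in the sparse matrix-vector product at the heart of each RNN/LSTM step: only non-zero weights are stored and multiplied, so work scales with the number of non-zeros rather than the full matrix size. The compressed sparse row (CSR) sketch below illustrates that principle; it is not a description of the Descartes hardware pipeline, and the structure and field names are assumptions.

    #include <cstddef>
    #include <vector>

    // CSR storage: only non-zero weights are kept, so compute and memory
    // traffic scale with the number of non-zeros rather than rows * cols.
    struct SparseMatrix {
        std::vector<float> values;     // non-zero weights
        std::vector<int>   col_index;  // column of each non-zero
        std::vector<int>   row_ptr;    // start of each row in values/col_index
    };

    // y = W * x over the compressed representation; zero weights cost nothing.
    std::vector<float> spmv(const SparseMatrix& W, const std::vector<float>& x) {
        const size_t rows = W.row_ptr.empty() ? 0 : W.row_ptr.size() - 1;
        std::vector<float> y(rows, 0.0f);
        for (size_t r = 0; r < rows; ++r)
            for (int k = W.row_ptr[r]; k < W.row_ptr[r + 1]; ++k)
                y[r] += W.values[k] * x[W.col_index[k]];
        return y;
    }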