TVM Performance Evaluation and Analysis (6)

Figure 1. The development workflow: write code on a development PC, compile, deploy to the device, test, then modify the code again to see whether it runs faster.

Figure 2. The Android app takes the shared library as input and runs the compiled functions on the mobile phone.
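
TVM can produce such a shared library with its NDK helper. Below is a minimal, hypothetical sketch: the vector-add kernel, the target triple, and the output name "deploy.so" are illustrative assumptions, and the TVM_NDK_CC environment variable must point at an NDK toolchain compiler.

```python
import tvm
from tvm import te
from tvm.contrib import ndk

# Define and schedule a trivial kernel (placeholder workload).
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# Assumed target triple for a 64-bit ARM Android device.
target = "llvm -mtriple=arm64-linux-android"
fadd = tvm.build(s, [A, B], target, name="fadd")

# Export a shared library that the Android RPC app can load and run.
fadd.export_library("deploy.so", ndk.create_shared)
```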

Figure 3. Build TVM functions and NDArrays on a remote device. The ability to cross-compile to different platforms makes it easy to develop on one platform and test on another.
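
A sketch of that remote flow, assuming a TVM RPC server is already running on the device; the address, port, and library name are placeholders.

```python
import numpy as np
import tvm
from tvm import rpc

# Connect to the RPC server running on the device (placeholder address).
remote = rpc.connect("192.168.1.100", 9090)

# Upload the cross-compiled library and load it on the device side.
remote.upload("deploy.so")
fadd = remote.load_module("deploy.so")

# NDArrays are allocated in the remote device's memory; the call runs there.
dev = remote.cpu(0)
a = tvm.nd.array(np.random.uniform(size=1024).astype("float32"), dev)
b = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)
fadd(a, b)
```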

Figure 4. The instructions for building the app for your Android device. Once the APK is built, sign it using apps/android_rpc/dev_tools and install it on the phone.

Figure 5. NNVM compiler support in the TVM stack: model descriptions from deep learning frameworks can now be compiled directly to bare-metal code that runs on AMD GPUs.

Figure 6. The generic workflow with the ROCm backend.
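
A minimal sketch of that workflow, assuming a machine with a ROCm-capable AMD GPU; the kernel and thread binding are illustrative.

```python
import numpy as np
import tvm
from tvm import te

# Placeholder workload: element-wise vector addition.
n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Bind the loop to GPU blocks and threads.
s = te.create_schedule(C.op)
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, te.thread_axis("blockIdx.x"))
s[C].bind(tx, te.thread_axis("threadIdx.x"))

# "rocm" selects the AMD GPU code-generation backend.
fadd = tvm.build(s, [A, B, C], target="rocm", name="fadd")

# Run on the first ROCm device.
dev = tvm.rocm(0)
a = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
b = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
fadd(a, b, c)
```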

Figure 7. Using the ONNX library to load an ONNX model into a protocol buffer object.
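
A short sketch of that step; "model.onnx" is a placeholder path.

```python
import onnx

# Parse the serialized file into an in-memory ModelProto protocol buffer.
onnx_model = onnx.load("model.onnx")

# Optional: validate the graph structure and inspect the declared inputs.
onnx.checker.check_model(onnx_model)
print(onnx_model.graph.input)
```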

Figure 8. An end-to-end compilation pipeline from front-end deep learning frameworks to bare-metal hardware.
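
A sketch of that pipeline using the NNVM compiler API this series describes (NNVM has since been superseded by Relay); the input name and shape are assumptions.

```python
import nnvm.compiler
import nnvm.frontend

# Convert the framework graph (e.g., the ONNX model loaded above) to NNVM IR.
sym, params = nnvm.frontend.from_onnx(onnx_model)

# Compile the graph and its operators down to machine code for the chosen target.
shape_dict = {"data": (1, 3, 224, 224)}  # assumed input name and shape
graph, lib, params = nnvm.compiler.build(
    sym, target="llvm", shape=shape_dict, params=params)
```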

Figure 9. The typical workflow of the NNVM compiler.

Figure 10. Separation of optimization and deployment.
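
The point of the separation is that the deployed side only needs the lightweight runtime, not the compiler. A sketch using the graph runtime with the artifacts produced above; the input name and shape remain assumptions.

```python
import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Only the runtime module is required here; optimization happened at compile time.
dev = tvm.cpu(0)
module = graph_runtime.create(graph, lib, dev)
module.set_input(**params)
module.set_input("data", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
module.run()
out = module.get_output(0)
```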

Figure 11. The time cost of inference on an NVIDIA K80 GPU.

Figure 12. The time cost of inference on a Raspberry Pi.
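
Numbers like those in Figures 11 and 12 are typically collected with TVM's time evaluator; a sketch continuing from the module above, with arbitrary repeat counts.

```python
import numpy as np

# Time the "run" function of the graph runtime module on the target device.
ftimer = module.module.time_evaluator("run", dev, number=10, repeat=3)
latency_ms = np.array(ftimer().results) * 1000.0  # seconds -> milliseconds
print("mean %.2f ms, std %.2f ms" % (latency_ms.mean(), latency_ms.std()))
```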

Source: https://www.cnblogs.com/wujianming-110117/p/14826997.html
