"Revolutionizing AI Deployment: Unveiling Qualcomm® AI Hub"

 


What you can do on Qualcomm AI Hub:

  • You can choose between the IoT and Mobile platforms.
  • You can choose an ML model from the list of models available there, and see results already executed on an edge device.
  • You can run ML models on the available devices and get benchmarks for performance analysis of the model.
  • You can get runtime layer details of any ML model.
  • You can study the architecture of an ML model.
Sign up for a Qualcomm ID:

First of all, sign up for a Qualcomm ID to access AI Hub: signup Qualcomm id




IoT or Mobile Platform

You can choose between IoT and Mobile according to your use case:


Mobile Platform Models
We will explore the models available for Mobile:

Select a model of your choice:







Here you can see the list of models which we can execute on device using AI Hub; there are many more you can explore here: Models for Mobile AI Hub

Here you can see the supported chipsets and devices: Click me



We will take Inception_V3 as an example to execute on-device through AI Hub:



See the results without actually executing the model:

Here you can see the benchmarking and other technical details of the selected model without actually executing it:

Executing the model yourself to get results:

You can also execute the model yourself; you just need to sign up and create a Qualcomm ID:

Now we will see the steps to set up the environment to execute the model.

How does it work?

Simply upload your model, and Qualcomm® AI Hub handles the rest: auto-optimizing for target devices, validating performance and numerics, and repeating the process until your model is ready to deploy.
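As a rough illustration of that flow, here is a minimal sketch using the qai-hub Python client (installed in the steps below). The device name, model file name, and input name/shape are placeholders for illustration, not fixed values:

import qai_hub as hub

device = hub.Device("Samsung Galaxy S23")  # assumed device name; any device from qai-hub list-devices works

# Submit the model for compilation/optimization targeting that device
compile_job = hub.submit_compile_job(
    model="inception_v3.onnx",                 # hypothetical local model file
    device=device,
    input_specs=dict(image=(1, 3, 224, 224)),  # assumed input name and shape
)

# Profile the compiled model on a real hosted device to get performance numbers
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
print(profile_job.job_id)  # the job also shows up at https://app.aihub.qualcomm.com/jobs/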

Installation

A Python version > 3.8 and < 3.10 is required.

You may face an SSL version issue due to a conflict between your Python version and the server's, so for different models try to use the Python version that suits that model.

Run the following commands in a terminal:

  • pip3 install qai-hub
Sign in to Qualcomm AI Hub with your Qualcomm® ID. After signing in, navigate to [your Qualcomm ID] -> Settings -> API Token. This should provide an API token that you can use to configure your client.
  • qai-hub configure --api_token API_TOKEN
Once configured, you can check that your API token is installed correctly by fetching the list of available devices with the following command:
  • qai-hub list-devices
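The same check can also be done from Python once the token is configured; a minimal sketch that simply prints each hosted device's name and OS:

import qai_hub as hub

# If the API token is set up correctly, this lists the cloud-hosted devices
# you can run models on (same information as `qai-hub list-devices`).
for device in hub.get_devices():
    print(device.name, device.os)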

Now we are done with setting up the environment to execute the model.

We can now run the example model (Inception_V3) through the command line.
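The demo and export commands below import the qai_hub_models package; if it is not already present, it can typically be installed with pip (the package name below is assumed from PyPI):

  • pip3 install qai-hub-models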


Once installed, run the following simple CLI demo:
  • python -m qai_hub_models.models.inception_v3.demo
Export for on-device deployment:
  • python -m qai_hub_models.models.inception_v3.export

Runtime Layer Details

After the execution is done, you can check the generated results, profiling, and other technical details of the model on your Qualcomm profile: https://app.aihub.qualcomm.com/jobs/

Here you can see the results of all your executions on your Qualcomm profile.
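The same profiling data can also be pulled down programmatically; a minimal sketch, assuming the ID below is replaced with a real profile-job ID copied from the jobs page:

import qai_hub as hub

# Placeholder job ID; copy a real one from https://app.aihub.qualcomm.com/jobs/
job = hub.get_job("jabc1234")

# For a profile job, this downloads the runtime details (per-layer timings, etc.)
profile = job.download_profile()
print(profile)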


Runtime Configuration and Runtime Logs of Model

You can get various other details, like the runtime configuration and runtime logs, and gain more insights into the model.


Study Architecture of Model

You can also visualize the architecture of a model's layers using the visualization tool there, and you can get each layer's inference time in your Qualcomm profile so you can analyze the results in a better way (here we have a small part of the architecture of Inception_V3's layers):

So you can choose any model from the list of models on AI Hub, execute it on device, and get the inference metrics.
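You can also push actual inputs through a compiled model on a hosted device, not just benchmark it; a minimal sketch, assuming the job ID below refers to an earlier compile job and using a random input tensor:

import numpy as np
import qai_hub as hub

device = hub.Device("Samsung Galaxy S23")  # assumed device name

# Assumed to be the ID of a compile job; get_target_model() returns the compiled model
compiled_model = hub.get_job("jabc1234").get_target_model()

# Run a random input through the model on the hosted device and fetch the outputs
inference_job = hub.submit_inference_job(
    model=compiled_model,
    device=device,
    inputs=dict(image=[np.random.rand(1, 3, 224, 224).astype(np.float32)]),
)
outputs = inference_job.download_output_data()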


