Hot Seat: CEO Megh Computing on Fulfilling the Promise of Intelligent Video Analytics

PK Gupta of Megh Computing joins the conversation to talk about deploying and customizing video analytics and more.

Megh Computing provides a fully customizable, cross-platform video analytics solution that delivers actionable, real-time insights. The company was founded in 2017 and is headquartered in Portland, Oregon, with development offices in Bangalore, India.

Co-Founder and CEO PK Gupta joined the conversation to talk about analytics deployment, customization, and more.

With video analytics and smart sensors constantly pushing technology to the edge, what are the trade-offs versus cloud deployment?

Gupta: The demand for edge analytics is growing rapidly with the explosion of data flow from sensors, cameras and other sources. Among these, video remains the dominant data source with more than a billion cameras spread globally. Businesses want to extract intelligence from these data streams using analytics to create business value.

Most of this processing is increasingly taking place at the edge, close to the data source. Moving data to the cloud for processing incurs transmission costs, potentially increases security risks and introduces latency in response times. Hence intelligent video analytics [IVA] is moving to the edge.

[Photo: Prabhat K. Gupta]

Many end users are wary of sending video data offsite; what options are available for on-premises processing while still taking advantage of the benefits of the cloud?

Gupta: Many IVA solutions force users to choose between deploying on-premises at the edge or hosting in the cloud. Hybrid models allow on-premises deployments to take advantage of the scalability and flexibility of cloud computing. In this model, the video processing pipeline is split between on-premises and cloud processing.

In a simple implementation, only metadata is forwarded to the cloud for storage and search. In another implementation, the data is ingested and transformed at the edge, and only frames with activity are forwarded to the cloud for analytics processing. This model offers a good compromise, balancing latency and cost between edge processing and cloud computing.
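As a rough illustration of that split, here is a minimal Python sketch. It is not Megh's actual pipeline: the ingest endpoint, the has_activity motion test and the forwarding helper are all invented for this example. The idea is that full-rate processing stays at the edge, and only metadata, plus frames that contain activity, goes to the cloud:

```python
import cv2
import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/ingest"  # hypothetical cloud ingest URL

def has_activity(prev_gray, gray, pixel_threshold=25, min_changed_ratio=0.01):
    """Cheap edge-side motion test: fraction of pixels that changed."""
    diff = cv2.absdiff(prev_gray, gray)
    changed = (diff > pixel_threshold).sum()
    return changed / diff.size > min_changed_ratio

def forward_to_cloud(metadata, frame=None):
    """Send metadata and, only when there is activity, the JPEG frame."""
    payload = {"meta": metadata}
    if frame is not None:
        ok, jpeg = cv2.imencode(".jpg", frame)
        if ok:
            payload["frame_jpeg_hex"] = jpeg.tobytes().hex()
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=2)
    except OSError:
        pass  # in a real deployment, queue and retry instead of dropping

cap = cv2.VideoCapture(0)  # camera ingested at the edge
prev_gray = None
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    meta = {"frame": frame_idx, "camera": "cam-01"}
    if prev_gray is not None and has_activity(prev_gray, gray):
        forward_to_cloud(meta, frame)  # activity: ship the frame for cloud analytics
    else:
        forward_to_cloud(meta)         # otherwise metadata only, for storage and search
    prev_gray = gray
    frame_idx += 1
```

The frame-differencing test stands in for whatever activity detection actually runs at the edge; the point is that the expensive analytics, and most of the bandwidth, stay local unless something is worth escalating.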

Image-based video analytics have historically required filtering services due to false positives; how does deep learning reduce these?

Gupta: Traditional IVA solutions have not met businesses' expectations due to limited functionality and poor accuracy. These solutions use image-based video analytics with computer vision processing to detect and classify objects. These techniques are prone to errors, necessitating the deployment of filtering services.

In contrast, techniques that use optimized deep learning models trained to detect people or objects, along with analytics libraries for business rules, can essentially eliminate false positives. Specialized deep learning models can be created for custom use cases such as PPE compliance, collision avoidance, etc.
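To make the two layers concrete, here is a hedged Python sketch, assuming PyTorch and torchvision are available; the persistence rule and its thresholds are illustrative, not Megh's. A pretrained detector supplies the person detections, and a simple business rule suppresses single-frame false positives:

```python
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# A pretrained COCO detector stands in for an optimized, task-trained model.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
LABELS = weights.meta["categories"]

def detect_people(frame_tensor, min_score=0.8):
    """Return bounding boxes of confident 'person' detections.

    frame_tensor: float CHW tensor with values in [0, 1].
    """
    with torch.no_grad():
        out = model([frame_tensor])[0]
    return [
        box.tolist()
        for box, label, score in zip(out["boxes"], out["labels"], out["scores"])
        if LABELS[int(label)] == "person" and float(score) >= min_score
    ]

class PersistenceRule:
    """Business rule: alert only when a person is seen N frames in a row,
    which suppresses the one-frame blips that cause false positives."""

    def __init__(self, required_frames=3):
        self.required = required_frames
        self.streak = 0

    def update(self, people_boxes) -> bool:
        self.streak = self.streak + 1 if people_boxes else 0
        return self.streak >= self.required  # True -> raise an alert
```

A PPE-compliance or collision-avoidance use case would swap in a model trained on those classes and a different rule, but the detector-plus-rules structure stays the same.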

We hear “custom use case” frequently with video AI; what does it mean?

Gupta: Most use cases need to be customized to meet the functional and performance requirements of an IVA offering. The first level of universally required customization includes the ability to define zones of interest in the camera's field of view, set thresholds for analytics, configure alarms and set the frequency and recipients of notifications. These configuration capabilities must be provided via a dashboard with graphical interfaces so that users can set up the analytics for proper operation.
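A rough sketch of what that first level amounts to in practice, with an invented schema rather than Megh's dashboard model, is a per-camera configuration object along these lines:

```python
from dataclasses import dataclass, field

# Invented schema for illustration only.
@dataclass
class ZoneConfig:
    name: str
    polygon: list  # zone vertices in pixel coordinates, e.g. [(x, y), ...]

@dataclass
class CameraAnalyticsConfig:
    camera_id: str
    zones: list = field(default_factory=list)  # areas of interest in the field of view
    score_threshold: float = 0.8               # minimum detector confidence
    alarm_enabled: bool = True
    notify_every_seconds: int = 60             # notification frequency
    recipients: list = field(default_factory=list)

config = CameraAnalyticsConfig(
    camera_id="dock-03",
    zones=[ZoneConfig("loading-bay", [(120, 400), (620, 400), (620, 700), (120, 700)])],
    score_threshold=0.85,
    recipients=["ops@example.com"],
)
```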

The second level of customization involves updating the video analytics pipeline with new deep learning models or new analytics libraries to improve performance. The third level involves training and deploying new deep learning models to implement new use cases, for example a model that detects personal protective equipment for worker safety or counts inventory items in a retail store.
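The second and third levels both reduce to swapping stages in the processing pipeline. A minimal sketch, with invented names and a stub in place of a real model, might look like this:

```python
from typing import Callable, List

# A pipeline stage maps a frame to a list of detection dicts.
Stage = Callable[[object], List[dict]]

class VideoPipeline:
    """Toy pipeline whose detector stage can be replaced at runtime,
    e.g. swapping a generic person model for a PPE-compliance model."""

    def __init__(self, detector: Stage, rules: Callable[[List[dict]], List[str]]):
        self.detector = detector
        self.rules = rules

    def swap_detector(self, new_detector: Stage):
        self.detector = new_detector  # second/third-level customization point

    def process(self, frame) -> List[str]:
        return self.rules(self.detector(frame))

def ppe_rules(detections: List[dict]) -> List[str]:
    # Alert on any person detected without a hard hat (labels are illustrative).
    return [
        "PPE violation"
        for d in detections
        if d.get("label") == "person" and not d.get("hard_hat", False)
    ]

def stub_person_detector(frame) -> List[dict]:
    # Placeholder for a real model; returns canned detections for illustration.
    return [{"label": "person", "hard_hat": False}]

pipeline = VideoPipeline(detector=stub_person_detector, rules=ppe_rules)
print(pipeline.process(frame=None))  # -> ['PPE violation']
```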

Can smart sensors like lidar, presence detection, radar, etc. be integrated into an analytics platform?

Gupta: IVA platforms usually process only video data from cameras and provide insights based on image analysis. Sensor data is typically analyzed by separate systems to produce insights from lidar, radar and other sensors. A human is then kept in the loop to combine results from the disparate platforms and reduce false positives for specific use cases such as employee validation.

An IVA platform that can ingest data from cameras and sensors through the same pipeline, and apply machine learning-based contextual analytics, can provide insights for these and other use cases. The contextual analytics component can be configured with simple rules and can then learn to refine those rules over time to provide highly accurate and meaningful insights.
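As a simple illustration of such a contextual rule, here is a hedged Python sketch, with invented event fields and thresholds, that confirms an intrusion only when camera and radar/lidar evidence agree within a short time window:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # "camera", "radar", "lidar", ...
    kind: str        # e.g. "person", "presence"
    timestamp: float
    confidence: float

def fuse(events, window_seconds=1.0, min_combined=1.2):
    """Contextual rule: confirm an alert only when camera and sensor
    evidence agree within a short time window. The thresholds start as
    simple rules and could later be tuned from labeled outcomes."""
    cams = [e for e in events if e.source == "camera" and e.kind == "person"]
    sensors = [e for e in events
               if e.source in ("radar", "lidar") and e.kind == "presence"]
    alerts = []
    for c in cams:
        for s in sensors:
            if abs(c.timestamp - s.timestamp) <= window_seconds \
               and c.confidence + s.confidence >= min_combined:
                alerts.append((c, s))
    return alerts

events = [
    Event("camera", "person", 12.0, 0.7),
    Event("radar", "presence", 12.4, 0.6),
]
print(fuse(events))  # both modalities agree -> one confirmed alert
```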
