Part 1 of 2: Computer Vision Edges Out Other Technologies
Data is the oil of the 21st century. This framing of data as a commodity was popularized by an article in The Economist titled “The world’s most valuable resource is no longer oil, but data.” Although the metaphor has sparked some debate, it is largely undisputed that data is valuable. The importance of data is perhaps best seen in company valuations around the world: in 2019, seven of the ten largest companies in the world by market value were tech companies with data at the core of their business.
The application of data-derived insights varies greatly between businesses. While some apply data to develop their businesses (e.g., online shops aiming to sell more products), others use it to optimize their products (e.g., car producers aiming to increase their cars’ safety). However, all of these businesses share the same goal: to strengthen their companies using the insights they gain from data. Data is therefore not only important but essential; it is the lifeblood of any enterprise wishing to remain relevant in the 21st century.
With the emergence of computer vision solutions, companies can now tap into the same potential in the physical world as in the virtual world. In this two-part blog post, I examine the significant role that computer vision plays in unlocking the value of data in the physical world.
There Is Data in the Physical World
Much of the data generated to this day is still limited to the virtual world. Six of the seven most valuable tech companies earn the bulk of their revenue through their online channels (Apple being the exception).
Before the advent of computer vision, companies struggled to gather and act on real-world data. One of the main reasons for this is that the physical world involves much more complexity and ambiguity than the online world. Whereas the virtual environment follows clearly defined rules, the physical world is ever-changing and often breaks rules. Therefore, to gain insights into the physical world, companies must use especially sophisticated methods to understand their surroundings.
From Device Tracking to Generic Sensors to Computer Vision
Many technologies allow for an interpretation of the physical world, but they vary greatly in their capabilities and levels of complexity. Some basic tasks require only minimal effort; others (e.g., predicting people’s behavior) demand far more sophisticated computer vision solutions.
For simple insights, companies often rely on existing infrastructure. For example, tracking mobile devices via GPS, Wi-Fi, or Bluetooth might provide companies with a rough understanding of their environment by supplying information such as the number and movements of people in a certain area. However, the added value of such solutions is limited to specific use cases. For any additional insights, device tracking solutions do not fulfill the necessary requirements.
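To make the rough nature of device-tracking insights concrete, here is a minimal sketch of how footfall might be estimated from Wi-Fi probe records. The data format and function name are hypothetical (in practice, MAC randomization and signal noise make this far less reliable); the point is that such infrastructure yields only coarse counts, not behavior.

```python
from collections import defaultdict

def estimate_footfall(probe_records, window_seconds=300):
    """Estimate distinct devices seen per time window.

    probe_records: iterable of (timestamp_seconds, device_id) tuples,
    e.g. parsed from Wi-Fi probe requests. Device IDs are assumed to
    be stable for the duration of a visit -- a simplification, since
    modern phones randomize their MAC addresses.
    """
    windows = defaultdict(set)
    for timestamp, device_id in probe_records:
        window_start = int(timestamp // window_seconds) * window_seconds
        windows[window_start].add(device_id)
    # Map each window start to its distinct-device count, in time order.
    return {start: len(devices) for start, devices in sorted(windows.items())}

records = [
    (10, "aa:01"), (40, "aa:02"), (70, "aa:01"),     # first 5-minute window
    (310, "aa:03"), (350, "aa:04"), (590, "aa:03"),  # second window
]
print(estimate_footfall(records))  # {0: 2, 300: 2}
```

The result is a head count per interval and nothing more: no identities, no paths through the space, no context about what people were doing.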
Due to the limitations of the given infrastructures, most companies rely on dedicated sensors to understand the world around them. But even these technologies vary greatly in complexity. Many generic devices, such as infrared sensors, are limited to specific use cases, such as counting the number of passersby. Even more complex solutions, such as thermal sensors, often require controlled environments with well-defined limits to function properly.
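The simplicity of a generic sensor such as a break-beam infrared counter can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: the sensor reduces the entire scene to a single boolean signal, and its blind spots follow directly from that reduction.

```python
def count_beam_breaks(samples):
    """Count passersby from a break-beam infrared sensor.

    samples: sequence of booleans, True while the beam is blocked.
    Each contiguous blocked interval counts as one passerby -- the
    sensor cannot tell one person from a group walking side by side,
    which is precisely its limitation.
    """
    count = 0
    previously_blocked = False
    for blocked in samples:
        if blocked and not previously_blocked:
            count += 1  # rising edge: the beam just became blocked
        previously_blocked = blocked
    return count

# Two people pass separately, then a pair walking together (one long block).
signal = [False, True, True, False, False, True, False, True, True, True, False]
print(count_beam_breaks(signal))  # 3
```

The pair walking together registers as a single event: the sensor's controlled, well-defined limits are exactly what keep it from capturing the ambiguity of real scenes.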
Embracing the Complexity with Computer Vision
Of all currently available sensor solutions, computer vision solutions feature the highest degree of complexity. In computer vision, visual sensors capture an unfiltered portrayal of their surroundings. Sophisticated computer vision algorithms then interpret the sensor data, allowing for a depth of insights that no other sensor solution can offer.
While other sensors must simplify their environment to complete their tasks (e.g., people counting), computer vision algorithms intentionally capture the complexity of their surroundings. Understanding the ambiguity of the physical world instead of simplifying it is essential in real-world applications, in which context influences future decisions.
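The contrast with simpler sensors can be illustrated with even the most basic vision technique: frame differencing. Real computer vision pipelines use far more sophisticated algorithms (typically neural networks), but this toy sketch shows the key difference in kind — a visual sensor preserves *where* in the scene change happens, the spatial context that richer interpretation builds on.

```python
def motion_mask(prev_frame, frame, threshold=30):
    """Per-pixel motion mask from two grayscale frames.

    Frames are lists of rows of 0-255 intensity values. Unlike a beam
    counter's single boolean, the output keeps the spatial layout of
    the scene, so downstream logic can reason about where motion is.
    """
    return [
        [abs(a - b) > threshold for a, b in zip(prev_row, row)]
        for prev_row, row in zip(prev_frame, frame)
    ]

def moving_pixel_count(mask):
    """Total number of pixels flagged as changed."""
    return sum(sum(row) for row in mask)

frame_t0 = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame_t1 = [[10, 10, 10], [10, 200, 200], [10, 10, 10]]  # object enters mid-row

mask = motion_mask(frame_t0, frame_t1)
print(moving_pixel_count(mask))  # 2
```

Even this crude mask answers questions a beam counter cannot — which part of the scene is active, and how large the moving region is — hinting at why vision data supports so much deeper interpretation.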
Computer vision has dramatically changed the possibilities of entire businesses and has even created new fields, such as autonomous driving. Before the advent of computer vision algorithms, cars relied almost entirely on human input. Apart from limited use cases such as emergency braking and lane keeping, cars could not intervene, let alone drive autonomously.
But now, developers of self-driving systems, such as Tesla (with Autopilot) and Waymo, rely on computer vision algorithms powered by neural networks to perceive the road. Understanding even the most complex and ambiguous contexts is essential to advancing autonomous driving; only once this understanding is achieved will it be possible to eschew human input while driving. Computer vision is thus imperative for capturing such unpredictable surroundings.
Besides autonomous driving, other industries like retail and commercial real estate increasingly rely on computer vision solutions to build innovative applications. In the second part of this blog post, I will provide insights into how Advertima uses computer vision to power such applications (e.g., the Advertima Smart Signage solution and the Advertima Autonomous Store).