There’s a scene in Black Panther where Shuri, Letitia Wright’s character, hops into a car seat and remotely drives a sleek Lexus sedan through the streets of Busan, South Korea. While not quite a driverless-car experience, remote driving offers a tantalizing glimpse of the future of transportation.
These are, after all, heady times for the transportation industry. Google and Uber are testing self-driving cars at this very moment. Drone deliveries from Amazon will happen sooner rather than later. Suddenly, former figments of science fiction, from driverless cars and instantaneous delivery via unmanned aircraft to spaceships to Mars and trains that cross continents in mere minutes, look very real.
But all of these transportation advancements are going to need two big things: video and deep learning.
The Future of Transportation
Take self-driving cars first. For self-driving cars to become an everyday reality, cars will need to be able to “see”, to take in data from the environment and the road. Sensors in the back of the car are commonplace now, helping drivers back out of tight parking spaces, for example. But for a self-driving car to be “safe”, it will need to mimic human senses, constantly monitoring visual stimuli not just on the road in front of it but all around it as well. Unfortunately, the Uber self-driving car fatality in Arizona stands as an example of what happens when that standard of safety fails.
If we break down the scene from the movie clip above, we see a video projection of the environment outside the car, an example of embedded vision, or computer vision. Replicating human sight with computers is an incredibly difficult task. Scientists are still working to uncover exactly how the brain translates input from the eyes and converts it into knowledge.
What has been accomplished in computer science are the kinds of tasks that many able-bodied humans perhaps take for granted. These tasks include object detection, face recognition, and human activity recognition in video data.
In order to process that data in real time for cars, there are two options: process the video within the car or send it to a home base. Manufacturers are working on Wi-Fi-enabled video streams that will send the packets of video data. But that still leaves the issue of video data storage. While cloud data infrastructures make it much easier to store and process gigantic amounts of data, driverless cars will rely on hundreds of gigabytes of video data per second (Computer World), which will push current technology to its limits.
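A back-of-envelope calculation shows why these volumes strain storage and bandwidth. The camera count, resolution, and frame rate below are illustrative assumptions, not figures from any manufacturer:

```python
# Rough arithmetic on raw (uncompressed) video from a surround-view camera rig.
# All parameters are assumed for illustration.

cameras = 8                  # assumed surround-view cameras
width, height = 1920, 1080   # pixels per frame (1080p)
bytes_per_pixel = 3          # uncompressed 24-bit RGB
fps = 30                     # frames per second

bytes_per_second = cameras * width * height * bytes_per_pixel * fps
gb_per_second = bytes_per_second / 1e9
gb_per_hour = gb_per_second * 3600

print(f"{gb_per_second:.1f} GB/s raw, {gb_per_hour:,.0f} GB per hour of driving")
# → 1.5 GB/s raw, 5,375 GB per hour of driving
```

Even this modest assumed rig produces terabytes per hour before compression, which is why the choice between on-board processing and streaming to a home base matters so much.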
In Australia, two swimmers were rescued thanks to an unmanned aerial vehicle, or drone. Lifeguards on the beach happened to be learning how to use the Westpac Little Ripper Lifesaver drone when they were notified of two swimmers who had been pulled out to sea by a riptide. The lifeguards navigated the drone over to the swimmers and dropped a life saver. This was the first time drones had ever been used in a rescue mission (National Geographic). Here is the video taken of the rescue operation:
Thanks to the drone’s ability to stream live video, seen above, the lifeguards were able to identify the swimmers and trigger the drone to drop the life saver. But what if the drone could identify swimmers in danger on its own, without the need for human identification?
The future of drones depends on AI-powered video analytics. The researchers at Westpac, the same group that developed the rescue drone, have also developed a new AI-powered autonomous detection system for sharks. This push for autonomous detection could have numerous applications for safety operations.
It’s a common problem for security guards monitoring live video feeds: watching screens for hours is monotonous. According to IBM, the average attention span of a security guard monitoring a screen is 22 minutes. If big data analytics like activity detection could be applied to streaming video from a live drone or a CCTV camera, those monitoring video feeds could be alerted automatically when suspicious activity is detected. Expediency in crisis situations is incredibly important: in the Australian drone rescue example, rescuing the swimmers was actually faster with a drone than it would have been with lifeguards in the water.
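The simplest version of such automated alerting is frame differencing: flag any frame that changes too much from the one before it. The sketch below assumes frames arrive as flat lists of grayscale pixel values (0–255) and uses made-up thresholds; production systems would use trained activity-recognition models rather than raw pixel deltas.

```python
# Minimal frame-differencing alert sketch. Frames, thresholds, and the
# toy 10x10 "scene" are illustrative assumptions.

def changed_fraction(prev, curr, pixel_delta=25):
    """Fraction of pixels whose brightness changed by more than pixel_delta."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_delta)
    return changed / len(curr)

def monitor(frames, alert_threshold=0.2):
    """Yield the index of every frame showing a significant scene change."""
    for i in range(1, len(frames)):
        if changed_fraction(frames[i - 1], frames[i]) > alert_threshold:
            yield i

still = [10] * 100               # a quiet 10x10 frame
motion = [10] * 60 + [200] * 40  # 40% of pixels suddenly bright
alerts = list(monitor([still, still, motion, still]))
print(alerts)  # → [2, 3]
```

A guard (or a drone operator) would only be paged at the flagged frames instead of staring at an unchanging feed, which is the point of automating the first pass.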
But objects and people can look quite different from the skies. A black car on the road, seen from above, may look like an iPhone to a computer because of its rectangular shape. It takes exposure to more and more video data to improve video tracking and detection algorithms.
Human Safety in the Age of Machines
Safety is paramount in the transportation industry. Regulating and maintaining real-time functions like drone deliveries and handling the flow of traffic will require vast amounts of video data. In order to scale the structuring, storage, and analysis of that video, deep learning data pipelines must be applied and new technologies created.
We predict that in an interconnected web of Internet of Things (IoT) applications, video and the live streaming of video data with real-time analytics will be the oil that keeps the complicated network of vehicles running smoothly.
Imagine video data coming in from the roads and the skies, paired with social data from large events. Coupled with analytics, these sources could all be used to predict the flow of traffic or illuminate the safest path home. With digital transformation and the technology boom in transportation, the sky’s the limit.
Illustrations by Alexis Low