Today’s video telematics systems can show you what you need to see, and they’re getting smarter: they use artificial intelligence to identify potential safe-driving infractions and to cut down the amount of video footage you have to parse through.

So, you’ve pulled the trigger and started using your truck telematics to improve operational safety and inform driver training. Then your phone explodes with back-to-back notifications: it turns out one driver logged harsh braking, following distance and lane departure warning incidents in rapid succession. Naturally, you fear the worst. But what actually happened, and how do you approach the conversation? Just looking at notifications, it’s hard to say. Was your driver not paying attention? Did a car cut off your truck? Answering all of these questions comes down to context: what actually happened during the event in question?

The latest in-cab video system updates leverage artificial intelligence to better identify other vehicles, pedestrians and even your driver’s behavior. That’s great, but it creates a new problem: Who is going to sit there and watch all that footage? There are now video solutions that use machine learning to continually refine what is important to fleets, a trend that has continued to grow in the video telematics space. In the past, fleets would only capture video during, say, a hard braking or swerving event. Today, artificial intelligence can identify a stop sign, recognize that the truck rolled through it without stopping, and capture video of that event as a coaching opportunity.

As algorithms become more complex and artificial intelligence becomes smarter, insight into your drivers’ behavior and habits will become more actionable. These systems are now moving beyond object detection and behavior detection toward action detection.
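The trigger-based capture described above can be sketched as a minimal rule: save a clip whenever deceleration crosses a threshold. This is an illustrative assumption only; the threshold value, field names and function are invented for the sketch and don’t come from any real telematics product.

```python
from dataclasses import dataclass

# Hypothetical illustration of trigger-based event capture.
# The threshold is an assumption, not a figure from the article.
HARSH_BRAKE_G = 0.45  # deceleration, in g, that flags an event


@dataclass
class TelemetrySample:
    timestamp: float  # seconds since the clip started
    decel_g: float    # longitudinal deceleration, in g


def flag_capture_events(samples, threshold=HARSH_BRAKE_G):
    """Return timestamps where deceleration crossed the threshold,
    i.e. the moments the camera would save a clip for review."""
    return [s.timestamp for s in samples if s.decel_g >= threshold]


samples = [
    TelemetrySample(0.0, 0.10),
    TelemetrySample(1.0, 0.50),  # harsh braking event
    TelemetrySample(2.0, 0.12),
]
print(flag_capture_events(samples))  # [1.0]
```

The AI-based systems in the article effectively add new triggers (a detected stop sign plus a truck that never stops) on top of this simple kinematic rule, so more coaching-worthy moments get captured without anyone watching raw footage.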
Action detection is something the human brain does really well and computers, much of the time, do poorly. If you see a video of a car cutting off a truck, your awesome human brain thinks, “What a jerk!” Show that same video to a computer and, most of the time, the analysis is frame by frame: it says “nothing’s in front of me” for a while, then, all of a sudden, “something’s in front of me,” and that’s all the information you get.

With action detection, instead of analyzing data frame by frame, these cameras analyze and compare data across a time horizon. Now we can see that everything was clear, the truck was maintaining the proper following distance, and then something imposed itself in front of the truck, rather than the truck running up on the car in front of it. This matters because action detection points these companies toward algorithms that can recommend the right coaching plan, where A.I. actually tells the fleet manager what the context of a video means for that driver’s behavior.
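The frame-by-frame versus time-horizon distinction can be sketched in a few lines. A per-frame detector only knows whether a vehicle is ahead; comparing the following distance across frames can tell a cut-in apart from the truck closing on slower traffic. All numbers, names and thresholds here are invented for illustration, not taken from any real system.

```python
# Hypothetical sketch of action detection across a time horizon.
# gaps_m holds the measured following distance (meters) per frame;
# None means the lane ahead was clear in that frame.


def classify_gap_event(gaps_m, cut_in_drop_m=15.0):
    """Compare adjacent frames instead of judging each one alone.

    Returns 'cut_in' if a vehicle suddenly appears (or the gap
    collapses between two frames), 'closing' if the truck gradually
    runs up on the car ahead, else 'clear'."""
    closing = False
    for prev, curr in zip(gaps_m, gaps_m[1:]):
        if curr is None:
            continue  # lane ahead is clear in this frame
        if prev is None or prev - curr >= cut_in_drop_m:
            # Lane was clear (or the gap collapsed) one frame earlier:
            # something imposed itself in front of the truck.
            return "cut_in"
        if prev - curr > 0:
            closing = True  # gap is shrinking gradually
    return "closing" if closing else "clear"


print(classify_gap_event([None, 40.0, 38.0]))        # cut_in
print(classify_gap_event([60.0, 50.0, 42.0, 35.0]))  # closing
```

A per-frame classifier would label both sequences identically ("something is in front of me"); only the comparison across the horizon separates the driver who was cut off from the driver who needs following-distance coaching, which is exactly the context a coaching recommendation would need.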