Accident investigation involves a large amount of survey work, often over large areas, for parts that may be partially buried. The site can be hazardous, and the evidence may deteriorate quickly due to site conditions and any rescue operations. Mapping all of this out, and structuring the data to support the long-term investigation of the accident, which may continue for months or more afterwards, is a massive task. Read more about the complexity of accident sites here.
Within this context, is there a role for AI?
There are already many systems in use to generate data quickly. LIDAR survey systems from the building industry are used by some teams. Drones have also found a place in accident investigation for taking aerial site photos, mapping, photogrammetry, and reconstructing what the pilot's view might have been on the approach to the site. These and more have been used to great effect alongside more conventional photographic and pen-and-paper methods.
However, current systems based on drones and commercial LIDAR surveys are time-consuming and can require lengthy post-processing to remove moving objects from the data, such as colleagues and others walking around the scene to perform rescue or other critical activities.
The big question is always metadata: data about the data. Attaching labels, coordinates, and other information to photos. What is actually in a photo? What does it mean? Humans have no way to know without looking at it, so collating all photos of the same object, or of pieces of the same object, is non-trivial. Assigning further meaning to what is in the photo requires further work still. This situation is ripe for intelligent systems to provide support, and this is a growing use case in the wider industry.
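As a small illustration of the kind of metadata a camera already attaches to photos, here is a minimal sketch using the Pillow library to pull any GPS coordinates out of a photo's EXIF data; the file name is hypothetical, and which tags appear depends entirely on the camera:

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_metadata(path):
    """Return any GPS EXIF tags recorded by the camera, keyed by name."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPS IFD pointer tag
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

# Hypothetical site photo; prints e.g. GPSLatitude and GPSLongitude if present
print(gps_metadata("IMG_0412.jpg"))
```

Coordinates like these tell you where a photo was taken, but they say nothing about what is in it, which is exactly the gap described above.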
AI systems are becoming ever more accessible thanks to active development work. This makes it practical to build relevant training sets for use cases such as accident investigation site survey and labelling, and then to create AI that can be used in this role. The emergence of hosted solutions, often free, for both programming tuition and exploring new topics has vastly lowered the barrier to entry for testing new methods.
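As a concrete sketch of what building such a training set can look like, assuming site photos have been hand-sorted into one folder per object type (the folder and class names here are hypothetical), the torchvision library can turn that layout directly into a labelled dataset:

```python
from torchvision import datasets, transforms

# Hypothetical layout: site_photos/train/<object_type>/*.jpg,
# one folder per type of object we want the model to recognise
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("site_photos/train", transform=tfm)

print(train_set.classes)  # folder names become the class labels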
Now all that is needed is a computer of some kind and internet access. Some of the services can even be run in the browser on a smartphone.
One is Google Colab, an impressive service that attaches Graphics Processing Units (GPUs) to the server instances running the notebooks to enable accelerated processing.
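As a quick illustration, a couple of lines using the PyTorch library (which Colab provides out of the box) will confirm whether a GPU runtime is actually available to your notebook:

```python
import torch

# In a Colab notebook with a GPU runtime selected, this reports the CUDA device
if torch.cuda.is_available():
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("Running on CPU only")
```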
Another is Binder, which not only allows you to work on notebooks stored in GitHub but is also entirely open source, so the system can be replicated on your own infrastructure if you so wish. The project has created an impressive setup in which various institutions donate processing time so that the public can benefit from this free service for sharing and learning programming.
Much of this has been made possible by Jupyter notebooks (part of the Python ecosystem), but configuring the different software libraries and dependencies needed for things like faster algorithms or GPU support was only made practical by container technologies. These are starting to replace virtual machines as a way to create a computing environment wherever it is required, and to be sure it has been configured correctly so that your code will run.
It is within this context that we can now take low-cost AI-enabled cameras, such as the OAK-D by Luxonis, and create our own custom models to detect, within images and video, the types of things relevant to our challenges. It is difficult to overstate how far things have come in terms of capability, usability, and documentation. We can even train our own AI on Google Colab and then embed it in smart cameras like the OAK-D.
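As a sketch of what that deployment can look like, the snippet below uses Luxonis's depthai Python library to run a custom-trained model on the camera itself. The model file name is hypothetical, and the choice of a MobileNet-style detection network (compiled to the OAK's .blob format) is an assumption for illustration:

```python
import depthai as dai

# Build a pipeline: colour camera -> detection network -> results back to host
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)   # match the input size the model was trained for
cam.setInterleaved(False)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("site_objects.blob")  # hypothetical custom-trained model file
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

# Upload the pipeline to the camera and read back labelled detections
with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="detections", maxSize=4, blocking=False)
    while True:
        for det in queue.get().detections:
            # det.label is an index into the classes the model was trained on
            print(det.label, f"{det.confidence:.2f}")
```

All of the inference happens on the camera; the host machine only receives the small, already-labelled results.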
This is an example of the fabled edge AI architecture: the AI is embedded in the device that needs to be enhanced in some way. In our case, that means labelling objects in the images so they can be more easily indexed and searched afterwards.
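To make that last step concrete, here is a minimal sketch of turning those labels into a searchable index; the detection records are hypothetical stand-ins for the camera's output:

```python
import json
from collections import defaultdict

# Hypothetical detection records, one per captured image
records = [
    {"image": "IMG_0412.jpg", "labels": ["wheel", "panel"]},
    {"image": "IMG_0413.jpg", "labels": ["panel"]},
]

# Invert to a label -> images index, so investigators can pull every
# photo of a given object type in a single lookup
index = defaultdict(list)
for record in records:
    for label in record["labels"]:
        index[label].append(record["image"])

print(json.dumps(index, indent=2))  # e.g. {"wheel": [...], "panel": [...]}
```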
This, then, is the start. AI could be useful. But there are, of course, downsides. What happens when labels are incorrect? What is an acceptable accuracy rate? Who controls the AI and systems we use? Manufacturers? Regulators?
If we want to alter the AI system in our device to better serve our needs, currently we can. However, this is only because the majority of available platforms are still DIY to a large extent. What about when an AI-enabled accident investigation system becomes a product to buy? What happens when it is found that it cannot detect accident features involving specific items, perhaps because those items did not exist when the AI was trained? Can we update it? Will the company? Is it in their interests? This draws in wider questions around ownership and freedom that have been better discussed by others.
AI could certainly be a very useful tool in the investigator's arsenal. It simply has disadvantages to go with it, the same as every other tool we have ever had. That's just what makes these things so interesting.
Interested in learning more? Our Cranfield Safety and Accident Investigation Centre (CSAIC) offers many accident investigation-focused CPD courses. Get in touch for further information.
Have you read our recent blog post 'Why cricket and trees help you work with evidence', in which Alan Parmenter, Manager, Accident Investigation at Cranfield University, discusses site management and working with evidence during accident investigations?