Artificial intelligence for community applications

Anyone who lives in Warblington or Denvilles knows what a pain the local level crossing can be when you need to get anywhere in a hurry. The nearest footbridge is around 11 minutes' walk away, and driving around the crossing some other way involves a 3-mile detour.

In 2016 I created a Twitter Bot running off Network Rail live train data. It gives a thirty-minute prediction of when the barriers will be open or closed and is updated every five minutes. The purpose was to help the local community plan their journeys over the crossing to best effect. Whilst useful, it's not perfect: it depends upon the quality of the data feed, varies with the signaller on duty, doesn't pick up freight or engineering trains, and is subject to some inaccuracy around the exact timing of trains departing Havant. I've therefore been looking for a while at other solutions to augment the prediction data.

Recently, I've been playing around with software that uses deep learning techniques (a branch of Artificial Intelligence) to recognise features in video and images. I realised that determining whether the barriers are open or closed is a relatively straightforward example of deep learning for computer vision. On that basis I developed a Convolutional Neural Network (CNN) and trained it on a large number of images of the barriers, labelled either open or closed (yes, the collection and manual data labelling process was as tortuous as it sounds…). I managed to train the network to an accuracy in excess of 99%, more than adequate for this application when you consider that the barrier isn't going anywhere and is viewed from the same aspect all of the time. The architecture of the neural network is shown on the right.
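To give a flavour of what a network like this looks like, here is a minimal sketch of a binary open/closed CNN classifier in Keras. It is illustrative only: the actual architecture was shown in the figure, and the input size, layer widths and layer count here are assumptions.

```python
# Illustrative sketch of a small binary CNN for barrier classification.
# Assumes 64x64 greyscale frames; the real architecture may differ.
from tensorflow.keras import layers, models

def build_barrier_cnn(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),   # low-level edge features
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),   # higher-level shapes
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),     # P(barrier closed)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Because the camera never moves and the barrier is always seen from the same aspect, even a modest network like this can reach very high accuracy on the task.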

The code was developed using Keras running on top of TensorFlow. Whilst I can work with both, I have to say that working with Keras is a delight. It really lets you focus on the network architecture rather than worrying about too many details in the code itself, which lends itself to much more rapid prototyping and fewer errors too. I haven't found Keras performance to be an issue in the slightest.
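As an example of how compact a Keras training run is, the whole fit step amounts to a few lines. In this runnable sketch, random arrays stand in for the real labelled barrier images and the tiny model is a placeholder, not the network actually used:

```python
# Sketch of a Keras training call. Random arrays stand in for the
# real labelled barrier images; the tiny model is illustrative only.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Flatten(),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# 16 fake 64x64 greyscale "frames" with random open/closed labels
x = np.random.rand(16, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(16, 1))

history = model.fit(x, y, epochs=2, batch_size=8, verbose=0)
```

With real data you would simply swap the random arrays for the labelled images; the training loop itself doesn't change.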

Initially I developed quite a large CNN. It was very accurate, though it took a while to train. However, when I moved from my development machine to the Raspberry Pi that runs the Twitter Bot, the model was unfortunately too large. After pruning the network to a manageable size whilst retaining sufficient accuracy, I implemented the code on the Pi 3, plugged in a webcam, pointed it at the level crossing and, hey presto, the software can now determine when the barrier has either come up or gone down and posts that information to Twitter.
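Posting only when the barrier actually changes state needs a little logic around the per-frame classifier. This is a hypothetical sketch of one way to do it, not the code the Pi actually runs: a change is accepted only after several consecutive agreeing frames, which guards against a single misclassified frame triggering a spurious tweet. The classifier and the Twitter call are stubbed out.

```python
# Hypothetical debouncing wrapper around the per-frame classifier.
# A state change is accepted only after `confirm_frames` consecutive
# agreeing predictions, so one-off misclassifications are ignored.
class BarrierMonitor:
    def __init__(self, confirm_frames=3):
        self.confirm_frames = confirm_frames
        self.state = None        # "open" or "closed" once established
        self.candidate = None    # state we might be transitioning to
        self.streak = 0          # consecutive frames agreeing with candidate

    def update(self, prediction):
        """Feed one per-frame prediction ("open"/"closed").
        Returns the new state when it has just changed (i.e. a tweet
        should be posted), otherwise None."""
        if prediction == self.state:
            # Back to the known state; abandon any pending transition.
            self.candidate, self.streak = None, 0
            return None
        if prediction == self.candidate:
            self.streak += 1
        else:
            self.candidate, self.streak = prediction, 1
        if self.streak >= self.confirm_frames:
            self.state, self.candidate, self.streak = prediction, None, 0
            return self.state   # confirmed change: post to Twitter here
        return None
```

In use, each webcam frame is classified by the CNN and the label is fed to `update()`; whenever it returns a state, that is the moment to post.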

If you have any comments on the usefulness (or otherwise) of the Twitter Bot, do let me know.
