The secret to winning in Formula One racing, or any other motor sport, is simple: go faster than everybody else. But finding a lasting recipe for success on the racetrack today requires getting hundreds of variables right, so it’s no wonder that F1 teams are turning to big data analytics for help.
Data analytics has become a critical factor in Formula One racing. It’s difficult to see from the outside, because teams rarely talk about it, but they understand its importance.
An F1 Grand Prix is a three-day weekend event that typically includes two days of practice and qualifying followed by the race on Sunday. On Friday, the racecar is loaded with sensors so the team can see what’s working and what’s not. By Saturday, the car is stripped of all but the most critical sensors, and the car that races on Sunday must be identical to the Saturday car.
Setting up the car for each race is a painstaking process that involves balancing a variety of variables: What type of track are they running on? How did the racecar do last year? How much downforce should they shoot for? How much air should go in the tires? How quickly should the transmission shift gears? What will the weather be like? What did the wind tunnel tests show? And how are other teams expected to configure their racecars?
Tweaking one variable in the configuration can have an impact on others, creating a cascading effect that leaves the racecar better or worse off. It’s the engineers’ job to use an array of data processing and analysis techniques to determine which configurations will lead to more favorable outcomes.
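That search over interacting setup variables can be pictured with a toy model. The sketch below is purely illustrative: the lap-time function, the parameter ranges, and the coefficients are invented for demonstration, not drawn from any team's actual tools, which rely on far richer simulations and telemetry.

```python
# Illustrative sketch only: a brute-force search over a toy car-setup model.
# All numbers and the lap_time() formula are invented for demonstration.
from itertools import product

def lap_time(downforce, tire_pressure):
    """Hypothetical lap time in seconds; lower is better.
    More downforce helps cornering but adds drag; tire pressure has a
    sweet spot; the interaction term mimics the cascading effect where
    changing one variable shifts the optimum of another."""
    drag_penalty = 0.04 * downforce
    corner_gain = -0.06 * downforce
    pressure_penalty = 0.5 * (tire_pressure - 21.0) ** 2
    interaction = 0.002 * downforce * (tire_pressure - 21.0)
    return 90.0 + drag_penalty + corner_gain + pressure_penalty + interaction

# Exhaustively evaluate a coarse grid of candidate setups.
downforce_levels = range(10, 41, 5)          # arbitrary units
pressures = [20.0, 20.5, 21.0, 21.5, 22.0]   # psi
best = min(product(downforce_levels, pressures),
           key=lambda cfg: lap_time(*cfg))
print("best setup:", best, "-> lap time:", round(lap_time(*best), 3))
```

Even in this two-variable toy, the best tire pressure depends on the downforce level, which is why teams evaluate configurations jointly rather than tuning one knob at a time.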
On a Friday, a Formula One racecar can be equipped with up to 300 sensors that measure all aspects of the car: the temperature of the transmission fluid, the state of the engine oil, the pressure and temperature of the tires, the car’s ride height, and so on. The number of sensors is pared down on the Saturday and Sunday cars to save weight.
The sensors sample conditions at rates ranging from a thousand times per second down to once per second. All told, about 18,000 channels of data flow into the team’s servers, amounting to about 500GB of data per race weekend. The engineers say that over the course of a season, that translates into about 10TB of data.
Considering that the team brings the previous season’s data to each race and sometimes the data from the year before that too, it all adds up to lugging about 30TB of baseline data from race to race around the world.
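Those volumes hold up to a rough back-of-envelope check. In the sketch below, the average sample rate, bytes per sample, hours of track running, and number of races are all assumptions chosen to be consistent with the figures quoted above, not numbers from any team:

```python
# Back-of-envelope check of the data volumes quoted in the article.
# avg_rate_hz, bytes_per_sample, track_hours, and races are assumptions.
channels = 18_000         # from the article
avg_rate_hz = 160         # assumed average across 1 Hz .. 1 kHz channels
bytes_per_sample = 8      # assumed: one timestamped float per sample
track_hours = 6           # assumed hours of running per weekend

weekend_bytes = channels * avg_rate_hz * bytes_per_sample * track_hours * 3600
print(f"~{weekend_bytes / 1e9:.0f} GB per weekend")   # on the order of 500 GB

races = 21                # a plausible modern season length
season_bytes = weekend_bytes * races
print(f"~{season_bytes / 1e12:.1f} TB per season")    # on the order of 10 TB
```

With a season around 10TB, carrying the current season plus two prior seasons as a baseline lands near the 30TB figure the teams describe lugging from race to race.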
During a race weekend, each team might have 30 employees analyzing data at the track, while anywhere from 30 to 200 more analyze it back at racing headquarters. The teams are producing ever more data, and given the volume generated, they’re limited by the number of eyes that can analyze it.