Nov 30, 2018

Travelling Salesman Challenge 2.0 wrap-up

How we found the winners of a trip around the world based on an algorithm they wrote

We’ve already known the winners of the Travelling Salesman Challenge 2.0 for some time now, but there are still some things we’d like to share with you. In this wrap-up we are bringing you:

  • all the datasets used during the two evaluation phases (the development one and the final one), plus the final results, so that everyone can see and learn from their mistakes;
  • a brief introduction of the winning algorithm by its authors — Petr Váňa, Robert Pěnička, and Vojtěch Vonásek;
  • a short comparison of the last two data challenges from last year’s winner Petr Lavička, who explains why he won last year but failed this year’s challenge.

The challenge and its preparations

Since the Kiwi.com team that prepared last year’s challenge is busy polishing NOMAD — Kiwi.com’s feature solving the Travelling Salesman Problem using real-life data — we teamed up with a Polish company called Sphere Research Labs which helped us prepare the contest.

After numerous meetings and rearrangements, we prepared the assignment now well known to all competitors in the Travelling Salesman Challenge 2.0. This year, we asked the contestants to determine the cheapest possible connection between specific areas, with an area being a set of cities.

The solution had to meet certain criteria:

  • the trip had to start from a city we set;
  • in each area, exactly one city had to be visited;
  • the trip had to move to a new area every day, each day continuing from the city reached the day before;
  • the entire trip had to end in a city located within the starting area.
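Spelled out in code, these rules amount to a simple validity check. The following is only an illustrative sketch: the data structures (areas as lists of city codes, a trip as one city per day) are our assumptions, not the contest's actual input format.

```python
# A minimal sketch of checking a candidate trip against the rules above.
# Areas as lists of city codes and a trip as one city per day are
# illustrative assumptions, not the contest's input format.

def is_valid_trip(trip, areas, start_city):
    """trip lists the city for each day, ending back in the starting area."""
    area_of = {city: i for i, a in enumerate(areas) for city in a}
    if trip[0] != start_city:
        return False                         # must start in the given city
    if len(trip) != len(areas) + 1:
        return False                         # one flight per day, one area per day
    if area_of[trip[-1]] != area_of[start_city]:
        return False                         # must end in the starting area
    visited = [area_of[c] for c in trip[:-1]]
    return len(set(visited)) == len(areas)   # exactly one city per area

areas = [["PRG", "BRQ"], ["LHR"], ["JFK", "EWR"]]
print(is_valid_trip(["PRG", "LHR", "JFK", "BRQ"], areas, "PRG"))  # True
print(is_valid_trip(["PRG", "LHR", "JFK", "LHR"], areas, "PRG"))  # False
```

Note that the last city may differ from the start city, as long as it lies in the starting area.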

It wasn’t easy to come up with a successful solution; however, preparing the task wasn’t easy either.

“The most exciting phase when creating the assignment was preparing the test cases. The tests based on real Kiwi.com data required a lot of work with the huge database of flight connections and we had to choose the right subsets of connections so that they would be adequately versatile,” explains Mateusz Kaczkowski, Senior Sales Specialist at Sphere.

At the same time, Sphere Labs had to prepare artificial tests intended to be a bigger challenge for the participants. “We created the small tests in such a way that finding any trip plan at all was very challenging, or almost impossible, for common greedy techniques,” Mateusz says.

“In the medium tests, we leveraged data from the literature and two super interesting non-Hamiltonian graphs: the Tutte graph and the Horton graph. However, the tests based on those two graphs, despite their medium size, were very difficult when it comes to finding an optimal solution,” adds Mateusz.

Since the challenge has ended, all the prepared datasets are now publicly available and you can find them in the section below.

Datasets and scoring system

Altogether, the 500+ registered teams submitted 8697 solutions and all of them were judged in the development phase of the challenge. In this first phase, every solution was executed using data from four public test cases:

  • two test cases containing data of real flight connections from Kiwi.com,
  • two test cases containing artificial data designed to verify the universality of the algorithm for specific connection networks.

In the second phase, we took each team’s best solution (only around 100 teams were able to score more than 0 points) and assessed it on the basis of ten classified test cases:

  • two test cases containing data of real flight connections from Kiwi.com,
  • eight test cases containing artificial data designed to verify the universality of the algorithm for specific connection networks.

Each test case was assessed independently, and the final mark is the weighted total of the scores obtained from the individual test cases.

The final results

The final results can be found here.

Check the video below to see the recording of the Award Ceremony in Prague.

You can find more details about the evaluation process on this website, just scroll down to Scoring. Please note that you need to be registered and logged in to the challenge platform.

Review of datasets and prices

Below you can review the datasets to see what data your algorithm did well on and where it struggled and, hopefully, learn from your mistakes. Many contestants asked to see the datasets, and they can be downloaded below:

1. Development phase
2. Evaluation phase

Also, you can see the trip prices that the scoring system treats as a result worth 100 points for each dataset:

1. Development phase

1.in — 1396
6.in — 2159
11.in — 44153
14.in — 118811

2. Evaluation phase

2.in — 1498
3.in — 7672
4.in — 14024
5.in — 698
7.in — 31681
8.in — 4052
9.in — 76372
10.in — 21167
12.in — 65447
13.in — 97859

Four questions for the winning team

The three winners — Petr Váňa, Robert Pěnička, and Vojtěch Vonásek — are two PhD students and a researcher at the Faculty of Electrical Engineering at the Czech Technical University in Prague, which is also where they got together as a team.

“A day before the event started, I came across a short article about it and immediately thought to myself that we had a chance, since we deal with similar problems as part of our studies. The next day I wrote to Petr, we teamed up with Vojtěch, and that was it,” explains Robert about how they formed the team. His initial hunch was right: the first solutions they submitted hit 500+ points.

How many solutions did you submit before ending up first?

Right on the first day of the challenge, we wrote the input loading in C++ and submitted a functional solution the same day. It was an elementary search based on the breadth-first search (BFS) algorithm, with one major modification: we limited the number of nodes expanded at each tree depth and preferred expanding those reached via lower ticket prices. Surprisingly, it scored 528 points and put us in first place, where we then stayed for a long time.
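The modification the team describes is essentially a beam search: a breadth-first expansion that keeps only a bounded number of the cheapest partial routes at each depth. Below is a minimal sketch under assumed data structures (a per-day flight table), not the team’s actual C++ code.

```python
# Sketch of a width-limited BFS ("beam search") over daily flights.
# The flight-table format and all names are illustrative assumptions.
import heapq

def beam_search(flights, areas, start_city, width=1000):
    """flights[day][city] -> list of (price, dest); areas is a list of
    city-code lists. Returns the cheapest complete trip found, or None."""
    area_of = {c: i for i, a in enumerate(areas) for c in a}
    n = len(areas)
    # a state is (total price, current city, visited areas, route so far)
    beam = [(0, start_city, frozenset({area_of[start_city]}), [start_city])]
    for day in range(n):
        last_day = day == n - 1
        candidates = []
        for price, city, seen, route in beam:
            for cost, dest in flights[day].get(city, []):
                a = area_of[dest]
                if last_day:
                    if a != area_of[start_city]:
                        continue              # last flight must return home
                elif a in seen:
                    continue                  # earlier flights: new area only
                candidates.append((price + cost, dest, seen | {a}, route + [dest]))
        # keep only the `width` cheapest partial routes at this depth
        beam = heapq.nsmallest(width, candidates, key=lambda s: s[0])
        if not beam:
            return None   # every feasible continuation was pruned away
    return min(beam, key=lambda s: s[0])
```

Unlike plain BFS, this can prune away the optimal route when the beam is too narrow, which is the usual trade-off of bounding the frontier.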

What were your next steps?

Since the datasets were publicly available and we knew the evaluation process, the following development was done locally. We shared the code and other instances via git and we wrote a script that helped us reveal the score in the system. It was easier and more effective than submitting every minor change in our program. Moreover, it helped us optimise particular parts of the program.

Your solution scored 640 points and you submitted it a day before the deadline. Had you had it ready for some time before that?

Yes, we had. According to the logs from our git repository, we’d already passed 600 points on 12 October, only three days after the start of the challenge; however, we held off submitting to make sure that the final version would be general enough and wouldn’t contain too many mistakes. After all, the last “bug” was discovered by Robert just before the final submission; it was related to the possibility of ending the trip somewhere other than where it began.

Petr, Robert and Vojtěch receiving their prize

Can you briefly introduce your solution and say why you think your algorithm was the best?

If we’ve got the right information, last year most people used local optimisation (such as simulated annealing with the 2-opt operator). When we went through the assignment and the public instances, we realised that the input data can be quite sparse and that local optimisation methods might not work very well.

That’s why we opted for a classical method of state space (tree) search driven by heuristics. However, in the end, it turned out that the problem is combinatorially so complex that even though we tried different methods, a modified BFS was the best choice.

After loading the problem, we divided the remaining time into equal segments according to the number of areas, and in each time segment we expanded as many flights as possible for the given day, prioritised by a heuristic rating. The route plan was thus constructed gradually from beginning to end. For better guidance of the search, we combined two different heuristics: the first one preferred visiting the cities with the most expensive onward flights first, and the second one estimated the cost of reaching the cities that hadn’t been visited yet.

Applying the winning algorithm to Kiwi.com data

As promised in the announcement of the challenge, we applied the winning algorithm to our real-life data and searched for a trip that would combine visiting all the world continents. The final cost of such a trip would be €2,587 and the guys would travel from Europe to Africa, Oceania, Asia, South America, North America and back to Europe.

However, Petr, Robert and Vojtěch are getting Kiwi.com credits they can use as they wish. So far, they’ve been planning honeymoons, bachelor parties and visiting friends worldwide.

Sometimes you win, sometimes you lose

As the winners of this year’s challenge already mentioned, last year’s solutions were mostly based on local optimisation, such as the simulated annealing also used by Petr Lavička, who won the Travelling Salesman Challenge last year. Below, you can read his story about why he won last year but failed this time around.

Petr’s story

Last year, I used the simulated annealing algorithm, which is really simple. The main idea is that you start with a random permutation of cities and then you just randomly swap cities in the path. If the new path is better than the previous one, you always accept it. If the new one is worse, you decide whether to accept it based on how far you are in the computation. The decision depends on a parameter called temperature, which must be handcrafted for a specific problem. As the calculation continues, the temperature decreases and, in the end, it is so low that you only accept better solutions and settle into a local or global minimum. The whole algorithm was performance-optimised to do as many iterations as possible. That’s it: the winning solution from last year.
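Petr’s description maps directly onto the textbook simulated-annealing loop. Here is a generic sketch for a plain TSP tour over a distance matrix; the geometric cooling schedule is illustrative and, as he notes, the temperature parameters would need hand tuning per problem.

```python
# Textbook simulated annealing on a city permutation; a generic sketch,
# not Petr's actual solution. `dist` is a symmetric distance matrix.
import math, random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, iters=100_000, t_start=100.0, t_end=0.01):
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)                    # start from a random permutation
    cost = tour_cost(tour, dist)
    for k in range(iters):
        t = t_start * (t_end / t_start) ** (k / iters)   # geometric cooling
        i, j = random.sample(range(n), 2)
        tour[i], tour[j] = tour[j], tour[i]              # random city swap
        new_cost = tour_cost(tour, dist)
        # always accept improvements; accept worse tours with a probability
        # that shrinks as the temperature drops
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            tour[i], tour[j] = tour[j], tour[i]          # undo the swap
    return tour, cost
```

A production version would avoid recomputing the full tour cost after every swap and update only the affected edges, which is where most of the iteration-count tuning Petr mentions comes from.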

This year, the assignment was a bit more complicated since there were areas of cities instead of individual cities. The modification of my algorithm was simple: the path consisted of areas rather than cities, and next to the area-exchange operation I added one that swapped the chosen city within an area. As a last step, I tuned the code so that it managed 40% more iterations than last time.

However, it didn’t work and here are the three main reasons:

  1. The algorithm is based on probability: you need to generate lots of random numbers and there is a possibility that you hit “the wrong numbers”. Last time, there were 4 CPU cores available so I ran it four times and chose the best solution. This year, we had only one core.
  2. The differences between the Sphere Engine platform and the Docker setup used last year. In my opinion, Sphere Engine may be good for some challenges, but not for ones where you need to optimise. I needed to see the output of my program to tune parameters such as the temperature and to measure how many iterations I could do, which wasn’t possible this time.
  3. However, the last and main reason was that I noticed too late that some datasets were really sparse! When you randomly choose flight tickets in a set that is sparse, you often hit an invalid path. My algorithm worked in such conditions but it would need much more time to find a solution that could compete with more successful ones.

In summary

We’re already thinking about how to challenge you next time and if you haven’t subscribed to our newsletter, do it now so as not to miss the next competition and other events we organise.

And you can always work on real-life challenges with us, check out the open positions and join our team!
