The game of Bridge is one of the most challenging and popular card games; like Chess and Go, it is often regarded as a mind sport. The AI community has long taken an interest in developing algorithms that can play bridge. We present a deep neural network for the double dummy problem in Bridge, and walk through its design and implementation in TensorFlow.
Double dummy is a simplified version of Bridge in which all cards are visible to all players. In this setting a tree-search algorithm [1] can be used to exactly compute the outcome (i.e., the number of tricks taken by each team, assuming that every player plays optimally). Although a big simplification, double dummy remains an important building block in today's best bridge-playing programs, most of which use Monte Carlo [2] sampling to deal with uncertainty in the game: sample a large number of possible layouts of the cards in the hidden hands, solve every sampled layout double dummy, and choose the action with the best outcome across the samples. This approach was originally proposed in GIB [3] in 2001 and is still used by most bridge software, including the reigning world champion of computer bridge.
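The sampling-and-voting scheme described above can be sketched as follows. This is a minimal illustration, not GIB's implementation: `sample_layout` and `solve_double_dummy` are hypothetical placeholders for the layout sampler and the exact double dummy solver.

```python
# GIB-style Monte Carlo decision making: sample hidden-card layouts,
# evaluate every candidate action on each sample, and vote.
# `sample_layout` and `solve_double_dummy` are illustrative placeholders.
import random
from collections import Counter

def sample_layout(hidden_cards, rng):
    """Deal the unseen cards randomly between the two hidden hands (placeholder)."""
    cards = list(hidden_cards)
    rng.shuffle(cards)
    half = len(cards) // 2
    return cards[:half], cards[half:]

def solve_double_dummy(layout, action):
    """Exact trick count for `action` on the fully visible `layout` (placeholder)."""
    raise NotImplementedError

def choose_action(actions, hidden_cards, n_samples=100, evaluate=solve_double_dummy):
    """Pick the action that wins on the most sampled layouts."""
    rng = random.Random(0)
    votes = Counter()
    for _ in range(n_samples):
        layout = sample_layout(hidden_cards, rng)
        # On this sample, vote for the action with the best double dummy outcome.
        best = max(actions, key=lambda a: evaluate(layout, a))
        votes[best] += 1
    return votes.most_common(1)[0][0]
```

The `evaluate` parameter is the point of the talk: an exact solver can be swapped for a faster approximate one without changing the decision procedure.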
The difficulty with double dummy solvers is that exhaustive tree search is expensive, which makes it hard for bridge programs to consider more than about 100 samples per decision. Such small samples have too high a variance to reliably identify the optimal action in practice [4]. We trained a deep neural network to predict the outcome of double dummy play, and propose to use it as an approximate evaluation function within the sampling approach, as an alternative to the much slower exact solvers. This enables the use of larger samples, making the choice of the optimal action more reliable.
The talk will describe the architecture and TensorFlow implementation of the proposed neural network, introducing a novel approach of using convolutional layers to extract features from the layout of the cards in a card game. The proposed approach is compared to the state-of-the-art double dummy neural network model published in [5].
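As a rough illustration of the idea of convolving over a card layout, the sketch below encodes a deal as a one-hot tensor of shape (4 suits, 13 ranks, 4 player planes) and applies convolutions along the rank axis. The exact encoding, layer sizes, and kernel shapes here are assumptions for illustration, not the architecture presented in the talk.

```python
# Minimal sketch: a convolutional network over a card-layout tensor.
# Input encoding (assumed): 4 suits x 13 ranks x 4 one-hot player planes.
import numpy as np
import tensorflow as tf

def build_model():
    inputs = tf.keras.Input(shape=(4, 13, 4))
    # Convolve along the rank dimension within each suit, so the filters
    # can pick up local patterns such as sequences and holdings.
    x = tf.keras.layers.Conv2D(64, (1, 3), padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, (1, 3), padding="same", activation="relu")(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    # 14 classes: 0 through 13 tricks for the declaring side.
    outputs = tf.keras.layers.Dense(14, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_model()
pred = model(np.zeros((1, 4, 13, 4), dtype="float32"))
```

Treating the deal as a suit-by-rank "image" is what lets standard convolutional layers serve as feature extractors here.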
Code examples will be made available on GitHub.
[1] E. Berlekamp. Program for double-dummy bridge problems: a new strategy for mechanical game playing. Journal of the ACM, 10(3):357–364, 1963.
[2] N. Metropolis and S. Ulam. The Monte Carlo method. Journal of the American Statistical Association, 44(247):335–341, 1949.
[3] M. Ginsberg. GIB: Imperfect information in a computationally challenging game. Journal of Artificial Intelligence Research, 14:303–358, 2001.
[4] V. Ventos, Y. Costel, O. Teytaud, and S. T. Ventos. Boosting a Bridge Artificial Intelligence. ICTAI, 2017.
[5] J. Mandziuk and K. Mossakowski. Neural networks compete with expert human players in solving the Double Dummy Bridge Problem. IEEE Symposium on Computational Intelligence and Games, 2009.