This project presents an approach to solving the popular web-based word game Wordle using a transformer neural network. The system aims to efficiently and accurately predict the optimal next guess by leveraging deep learning to model linguistic patterns and the evolving game state. The resulting solver demonstrates the applicability of modern neural architectures to combinatorial games with incomplete information, offering a robust and intelligent strategy for Wordle gameplay.
Wordle, a widely popular word-guessing game, challenges players to deduce a five-letter word within six attempts, providing feedback on letter correctness and position after each guess. While various heuristic and algorithmic solvers exist, this project explores the potential of a transformer-based deep learning model to improve solving performance. Transformers, known for their strength in sequence-to-sequence tasks and natural language processing, are well suited to modeling the dependencies within Wordle's letter and word patterns. This work aims to show that a deep learning approach can not only achieve strong performance in Wordle but also provide a flexible framework for similar word-based puzzles.
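The feedback rule described above can be made concrete with a short helper. The 'G'/'Y'/'-' symbols are an illustrative notation, not the game's own; the duplicate-letter handling mirrors Wordle's behavior of consuming each answer letter at most once.

```python
def feedback(guess: str, answer: str) -> str:
    """Return per-letter feedback for a guess:
    'G' = correct letter, correct position (green),
    'Y' = correct letter, wrong position (yellow),
    '-' = letter not in the remaining answer letters (gray)."""
    result = ["-"] * 5
    remaining = []
    # First pass: mark greens and collect unmatched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "G"
        else:
            remaining.append(a)
    # Second pass: mark yellows against the pool of unmatched letters,
    # consuming each answer letter at most once.
    for i, g in enumerate(guess):
        if result[i] == "-" and g in remaining:
            result[i] = "Y"
            remaining.remove(g)
    return "".join(result)
```

For example, `feedback("speed", "abide")` marks only one of the two e's, since the answer contains a single e.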
The core of this Wordle solver is a transformer deep learning neural network. The methodology involves:
Data Preparation: A comprehensive list of five-letter words (words.txt) serves as the vocabulary for the model. Game states, including past guesses and their corresponding feedback (correct letter and position, correct letter wrong position, incorrect letter), are encoded into a suitable format for the transformer network.
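One plausible encoding of the game state is sketched below. The token scheme here (letter indices 0-25 interleaved with feedback codes 26-28, where 2 = green, 1 = yellow, 0 = gray) is an assumption for illustration, not necessarily the project's actual format.

```python
import torch

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def encode_turn(guess: str, fb: list) -> list:
    """Encode one (guess, feedback) pair as interleaved
    (letter_token, feedback_token) pairs. fb uses 2/1/0 for
    green/yellow/gray -- an assumed convention."""
    tokens = []
    for ch, f in zip(guess, fb):
        tokens.append(LETTERS.index(ch))  # letter token: 0..25
        tokens.append(26 + f)             # feedback token: 26..28
    return tokens

def encode_state(history) -> torch.Tensor:
    """Flatten a list of (guess, feedback) turns into one token tensor."""
    flat = [t for g, f in history for t in encode_turn(g, f)]
    return torch.tensor(flat, dtype=torch.long)
```

Each turn contributes ten tokens, so a full six-guess game fits in a sequence of at most sixty tokens.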
Model Architecture: A transformer architecture, either encoder-decoder or decoder-only, processes sequences of encoded game states and outputs a probability distribution over candidate next words. The model learns to identify optimal guesses by recognizing patterns that lead to faster convergence on the solution.
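A minimal PyTorch sketch of such an architecture, assuming an encoder-only layout with a classification head over the word list. All hyperparameters (model width, layer count, vocabulary size) are illustrative, not the project's exact configuration.

```python
import torch
import torch.nn as nn

class WordleTransformer(nn.Module):
    """Illustrative sketch: a transformer encoder over game-state
    tokens with a linear head scoring every word in the vocabulary."""
    def __init__(self, num_tokens=29, vocab_size=2315, d_model=128,
                 nhead=4, num_layers=2, max_len=60):
        super().__init__()
        self.embed = nn.Embedding(num_tokens, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)  # one logit per word

    def forward(self, tokens):                      # tokens: (batch, seq)
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(pos)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))             # pool -> (batch, vocab)
```

Mean-pooling the encoder output is one simple way to reduce a variable-length history to a single guess distribution; attention pooling or a dedicated classification token would work as well.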
Training: The network is trained on a large corpus of simulated Wordle games or pre-existing Wordle solutions. The training objective is to minimize the number of guesses required to solve a puzzle, effectively teaching the model optimal strategic play. A pre-trained model is provided as best_model.pt.
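A single supervised training step under these assumptions might look like the following. The setup, in which the target is the index of an "expert" next guess taken from simulated games solved by a reference strategy, is hypothetical; the project's actual objective may differ (e.g. a reinforcement-learning reward on guess count).

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, states, targets):
    """One supervised step: `states` is a batch of encoded game-state
    token tensors, `targets` holds vocabulary indices of the expert
    next guess for each state (hypothetical labeling scheme)."""
    model.train()
    logits = model(states)
    loss = nn.functional.cross_entropy(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeated over many simulated games, this drives the model toward imitating whichever reference strategy generated the labels.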
Inference: During gameplay, the trained model receives the current game state as input and outputs its highest-probability guess. The main.py and play.py scripts handle the game logic, user interaction, and integration with the trained model.
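Inference then reduces to scoring the vocabulary and taking the argmax. A minimal sketch, assuming a model and encoding like those above; the checkpoint-loading line is how loading best_model.pt typically looks in PyTorch, not a line taken from the repo.

```python
import torch

def next_guess(model, state_tokens, vocab):
    """Pick the highest-probability word for the current game state.
    `vocab` is the word list loaded from words.txt; `state_tokens`
    is the encoded guess/feedback history (encoding is an assumption)."""
    model.eval()
    with torch.no_grad():
        logits = model(state_tokens.unsqueeze(0))  # add batch dimension
        idx = int(logits.argmax(dim=-1))
    return vocab[idx]

# Loading the provided checkpoint might look like:
# model.load_state_dict(torch.load("best_model.pt", map_location="cpu"))
```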
This project implemented a Wordle solver powered by a transformer neural network. The integration of deep learning techniques shows a promising avenue for developing intelligent agents in word-based combinatorial games, and the results suggest that transformer models can effectively learn strong guessing strategies. Future work could involve training on larger datasets, refining the model architecture for greater efficiency, and extending the approach to other word puzzles or text-based games.