This project implements an innovative text autocompletion system that generates Shakespearean-style text using Long Short-Term Memory (LSTM) neural networks. The system is deployed as a web application using Streamlit, allowing users to input seed phrases and receive text completions in Shakespeare's distinctive style.
Natural language generation has become increasingly sophisticated with the advent of deep learning. This project demonstrates the application of LSTM networks to generate context-aware text completions in the specific literary style of William Shakespeare, offering both educational and creative value.
The system relies on two pre-trained components, the LSTM model and the tokenizer, which are loaded once at startup:
```python
import pickle

import tensorflow as tf
from tensorflow.keras.models import load_model

# Load the pre-trained components
tokenizer = pickle.load(open('tokenizer.pkl', 'rb'))
model = load_model('sentence_completion.h5')
```
The core completion helper converts the current text into a padded token sequence, predicts the most likely next word, and appends it:

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def autoCompletions(text, model, tokenizer, max_sequence_len):
    # Convert input text to a sequence of token indices
    sequence = tokenizer.texts_to_sequences([text])
    # Pad the sequence to a uniform length
    sequence = pad_sequences(sequence, maxlen=max_sequence_len - 1, padding='pre')
    # Predict the index of the most likely next word
    predicted_word_index = np.argmax(model.predict(sequence, verbose=0))
    # Map the index back to its word in the tokenizer vocabulary
    predicted_word = ''
    for word, index in tokenizer.word_index.items():
        if index == predicted_word_index:
            predicted_word = word
            break
    return text + ' ' + predicted_word
```
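For example, the helper can be called in a loop to extend a seed phrase one word at a time. The snippet below is a standalone usage sketch: the seed phrase and word count are arbitrary, and the sequence length is computed the same way the app computes it.

```python
# Illustrative standalone usage of autoCompletions (seed phrase is arbitrary)
text = 'shall i compare thee'
for _ in range(3):  # generate three words
    text = autoCompletions(text, model, tokenizer,
                           max_sequence_len=len(text.split()) + 1)
print(text)
```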
The Streamlit interface wires these pieces together:

```python
import streamlit as st

st.title('Shakespeare Text Autocompletion')

# User input controls
input_text = st.text_input('Enter a starting phrase:')
num_words = st.slider('Number of words to generate:', 1, 10, 3)  # slider range is illustrative

if st.button('Complete Text'):
    generated_text = input_text
    for _ in range(num_words):
        generated_words = generated_text.split()
        # Avoid word repetition: if the last prediction duplicated the previous
        # word, drop the duplicate before predicting again
        if len(generated_words) > 1 and generated_words[-1] == generated_words[-2]:
            generated_text = ' '.join(generated_words[:-1])
        generated_text = autoCompletions(
            generated_text, model, tokenizer,
            max_sequence_len=len(generated_text.split()) + 1
        )
    st.success(generated_text)
```
Together, these pieces provide real-time text generation, a simple user interface, and the text processing needed to move between raw text and token sequences.
The LSTM model was trained on a corpus of Shakespeare's text.
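For context, a tokenizer and model like the ones loaded above can be produced with a standard next-word-prediction training pipeline. The sketch below is illustrative rather than the project's exact training script: the corpus file name (`shakespeare.txt`), embedding size, LSTM width, and epoch count are all assumptions.

```python
import pickle
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical corpus file of Shakespeare text, one passage per line
with open('shakespeare.txt', 'r', encoding='utf-8') as f:
    lines = [line.strip() for line in f if line.strip()]

# Fit the tokenizer on the corpus
tokenizer = Tokenizer()
tokenizer.fit_on_texts(lines)
vocab_size = len(tokenizer.word_index) + 1

# Build n-gram sequences: each prefix predicts its next word
sequences = []
for line in lines:
    tokens = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(tokens)):
        sequences.append(tokens[:i + 1])

max_sequence_len = max(len(s) for s in sequences)
sequences = pad_sequences(sequences, maxlen=max_sequence_len, padding='pre')
X, y = sequences[:, :-1], sequences[:, -1]

# A compact Embedding + LSTM + softmax classifier over the vocabulary
# (hyperparameters below are assumed, not taken from the project)
model = Sequential([
    Embedding(vocab_size, 100),
    LSTM(150),
    Dense(vocab_size, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=50, verbose=1)

# Persist the artifacts that the app loads
model.save('sentence_completion.h5')
with open('tokenizer.pkl', 'wb') as f:
    pickle.dump(tokenizer, f)
```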
Performance was assessed in terms of prediction speed and memory usage.
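As a rough way to gauge prediction speed, single-word inference latency can be timed directly. The snippet below is a measurement sketch, not a reported benchmark; the seed phrase and number of timed calls are arbitrary.

```python
import time

seed = 'shall i compare thee'  # arbitrary seed phrase
runs = 20                      # arbitrary number of timed calls

start = time.perf_counter()
for _ in range(runs):
    autoCompletions(seed, model, tokenizer,
                    max_sequence_len=len(seed.split()) + 1)
elapsed = time.perf_counter() - start
print(f'Average latency per predicted word: {elapsed / runs * 1000:.1f} ms')
```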
This project demonstrates the successful application of LSTM networks for style-specific text generation, providing an interactive way to explore and generate Shakespearean-style text. The implementation balances accuracy, performance, and user experience.
App Link: https://huggingface.co/spaces/AbhayBhaskar/Shakespeare-Text-Autocompletion