Controlling LED Lights with Hand Gestures Using TensorFlow, Mediapipe, and Arduino
In the realm of technology-driven projects, this paper focuses on the development of a hand gesture-based LED control system using a feed-forward neural network implemented with TensorFlow on Mediapipe hand landmarks. The project aims to create an intuitive interface for controlling LED lights through simple hand gestures, specifically recognizing two gestures: a fist (labelled 'no') and an open palm (labelled 'yes').
In conclusion, the hand gesture-based LED control system developed in this project offers a glimpse into the future of interactive and intelligent systems, paving the way for more advanced and user-centric applications in various fields.
Arduino board
LED lights
Python 3.x
TensorFlow
Mediapipe
OpenCV
PyFirmata
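Before wiring everything together, it helps to confirm that PyFirmata can talk to the Arduino and toggle the LED. Below is a quick blink-test sketch; the serial port 'COM3' and pin 13 are assumptions that depend on your board and wiring.

```python
import time
import pyfirmata

board = pyfirmata.Arduino('COM3')   # replace with your serial port, e.g. '/dev/ttyACM0'
led_pin = 13                        # digital pin the LED is connected to

for _ in range(5):
    board.digital[led_pin].write(1)  # LED on
    time.sleep(0.5)
    board.digital[led_pin].write(0)  # LED off
    time.sleep(0.5)

board.exit()
```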
Use OpenCV to collect images for training your model.
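A simple way to gather the training images is to grab frames from the webcam and save them into one folder per gesture. The sketch below is one possible approach; the 'HandGesture/yes' and 'HandGesture/no' folder names match the labels used later, and the key bindings are arbitrary choices.

```python
import os
import cv2

gesture = 'yes'                      # 'yes' = open palm, 'no' = fist
save_dir = os.path.join('HandGesture', gesture)
os.makedirs(save_dir, exist_ok=True)

cap = cv2.VideoCapture(0)
count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        continue
    cv2.imshow('Collecting images', frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord('s'):              # press 's' to save the current frame
        cv2.imwrite(os.path.join(save_dir, f'{gesture}_{count}.jpg'), frame)
        count += 1
    elif key == ord('q'):            # press 'q' to stop collecting
        break

cap.release()
cv2.destroyAllWindows()
```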
The collected images are then indexed into a dataframe of file paths and gesture labels:

```python
import os
import pandas as pd

dir = 'HandGesture'                 # one sub-folder per gesture ('yes', 'no')
filepaths = []
labels = []

folds = os.listdir(dir)
for fold in folds:
    foldpath = os.path.join(dir, fold)
    filelist = os.listdir(foldpath)
    for file in filelist:
        fpath = os.path.join(foldpath, file)
        filepaths.append(fpath)
        labels.append(fold)

Fseries = pd.Series(filepaths, name='Gesture')
Lseries = pd.Series(labels, name='Labels')
df = pd.concat([Fseries, Lseries], axis=1)
```
Prepare the dataset for training.
```python
import mediapipe as mp
import numpy as np

# Mediapipe hand detector used to extract 21 landmarks per image
mp_hands = mp.solutions.hands
hands = mp_hands.Hands(static_image_mode=True, max_num_hands=1, min_detection_confidence=0.5)
data = []

def preprocess_landmarks(landmarks):
    # Flatten the 21 (x, y, z) landmark coordinates into a 63-value feature vector
    landmarks = np.array([[lm.x, lm.y, lm.z] for lm in landmarks]).flatten()
    # Normalize to zero mean and unit variance
    landmarks = (landmarks - np.mean(landmarks)) / np.std(landmarks)
    return landmarks
```
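The function above only normalizes a single set of landmarks; the training arrays `X_train`, `y_train`, `X_val`, and `y_val` used later still have to be built from the indexed images. A minimal sketch of that step is shown below, assuming the dataframe `df` from the collection step, integer labels (0 for 'no'/fist, 1 for 'yes'/palm), and that scikit-learn is available for the train/validation split.

```python
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

X, y = [], []
label_map = {'no': 0, 'yes': 1}  # 0 = fist, 1 = palm

for fpath, label in zip(df['Gesture'], df['Labels']):
    image = cv2.imread(fpath)
    if image is None:
        continue
    # Mediapipe expects RGB images
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        X.append(preprocess_landmarks(landmarks))
        y.append(label_map[label])

X = np.array(X)
y = np.array(y)

# Hold out 20% of the samples for validation
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
```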
Create and train a neural network for gesture classification.
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(128, activation='relu', input_shape=(63,)),  # 21 landmarks x 3 coordinates
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(2, activation='softmax')                     # two classes: fist and palm
])
```
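Before training, the model must be compiled with an optimizer and a loss function. A minimal sketch, assuming integer class labels as built above (so sparse categorical cross-entropy is appropriate):

```python
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```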
Train the model using your preprocessed dataset.
```python
history = model.fit(X_train, y_train, epochs=30, batch_size=32, validation_data=(X_val, y_val))
```
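Once training finishes, it is worth checking validation accuracy and persisting the model so the real-time script can load it without retraining. A short sketch, where the file name 'gesture_model.h5' is chosen here only for illustration:

```python
val_loss, val_acc = model.evaluate(X_val, y_val, verbose=0)
print(f"Validation accuracy: {val_acc:.3f}")

# Save the trained classifier for reuse in the real-time recognition script
model.save('gesture_model.h5')
```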
Implement a real-time recognition system using Mediapipe.
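The recognition loop below assumes the webcam, the Mediapipe drawing utilities, the Arduino board, and the trained model have already been set up. A minimal setup sketch follows; the serial port 'COM3', LED pin 13, and the model file name are assumptions that depend on your own wiring and the earlier training step.

```python
import cv2
import numpy as np
import mediapipe as mp
import pyfirmata
from tensorflow.keras.models import load_model

# Webcam and Mediapipe hand tracking (video mode, one hand)
cap = cv2.VideoCapture(0)
mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils
hands = mp_hands.Hands(static_image_mode=False, max_num_hands=1, min_detection_confidence=0.5)

# Arduino connection via Firmata; adjust the port and pin to your setup
board = pyfirmata.Arduino('COM3')
led_pin = 13

# Trained gesture classifier from the previous step
model = load_model('gesture_model.h5')
```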
```python
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        continue

    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = hands.process(frame_rgb)

    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            # Draw landmarks on the frame
            mp_drawing.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)

            # Preprocess landmarks and predict gesture (the model expects a batch dimension)
            preprocessed_landmarks = preprocess_landmarks(hand_landmarks.landmark)
            prediction = model.predict(preprocessed_landmarks.reshape(1, -1))
            predicted_label = np.argmax(prediction)

            if predicted_label == 0:    # Fist gesture ('no')
                gesture = "fist"
                led_state = 1           # Turn on LED
            elif predicted_label == 1:  # Palm gesture ('yes')
                gesture = "palm"
                led_state = 0           # Turn off LED

            # Set LED state on the Arduino
            board.digital[led_pin].write(led_state)

            # Display the recognized gesture on the frame
            cv2.putText(frame, f"Gesture: {gesture}", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (23, 25, 0), 2)

    # Display the frame
    cv2.imshow('Hand Recognition', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # Quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
board.exit()
```
The LED Control Using Hand Gesture project demonstrates an innovative application of computer vision and machine learning techniques for gesture-based control. The system is a step towards more interactive and intelligent interfaces, paving the way for future advancements in human-computer interaction.