Abstract: Hearing-impaired people communicate through sign language, but most people with normal hearing cannot read it. To address this problem, this project created a sign language dataset and proposed an improved sign language recognition model based on YOLOv5s. The model replaces the backbone network of the YOLOv5s object detection algorithm with the lightweight MobileNetV3 and achieves good results. Tests show that the improved model reaches 98.5% mean Average Precision (mAP), 0.92 recall, and a 0.929 F1 score on the sign language recognition dataset. The proposed model not only speeds up training and reduces the number of parameters, but also improves the accuracy of sign language recognition, meeting practical detection requirements.
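
The backbone-replacement idea described above can be sketched conceptually: a YOLO-style detector is a backbone (feature extractor) followed by a neck and detection head, and the paper's modification swaps the default YOLOv5s backbone for MobileNetV3. The classes and names below are illustrative placeholders, not the authors' actual implementation:

```python
# Conceptual sketch (framework-free): only the backbone component changes
# between the baseline YOLOv5s and the improved model.

class Backbone:
    """A swappable feature extractor identified by name."""
    def __init__(self, name):
        self.name = name

    def extract_features(self, image):
        # Placeholder: a real backbone returns multi-scale feature maps
        # (e.g., stride-8/16/32 outputs consumed by the neck).
        return {"P3": "stride-8", "P4": "stride-16", "P5": "stride-32"}

class Detector:
    """YOLO-style detector = backbone + (neck + head, elided here)."""
    def __init__(self, backbone):
        self.backbone = backbone

    def detect(self, image):
        feats = self.backbone.extract_features(image)
        # A real neck/head would fuse feats and predict boxes and classes.
        return f"detections from {self.backbone.name} features"

# Baseline vs. improved model: same detector skeleton, different backbone.
yolov5s_baseline = Detector(Backbone("YOLOv5s default backbone"))
improved_model = Detector(Backbone("MobileNetV3"))
```

Because MobileNetV3 is designed for efficiency (depthwise-separable convolutions and squeeze-and-excitation blocks), this swap is what yields the reported reduction in parameters and training time.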