---
license: apache-2.0
datasets:
- Bingsu/Gameplay_Images
language:
- en
base_model:
- google/siglip2-so400m-patch14-384
pipeline_tag: image-classification
library_name: transformers
tags:
- Gameplay
- Classcode
- '10'
---

![zdzdf.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/KJnfq1zAn56dabaX4nuei.png)

# **Gameplay-Classcode-10**

> **Gameplay-Classcode-10** is a vision-language model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies gameplay screenshots or thumbnails into one of ten popular video game titles.

```py
Classification Report:
                precision    recall  f1-score   support

      Among Us     0.9990    0.9920    0.9955      1000
  Apex Legends     0.9737    0.9990    0.9862      1000
      Fortnite     0.9960    0.9910    0.9935      1000
 Forza Horizon     0.9990    0.9820    0.9904      1000
     Free Fire     0.9930    0.9860    0.9895      1000
Genshin Impact     0.9831    0.9890    0.9860      1000
    God of War     0.9930    0.9930    0.9930      1000
     Minecraft     0.9990    0.9990    0.9990      1000
        Roblox     0.9832    0.9960    0.9896      1000
      Terraria     1.0000    0.9910    0.9955      1000

      accuracy                         0.9918     10000
     macro avg     0.9919    0.9918    0.9918     10000
  weighted avg     0.9919    0.9918    0.9918     10000
```

![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/mI7DFpu3kJ6V3EOiJII39.png)

The model predicts one of the following **game categories**:

- **0:** Among Us
- **1:** Apex Legends
- **2:** Fortnite
- **3:** Forza Horizon
- **4:** Free Fire
- **5:** Genshin Impact
- **6:** God of War
- **7:** Minecraft
- **8:** Roblox
- **9:** Terraria

---

# **Run with Transformers 🤗**

```python
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/Gameplay-Classcode-10"  # Replace with your actual model path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping
id2label = {
    0: "Among Us",
    1: "Apex Legends",
    2: "Fortnite",
    3: "Forza Horizon",
    4: "Free Fire",
    5: "Genshin Impact",
    6: "God of War",
    7: "Minecraft",
    8: "Roblox",
    9: "Terraria"
}

def classify_game(image):
    """Predicts the game title based on the gameplay image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))}
    predictions = dict(sorted(predictions.items(), key=lambda item: item[1], reverse=True))

    return predictions

# Gradio interface
iface = gr.Interface(
    fn=classify_game,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Game Prediction Scores"),
    title="Gameplay-Classcode-10",
    description="Upload a gameplay screenshot or thumbnail to identify the game title (Among Us, Fortnite, Minecraft, etc.)."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```

---

# **Intended Use**

This model can be used for:

- **Automatic tagging of gameplay content for streamers and creators** (see the batch-tagging sketch below)
- **Organizing gaming datasets**
- **Enhancing searchability in gameplay video repositories**
- **Training AI systems for game-related content moderation or recommendations**
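
For batch workflows such as automatic tagging, the snippet below is a minimal sketch rather than part of the original release. It assumes a local directory of screenshots (the `screenshots` path is a placeholder) and that the fine-tuned checkpoint stores its label names in `model.config.id2label`; if it does not, reuse the explicit `id2label` mapping shown earlier in this card.

```python
import os

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Gameplay-Classcode-10"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
model.eval()

def tag_folder(folder):
    """Return {filename: predicted game title} for every image in `folder`."""
    results = {}
    for fname in sorted(os.listdir(folder)):
        if not fname.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        image = Image.open(os.path.join(folder, fname)).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        pred_id = int(logits.argmax(dim=-1).item())
        # Assumes label names are stored in the checkpoint config;
        # otherwise map pred_id through the id2label dict shown above.
        results[fname] = model.config.id2label[pred_id]
    return results

if __name__ == "__main__":
    # "screenshots" is a placeholder; point it at your own image folder.
    print(tag_folder("screenshots"))
```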