Upload folder using huggingface_hub
- .github/workflows/update_space.yml +28 -0
- .gitignore +64 -0
- LICENSE +21 -0
- README.md +90 -7
- app.py +141 -0
- requirements.txt +3 -0
- run.sh +24 -0
.github/workflows/update_space.yml
ADDED
@@ -0,0 +1,28 @@
+name: Run Python script
+
+on:
+  push:
+    branches:
+      - main
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+
+      - name: Set up Python
+        uses: actions/setup-python@v2
+        with:
+          python-version: '3.9'
+
+      - name: Install Gradio
+        run: python -m pip install gradio
+
+      - name: Log in to Hugging Face
+        run: python -c 'import huggingface_hub; huggingface_hub.login(token="${{ secrets.hf_token }}")'
+
+      - name: Deploy to Spaces
+        run: gradio deploy
.gitignore
ADDED
@@ -0,0 +1,64 @@
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# Virtual Environment
+venv/
+ENV/
+env/
+.env
+
+# IDE
+.idea/
+.vscode/
+*.swp
+*.swo
+
+# Jupyter Notebook
+.ipynb_checkpoints
+*.ipynb
+
+# Model files and cache
+*.h5
+*.keras
+*.joblib
+*.pkl
+.cache/
+.pytest_cache/
+.coverage
+htmlcov/
+.transformers/
+cached_*
+
+# Gradio
+flagged/
+gradio_cached_examples/
+.gradio/
+*.pem
+
+# Logs
+*.log
+logs/
+lightning_logs/
+
+# OS
+.DS_Store
+Thumbs.db
LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 Luciano Ayres
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
README.md
CHANGED
@@ -1,12 +1,95 @@
 ---
+title: sentiment-analysis
+app_file: app.py
-title:
-
-colorFrom: pink
-colorTo: purple
 sdk: gradio
 sdk_version: 5.25.0
-app_file: app.py
-pinned: false
 ---
+# 🎯 Análise de Sentimento em Avaliações de Produtos
+
+Este sistema analisa o sentimento em avaliações de produtos em português usando o modelo BERT com fine-tuning em dados do e-commerce brasileiro.
+
+## 🤖 Modelo
+Utiliza o modelo [BERT fine-tuned para análise de sentimentos](https://huggingface.co/layers2024/bert-sentiment), treinado com o dataset [Olist Store](https://www.kaggle.com/datasets/olistbr/brazilian-ecommerce/data), um conjunto público de mais de 100 mil avaliações de e-commerce brasileiro feitas entre 2016 e 2018.
+
+## 🎯 Projeto
+Desenvolvido como parte do projeto NLP-Sentinel por [Luciano Ayres](https://linkedin.com/in/lucianoayres).
+
+## 💻 Instalação Local
+
+### Pré-requisitos
+- Python 3.10+
+- Git (opcional)
+
+### Instalação
+
+1. Clone o repositório:
+```bash
+git clone [email protected]:lucianoayres/sentiment-analysis-app.git
+cd sentiment-analysis-app
+```
+
+2. Execute o script de instalação e inicialização:
+```bash
+./run.sh
+```
+
+O script irá:
+- Criar um ambiente virtual Python
+- Instalar as dependências necessárias
+- Iniciar a aplicação
+
+## 🌐 Demo Online
+
+Você pode acessar uma versão online da aplicação em:
+[https://huggingface.co/spaces/layers2024/analise-de-sentimentos-avaliacao-de-produtos](https://huggingface.co/spaces/layers2024/analise-de-sentimentos-avaliacao-de-produtos)
+
+Gradio will:
+- Start a local server (usually accessible at [http://localhost:7860](http://localhost:7860))
+- Print a shareable public URL (if `share=True` is set) so that you can try your app in your browser.
+
+## Deploying Your Gradio App to Hugging Face Spaces
+
+Hugging Face Spaces provides a free and permanent hosting option for your Gradio demo. Follow the steps below to deploy your app using the terminal method:
+
+### 1. Ensure You Have a Hugging Face Account
+Make sure you have a free Hugging Face account. If not, [create one here](https://huggingface.co/join).
+
+### 2. Deploy via Terminal
+
+From your app's directory (where your `app.py` and `requirements.txt` reside), simply run:
+
+```bash
+gradio deploy
+```
+
+This command will gather basic metadata from your project, automatically create a new Space for you, and deploy your Gradio app.
+- **To Update Your Space:** Re-run the `gradio deploy` command, or you can enable GitHub Actions to automatically update your Space on git push.
+
+### 3. Access and Share Your App
+
+Once deployed, your app will be live at a URL in the following format:
+
+```
+https://<your-username>-<your-space-name>.hf.space
+```
+
+Share this URL with others to allow them to interact with your Gradio demo directly from their browsers.
+
+## Additional Information
+
+- **Model Updates:** If you update your model on Hugging Face, the next time your app runs (locally or on Spaces), it will load the latest version.
+- **Hot Reload (Local Development):** Instead of running `python app.py`, you can run:
+  ```bash
+  gradio app.py
+  ```
+  This enables hot reloading so your changes are automatically reflected in your demo.
+- **Troubleshooting:**
+  - Ensure your virtual environment is activated before installing dependencies and running your script.
+  - Verify that the package versions in your `requirements.txt` file are compatible with your code.
+  - The initial launch might take extra time as your model files download from Hugging Face.
 
-
+For further details, please refer to the [Gradio Documentation](https://gradio.app/docs/) and the [Hugging Face Transformers Documentation](https://huggingface.co/docs/transformers).
+=======
+# sentiment-analysis-app
+App de análise de sentimento em avaliações de produtos em português usando BERT com fine-tuning em dados do e-commerce brasileiro.
+>>>>>>> 42cb5fa7402ec14e53cdffc7568dcf02fc9750fe
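The `https://<your-username>-<your-space-name>.hf.space` URL format described in the README above can be sketched as a tiny helper (illustrative only; `space_url` is not part of this commit):

```python
def space_url(space_id: str) -> str:
    """Map a Space id like "user/space-name" to its public URL."""
    user, name = space_id.split("/", 1)
    return f"https://{user}-{name}.hf.space"

print(space_url("layers2024/analise-de-sentimentos-avaliacao-de-produtos"))
# → https://layers2024-analise-de-sentimentos-avaliacao-de-produtos.hf.space
```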
app.py
ADDED
@@ -0,0 +1,141 @@
+import gradio as gr
+from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
+import tensorflow as tf
+
+# Load BERT model
+def load_models():
+    models = {}
+    try:
+        models['BERT'] = {
+            'model': TFAutoModelForSequenceClassification.from_pretrained("layers2024/bert-sentiment"),
+            'tokenizer': AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased")
+        }
+        print("✓ BERT loaded")
+    except Exception as e:
+        print(f"⚠️ Error loading BERT model: {str(e)}")
+
+    return models
+
+# Exemplos de avaliações para teste
+EXAMPLES = [
+    # Avaliações Positivas (nota 5)
+    "Produto excelente! Entrega rápida e qualidade ótima. Recomendo!",
+    "Ótimo atendimento, produto bem embalado e conforme descrito.",
+    "Entrega antes do prazo. Produto de qualidade e preço justo.",
+    "Muito satisfeito! Produto original e entrega rápida.",
+    "Produto perfeito, bem embalado. Vendedor confiável.",
+
+    # Avaliações Negativas (nota 1)
+    "Produto de baixa qualidade. Não recomendo.",
+    "Veio com defeito e sem suporte do vendedor.",
+    "Atrasou muito e veio diferente do anunciado.",
+    "Faltaram itens na entrega. Sem retorno do vendedor.",
+    "Produto muito ruim. Não funciona!"
+]
+
+def get_prediction(text, model_dict):
+    inputs = model_dict['tokenizer'](text, return_tensors="tf", truncation=True, padding=True)
+    outputs = model_dict['model'](**inputs)
+    probabilities = tf.nn.softmax(outputs.logits, axis=-1)
+    probs = probabilities.numpy()[0]
+    predicted_class = tf.math.argmax(probabilities, axis=-1).numpy()[0]
+    confidence = probs[predicted_class]
+    return predicted_class, confidence
+
+def predict_sentiment(review_text):
+    # Definir threshold de confiança
+    HIGH_CONFIDENCE = 0.90  # 90% de confiança mínima para classificação definitiva
+
+    try:
+        predicted_class, confidence = get_prediction(review_text, MODELS['BERT'])
+
+        # Determinar o sentimento baseado no threshold
+        if confidence >= HIGH_CONFIDENCE:
+            sentiment = "POSITIVO" if predicted_class == 1 else "NEGATIVO"
+            status = "✅ Alta confiança"
+        else:
+            if predicted_class == 1:
+                sentiment = "POSSIVELMENTE POSITIVO"
+            else:
+                sentiment = "POSSIVELMENTE NEGATIVO"
+            status = "⚠️ Baixa confiança"
+
+        return f"BERT:\n{sentiment}\n{status}\nConfiança: {confidence:.1%}"
+
+    except Exception as e:
+        return f"⚠️ Erro: {str(e)}"
+
+# Load all models
+print("Carregando modelos...")
+MODELS = load_models()
+print("Modelos carregados!")
+
+# Create the Gradio interface
+with gr.Blocks() as interface:
+    gr.Markdown("# 🎯 Análise de Sentimento em Avaliações de Produtos")
+
+    with gr.Row():
+        with gr.Column(scale=1):
+            input_text = gr.Textbox(
+                label="Texto da Avaliação",
+                placeholder="Digite aqui a avaliação do produto...",
+                lines=4
+            )
+            with gr.Row():
+                clear_btn = gr.Button("Limpar", variant="secondary")
+                analyze_btn = gr.Button("Analisar Sentimento", variant="primary")
+
+        with gr.Column(scale=1):
+            output_text = gr.Textbox(
+                label="Resultados da Análise",
+                lines=4
+            )
+
+    gr.Examples(
+        examples=[[ex] for ex in EXAMPLES],
+        inputs=[input_text],
+        label="Exemplos de Avaliações"
+    )
+
+    with gr.Accordion("ℹ️ Sobre o Projeto", open=False):
+        gr.Markdown('''
+        ### Análise de Sentimento com BERT
+        Este sistema analisa o sentimento em avaliações de produtos em português usando BERT com fine-tuning em dados do e-commerce brasileiro.
+
+        #### 🤖 Detalhes Técnicos
+        - **Modelo**: [BERT fine-tuned para análise de sentimentos](https://huggingface.co/layers2024/bert-sentiment)
+        - **Dataset**: [Olist Store](https://www.kaggle.com/datasets/olistbr/brazilian-ecommerce/data) (100k+ avaliações)
+        - **Projeto**: NLP-Sentinel por [Luciano Ayres](https://linkedin.com/in/lucianoayres)
+        ''')
+
+    def clear_inputs():
+        return "", ""
+
+    analyze_btn.click(
+        fn=predict_sentiment,
+        inputs=input_text,
+        outputs=output_text
+    )
+
+    clear_btn.click(
+        fn=clear_inputs,
+        outputs=[input_text, output_text]
+    )
+
+interface.theme = gr.themes.Default()
+css = """
+.gradio-container {max-width: 800px; margin: auto;}
+.footer {display: none;}
+a {color: #2196F3; text-decoration: none;}
+a:hover {text-decoration: underline;}
+.examples {margin-top: 20px; text-align: left;}
+.examples-table {width: 100%; border-collapse: collapse;}
+.example-row {cursor: pointer; transition: background 0.2s;}
+.example-row:hover {background: #f0f0f0;}
+.textbox {text-align: left;}
+"""
+interface.css = css
+
+# Launch the Gradio app
+if __name__ == "__main__":
+    interface.launch(share=True)
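The confidence cutoff in `predict_sentiment` can be exercised in isolation. The sketch below reimplements the softmax-plus-threshold logic with the standard library only; the mock logits and the `label_from_logits` helper are illustrative stand-ins, not part of `app.py`:

```python
import math

HIGH_CONFIDENCE = 0.90  # same cutoff as in app.py

def label_from_logits(logits):
    # Two-class softmax, mirroring tf.nn.softmax in get_prediction
    m = max(logits)
    exp = [math.exp(x - m) for x in logits]
    total = sum(exp)
    probs = [e / total for e in exp]
    predicted_class = probs.index(max(probs))
    confidence = probs[predicted_class]
    # Same branching as predict_sentiment: definitive vs hedged label
    if confidence >= HIGH_CONFIDENCE:
        sentiment = "POSITIVO" if predicted_class == 1 else "NEGATIVO"
    else:
        sentiment = ("POSSIVELMENTE POSITIVO" if predicted_class == 1
                     else "POSSIVELMENTE NEGATIVO")
    return sentiment, confidence

print(label_from_logits([0.1, 4.0]))  # wide margin -> definitive label
print(label_from_logits([0.2, 0.4]))  # narrow margin -> hedged label
```

A wider logit margin pushes the softmax probability toward 1, so only clear-cut reviews receive a definitive POSITIVO/NEGATIVO label.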
requirements.txt
ADDED
@@ -0,0 +1,3 @@
+gradio==5.25.0
+tensorflow
+transformers
run.sh
ADDED
@@ -0,0 +1,24 @@
+#!/bin/bash
+
+# Kill any running Python processes
+pkill -9 python3
+
+# Wait a moment to ensure ports are freed
+sleep 1
+
+# Create virtual environment if it doesn't exist
+if [ ! -d "venv" ]; then
+    echo "Creating virtual environment..."
+    python3 -m venv venv
+fi
+
+# Activate virtual environment
+source venv/bin/activate
+
+# Install/update dependencies
+echo "Installing/updating dependencies..."
+pip install -r requirements.txt
+
+# Run the app
+echo "Starting the application..."
+python3 app.py
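The `if [ ! -d "venv" ]` guard in run.sh makes the script idempotent: the virtual environment is created on the first run and reused afterwards. A Python analogue of the same check-then-create pattern (the directory name here is illustrative):

```python
import os

path = "demo_env"  # stands in for run.sh's "venv" directory

for attempt in (1, 2):
    if not os.path.isdir(path):
        # First run: the directory is missing, so it gets created
        print(f"run {attempt}: creating {path}")
        os.mkdir(path)
    else:
        # Later runs: the directory exists, so creation is skipped
        print(f"run {attempt}: {path} already exists, reusing it")

os.rmdir(path)  # clean up the demo directory
```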