VanshK04 committed on
Commit a02d840 · verified · 1 Parent(s): 6a436d3

Update README.md

Files changed (1)
  1. README.md +39 -37
README.md CHANGED
@@ -1,24 +1,23 @@
  ---
- title: Submission Template
- emoji: 🔥
- colorFrom: yellow
- colorTo: green
  sdk: docker
- pinned: false
  ---

-
- # Random Baseline Model for Climate Disinformation Classification

  ## Model Description

- This is a random baseline model for the Frugal AI Challenge 2024, specifically for the text classification task of identifying climate disinformation. The model serves as a performance floor, randomly assigning labels to text inputs without any learning.

  ### Intended Use

- - **Primary intended uses**: Baseline comparison for climate disinformation classification models
- - **Primary intended users**: Researchers and developers participating in the Frugal AI Challenge
- - **Out-of-scope use cases**: Not intended for production use or real-world classification tasks

  ## Training Data

@@ -28,44 +27,47 @@ The model uses the QuotaClimat/frugalaichallenge-text-train dataset:
  - 8 categories of climate disinformation claims

  ### Labels
- 0. No relevant claim detected
- 1. Global warming is not happening
- 2. Not caused by humans
- 3. Not bad or beneficial
- 4. Solutions harmful/unnecessary
- 5. Science is unreliable
- 6. Proponents are biased
- 7. Fossil fuels are needed

  ## Performance

  ### Metrics
- - **Accuracy**: ~12.5% (random chance with 8 classes)
- - **Environmental Impact**:
-   - Emissions tracked in gCO2eq
-   - Energy consumption tracked in Wh

  ### Model Architecture
- The model implements a random choice between the 8 possible labels, serving as the simplest possible baseline.

  ## Environmental Impact

- Environmental impact is tracked using CodeCarbon, measuring:
- - Carbon emissions during inference
- - Energy consumption during inference

- This tracking helps establish a baseline for the environmental impact of model deployment and inference.

  ## Limitations
- - Makes completely random predictions
- - No learning or pattern recognition
- - No consideration of input text
- - Serves only as a baseline reference
- - Not suitable for any real-world applications

  ## Ethical Considerations

- - Dataset contains sensitive topics related to climate disinformation
- - Model makes random predictions and should not be used for actual classification
- - Environmental impact is tracked to promote awareness of AI's carbon footprint
- ```

  ---
+ title: Fine-Tuned BERT Model
+ emoji: 🌍
+ colorFrom: blue
+ colorTo: purple
  sdk: docker
+ pinned: true
  ---

+ # Fine-Tuned BERT Model for Climate Disinformation Classification

  ## Model Description

+ This is a BERT model fine-tuned for the Frugal AI Challenge 2024. It was trained on the climate disinformation dataset to classify text inputs into 8 distinct categories of climate disinformation claims, leveraging BERT's pretrained language understanding and optimized for accuracy in this domain.

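As an illustration of the intended use, here is a minimal inference sketch with the Hugging Face `transformers` API; the checkpoint path is a placeholder and the use of `AutoModelForSequenceClassification` is an assumption, since this card does not document the loading code:

```python
# Illustrative only: the checkpoint path is a placeholder, not this repository's id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "path/to/fine-tuned-bert-checkpoint"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

text = "Global temperatures have always fluctuated, so current warming is nothing unusual."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_label_id = int(logits.argmax(dim=-1))  # index into the 8 labels listed below
```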
  ### Intended Use

+ - **Primary intended uses**: Classifying text inputs to detect specific claims of climate disinformation
+ - **Primary intended users**: Researchers, developers, and participants in the Frugal AI Challenge
+ - **Out-of-scope use cases**: Not recommended for tasks outside climate disinformation classification or production-level applications without further evaluation

  ## Training Data

  - 8 categories of climate disinformation claims

  ### Labels
+ 0. No relevant claim detected
+ 1. Global warming is not happening
+ 2. Not caused by humans
+ 3. Not bad or beneficial
+ 4. Solutions harmful/unnecessary
+ 5. Science is unreliable
+ 6. Proponents are biased
+ 7. Fossil fuels are needed
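For convenience, the same mapping expressed as a Python dictionary (the variable name is illustrative; the ids and strings come from the list above):

```python
# Classifier output index -> claim category, as listed above.
ID2LABEL = {
    0: "No relevant claim detected",
    1: "Global warming is not happening",
    2: "Not caused by humans",
    3: "Not bad or beneficial",
    4: "Solutions harmful/unnecessary",
    5: "Science is unreliable",
    6: "Proponents are biased",
    7: "Fossil fuels are needed",
}
```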
 
  ## Performance

  ### Metrics
+ - **Accuracy**: Achieved XX.X% on the test set (replace `XX.X%` with the actual accuracy from your evaluation)
+ - **Environmental Impact**:
+   - Carbon emissions tracked in gCO2eq
+   - Energy consumption tracked in Wh
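A short sketch of how the accuracy figure could be computed once test-set predictions are collected (dummy values shown; using `scikit-learn` for the metric is an assumption):

```python
# Toy example: in practice y_true and y_pred come from the held-out test split.
from sklearn.metrics import accuracy_score

y_true = [0, 2, 5, 7, 1]  # gold label ids (0-7)
y_pred = [0, 2, 4, 7, 1]  # model predictions
print(f"Accuracy: {accuracy_score(y_true, y_pred):.1%}")  # -> 80.0%
```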
 
  ### Model Architecture
+ This model fine-tunes the BERT base architecture (`bert-base-uncased`) for the climate disinformation task. The classifier head includes:
+ - Dense layers
+ - Dropout for regularization
+ - Softmax activation for multi-class classification
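A minimal sketch of a head matching that description, built on `bert-base-uncased`; the dropout rate and the use of the pooled output are assumptions, not the actual training configuration:

```python
# Illustrative sketch, not the exact training code.
import torch.nn as nn
from transformers import BertModel

class ClimateClaimClassifier(nn.Module):
    def __init__(self, num_labels: int = 8, dropout: float = 0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.dropout = nn.Dropout(dropout)  # regularization on the pooled representation
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)  # dense head

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = self.dropout(outputs.pooler_output)
        logits = self.classifier(pooled)
        return logits.softmax(dim=-1)  # probabilities over the 8 claim categories
```

An equivalent dropout-plus-linear head is what `AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=8)` provides out of the box, returning logits rather than probabilities.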
 
  ## Environmental Impact

+ Environmental impact is tracked using CodeCarbon, measuring:
+ - Carbon emissions during inference and training
+ - Energy consumption during inference and training

+ This tracking aligns with the Frugal AI Challenge's commitment to promoting sustainable AI practices.
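A minimal sketch of how CodeCarbon is typically wrapped around the measured workload; the surrounding training/inference code is omitted and the gCO2eq conversion is shown only for illustration:

```python
# Start the tracker, run the workload, then stop it to get an emissions estimate.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()   # also writes an emissions.csv report by default
tracker.start()
# ... run training or batched inference here ...
emissions_kg = tracker.stop()  # estimated emissions in kg CO2eq
print(f"Estimated emissions: {emissions_kg * 1000:.2f} gCO2eq")
```

The energy consumed (reported by CodeCarbon in kWh) converts directly to the Wh figures quoted above.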
 
  ## Limitations
+ - Fine-tuned specifically for climate disinformation; performance on other text classification tasks may degrade
+ - Requires computational resources (e.g., a GPU) for efficient inference
+ - Predictions rely on the training dataset's representativeness; the model may struggle with unseen or out-of-distribution data

  ## Ethical Considerations

+ - Dataset contains sensitive topics related to climate disinformation
+ - Model performance depends on the quality of the dataset and on annotation biases
+ - Environmental impact during training and inference is disclosed to encourage awareness of AI's carbon footprint
+ - Users must validate outputs before using them in sensitive or high-stakes applications
+
+ ---