---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---

# saxa3-capstone

This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.

This model was trained on the biennial reports of the Department of Veterans Affairs Advisory Committee on Women Veterans, covering 1996 to 2020. It was built specifically from the recommendations issued in each report.

## Usage

To use this model, please install BERTopic:

```
pip install -U bertopic
```

You can use the model as follows:

```python
from bertopic import BERTopic

# Load the pretrained topic model from the Hugging Face Hub
topic_model = BERTopic.load("magica1/saxa3-capstone")

# Overview of all discovered topics and their sizes
topic_model.get_topic_info()
```
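
Beyond the overview table, the loaded model can assign topics to new documents and expose each topic's top terms. The snippet below is a minimal sketch; the sentences in `docs` are invented placeholders, not data from the reports.

```python
# Hypothetical example documents; replace with your own text.
docs = [
    "Expand access to gender-specific health care for women veterans.",
    "Improve outreach so women veterans know which benefits they are eligible for.",
]

# Map each document to its closest topic; `probs` may be None when
# probabilities were not calculated during training.
topics, probs = topic_model.transform(docs)

# Top words and their weights for a given topic id, e.g. topic 0
topic_model.get_topic(0)
```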

## Topic overview

* Number of topics: 24
* Number of training documents: 1602
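
To see how the 1,602 training documents are distributed across the 24 topics, the loaded model can report per-topic document counts; note that BERTopic reserves topic -1 for outlier documents that were not assigned to any topic.

```python
# Document count per topic; topic -1 (if present) is the outlier bucket
topic_model.get_topic_freq()
```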

## Training hyperparameters

* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
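
These settings correspond to arguments of the BERTopic constructor. The sketch below shows how a model with the same configuration could be instantiated for retraining on comparable data; `recommendation_texts` is a hypothetical variable, and since `language` is None the original run most likely supplied its own embedding model, which you would pass via `embedding_model`.

```python
from bertopic import BERTopic

# Re-create a model with the hyperparameters listed above.
retrained_model = BERTopic(
    calculate_probabilities=False,
    language=None,          # None usually means a custom embedding_model was supplied
    low_memory=False,
    min_topic_size=10,      # minimum number of documents required to form a topic
    n_gram_range=(1, 1),    # unigrams only in topic representations
    nr_topics=None,         # no post-hoc topic reduction
    seed_topic_list=None,
    top_n_words=10,         # keep the 10 most representative words per topic
    verbose=False,
)

# Fit on your own corpus (hypothetical variable name):
# topics, probs = retrained_model.fit_transform(recommendation_texts)
```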

## Framework versions

* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.4
* Pandas: 2.1.2
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.35.0
* Numba: 0.56.4
* Plotly: 5.15.0
* Python: 3.10.12
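
To approximately reproduce this environment, the listed versions can be pinned at install time. This is only a sketch: UMAP is published on PyPI as `umap-learn`, and bertopic itself is left unpinned since no version is recorded above.

```
pip install bertopic \
    numpy==1.23.5 hdbscan==0.8.33 umap-learn==0.5.4 pandas==2.1.2 \
    scikit-learn==1.2.2 sentence-transformers==2.2.2 transformers==4.35.0 \
    numba==0.56.4 plotly==5.15.0
```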