boltuix committed
Commit e2a90ea · verified · 1 Parent(s): 5719c17

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -46,7 +46,7 @@ library_name: transformers
  # 🧠 NeuroBERT — The Brain of Lightweight NLP for Real-World Intelligence 🌍
 
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
- [![Model Size](https://img.shields.io/badge/Size-~55MB-blue)](#)
+ [![Model Size](https://img.shields.io/badge/Size-~57MB-blue)](#)
  [![Tasks](https://img.shields.io/badge/Tasks-MLM%20%7C%20Intent%20Detection%20%7C%20Text%20Classification%20%7C%20NER-orange)](#)
  [![Inference Speed](https://img.shields.io/badge/Blazing%20Fast-Edge%20Devices-green)](#)
 
@@ -72,10 +72,10 @@ library_name: transformers
 
  ## Overview
 
- `NeuroBERT` is an **advanced lightweight** NLP model derived from **google/bert-base-uncased**, optimized for **real-time inference** on **resource-constrained devices**. With a quantized size of **~55MB** and **~30M parameters**, it delivers powerful contextual language understanding for real-world applications in environments like mobile apps, wearables, microcontrollers, and smart home devices. Designed for **low-latency**, **offline operation**, and **real-world intelligence**, it’s ideal for privacy-first applications requiring robust intent detection, classification, and semantic understanding with limited connectivity.
+ `NeuroBERT` is an **advanced lightweight** NLP model derived from **google/bert-base-uncased**, optimized for **real-time inference** on **resource-constrained devices**. With a quantized size of **~57MB** and **~30M parameters**, it delivers powerful contextual language understanding for real-world applications in environments like mobile apps, wearables, microcontrollers, and smart home devices. Designed for **low-latency**, **offline operation**, and **real-world intelligence**, it’s ideal for privacy-first applications requiring robust intent detection, classification, and semantic understanding with limited connectivity.
 
  - **Model Name**: NeuroBERT
- - **Size**: ~55MB (quantized)
+ - **Size**: ~57MB (quantized)
  - **Parameters**: ~30M
  - **Architecture**: Advanced BERT (8 layers, hidden size 256, 4 attention heads)
  - **Description**: Advanced 8-layer, 256-hidden
@@ -83,7 +83,7 @@ library_name: transformers
 
  ## Key Features
 
- - ⚡ **Lightweight Powerhouse**: ~55MB footprint fits devices with constrained storage while offering advanced NLP capabilities.
+ - ⚡ **Lightweight Powerhouse**: ~57MB footprint fits devices with constrained storage while offering advanced NLP capabilities.
  - 🧠 **Deep Contextual Understanding**: Captures complex semantic relationships with an 8-layer architecture.
  - 📶 **Offline Capability**: Fully functional without internet access.
  - ⚙️ **Real-Time Inference**: Optimized for CPUs, mobile NPUs, and microcontrollers.
@@ -97,13 +97,13 @@ Install the required dependencies:
  pip install transformers torch
  ```
 
- Ensure your environment supports Python 3.6+ and has ~55MB of storage for model weights.
+ Ensure your environment supports Python 3.6+ and has ~57MB of storage for model weights.
 
  ## Download Instructions
 
  1. **Via Hugging Face**:
  - Access the model at [boltuix/NeuroBERT](https://huggingface.co/boltuix/NeuroBERT).
- - Download the model files (~55MB) or clone the repository:
+ - Download the model files (~57MB) or clone the repository:
  ```bash
  git clone https://huggingface.co/boltuix/NeuroBERT
  ```
@@ -293,7 +293,7 @@ NeuroBERT is designed for **real-world intelligence** in **edge and IoT scenario
  ## Hardware Requirements
 
  - **Processors**: CPUs, mobile NPUs, or microcontrollers (e.g., Raspberry Pi, ESP32-S3)
- - **Storage**: ~55MB for model weights (quantized for reduced footprint)
+ - **Storage**: ~57MB for model weights (quantized for reduced footprint)
  - **Memory**: ~120MB RAM for inference
  - **Environment**: Offline or low-connectivity settings
 
@@ -391,7 +391,7 @@ To adapt NeuroBERT for custom IoT tasks (e.g., specific smart home commands):
 
  | Model | Parameters | Size | Edge/IoT Focus | Tasks Supported |
  |-----------------|------------|--------|----------------|-------------------------|
- | NeuroBERT | ~30M | ~55MB | High | MLM, NER, Classification |
+ | NeuroBERT | ~30M | ~57MB | High | MLM, NER, Classification |
  | NeuroBERT-Small | ~20M | ~50MB | High | MLM, NER, Classification |
  | NeuroBERT-Mini | ~7M | ~35MB | High | MLM, NER, Classification |
  | NeuroBERT-Tiny | ~4M | ~15MB | High | MLM, NER, Classification |
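
For context on the masked-language-modelling usage the README advertises, here is a minimal sketch of loading the published checkpoint with the Hugging Face `transformers` fill-mask pipeline. It assumes the checkpoint exposes a standard BERT-style masked-LM head (as implied by the MLM badge); the example sentence is illustrative, not taken from the model card.

```python
# Minimal sketch: masked-token prediction with boltuix/NeuroBERT.
# Assumption: the repo ships a standard BERT-style masked-LM head that the
# fill-mask pipeline can load; the input sentence below is illustrative only.
from transformers import pipeline

mlm = pipeline("fill-mask", model="boltuix/NeuroBERT")

# [MASK] is the usual mask token for uncased BERT derivatives.
predictions = mlm("Please [MASK] the living room lights.")

# Print the top few candidate tokens with their scores.
for p in predictions[:3]:
    print(f"{p['token_str']:>12}  score={p['score']:.3f}")
```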