Commit f82bd5f · Parent(s): e3b7c1b

Update README.md
README.md CHANGED

```diff
@@ -1,4 +1,6 @@
-CoreML versions of [laion/CLIP-ViT-H-14-laion2B-s32B-b79K](/laion/CLIP-ViT-H-14-laion2B-s32B-b79K).
+CoreML versions of [laion/CLIP-ViT-H-14-laion2B-s32B-b79K](/laion/CLIP-ViT-H-14-laion2B-s32B-b79K).
+
+On my baseline M1 they run about 4x faster than the equivalent pytorch models run on the `mps` device (~6 image embeddings per second vs 1.5 images/sec for torch+mps), and according to `asitop` profiling, using about 3/4 of the energy to do so (6W average vs 8W for torch+mps).
 
 There are separate models for the image and text encoders. Sorry, I don't know how to put them both into one file.
 
```
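As a quick sanity check, the figures quoted in the added paragraph are mutually consistent; this minimal sketch only re-derives the ratios from the author's stated measurements (no new data):

```python
# Figures quoted in the README diff above (author's baseline-M1 measurements).
coreml_ips = 6.0   # image embeddings per second, CoreML
torch_ips = 1.5    # image embeddings per second, torch + mps
coreml_w = 6.0     # average power draw in watts (per asitop), CoreML
torch_w = 8.0      # average power draw in watts (per asitop), torch + mps

print(coreml_ips / torch_ips)  # throughput speedup: 4.0 ("about 4x faster")
print(coreml_w / torch_w)      # average power ratio: 0.75 ("about 3/4")

# Energy per embedding (joules) = watts / (embeddings per second)
print(coreml_w / coreml_ips)   # CoreML: 1.0 J per image
print(torch_w / torch_ips)     # torch + mps: ~5.33 J per image
```

Note the 3/4 figure is a ratio of average power draw; combined with the 4x throughput, the energy spent per embedding is lower still.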