fix with actual model name (#4)
Commit cb57fa9d84c4c389f7e121cece7c52af992e1787
Co-authored-by: Venkatachalam Natchiappan <[email protected]>
README.md CHANGED:

@@ -14,8 +14,8 @@ The model can be used for Information Retrieval: Given a query, encode the query
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch
 
-model = AutoModelForSequenceClassification.from_pretrained('
-tokenizer = AutoTokenizer.from_pretrained('
+model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L-6-v2')
+tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L-6-v2')
 
 features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'],  padding=True, truncation=True, return_tensors="pt")
 
@@ -31,7 +31,7 @@ with torch.no_grad():
 The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
 ```python
 from sentence_transformers import CrossEncoder
-model = CrossEncoder('
+model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2', max_length=512)
 scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
 ```
 
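For context, the first hunk's updated snippet can be run end to end roughly as follows. This is a minimal sketch: the scoring block is an assumption inferred from the `with torch.no_grad():` context in the second hunk header and follows the usual cross-encoder pattern, not the README's exact wording.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Model name introduced by this commit
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L-6-v2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L-6-v2')

# Two (query, passage) pairs from the README's example
features = tokenizer(
    ['How many people live in Berlin?', 'How many people live in Berlin?'],
    ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
     'New York City is famous for the Metropolitan Museum of Art.'],
    padding=True, truncation=True, return_tensors="pt")

# Assumed scoring step (the README's own code around "with torch.no_grad():" is not part of this diff)
model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)  # higher logit = passage judged more relevant to the query
```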


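Likewise, the SentenceTransformers line from the second hunk can be exercised with a concrete query and passages. Only the model name and `max_length=512` come from the `+` line; the query, passages, and re-ranking loop below are illustrative and not part of the diff.

```python
from sentence_transformers import CrossEncoder

# Model name and max_length taken from the '+' line in the second hunk
model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2', max_length=512)

query = 'How many people live in Berlin?'
passages = [
    'Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
    'New York City is famous for the Metropolitan Museum of Art.',
]

# Score each (query, passage) pair, then list passages from most to least relevant
scores = model.predict([(query, passage) for passage in passages])
for score, passage in sorted(zip(scores, passages), key=lambda x: x[0], reverse=True):
    print(f"{score:.4f}\t{passage}")
```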