metadata
base_model: Snowflake/snowflake-arctic-embed-m
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:502
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: >-
How can the manipulation of prompts, known as "jailbreaking," lead to
harmful recommendations from GAI systems?
sentences:
- >-
but this approach may still produce harmful recommendations in response
to other less-explicit, novel
prompts (also relevant to CBRN Information or Capabilities, Data
Privacy, Information Security, and
Obscene, Degrading and/or Abusive Content). Crafting such prompts
deliberately is known as
“jailbreaking,” or, manipulating prompts to circumvent output controls.
Limitations of GAI systems can be
harmful or dangerous in certain contexts. Studies have observed that
users may disclose mental health
issues in conversations with chatbots – and that users exhibit negative
reactions to unhelpful responses
from these chatbots during situations of distress.
This risk encompasses difficulty controlling creation of and public
exposure to offensive or hateful
language, and denigrating or stereotypical content generated by AI. This
kind of speech may contribute
to downstream harm such as fueling dangerous or violent behaviors. The
spread of denigrating or
stereotypical content can also further exacerbate representational harms
(see Harmful Bias and
Homogenization below).
Trustworthy AI Characteristics: Safe, Secure and Resilient
2.4. Data Privacy
GAI systems raise several risks to privacy. GAI system training requires
large volumes of data, which in
some cases may include personal data. The use of personal data for GAI
training raises risks to widely
- >-
communities and using it to reinforce inequality. Various panelists
suggested that these harms could be
mitigated by ensuring community input at the beginning of the design
process, providing ways to opt out of
these systems and use associated human-driven mechanisms instead,
ensuring timeliness of benefit payments,
and providing clear notice about the use of these systems and clear
explanations of how and what the
technologies are doing. Some panelists suggested that technology should
be used to help people receive
benefits, e.g., by pushing benefits to those in need and ensuring
automated decision-making systems are only
used to provide a positive outcome; technology shouldn't be used to take
supports away from people who need
them.
Panel 6: The Healthcare System. This event explored current and emerging
uses of technology in the
healthcare system and consumer products related to health.
Welcome:
•
Alondra Nelson, Deputy Director for Science and Society, White House
Office of Science and Technology
Policy
•
Patrick Gaspard, President and CEO, Center for American Progress
Moderator: Micky Tripathi, National Coordinator for Health Information
Technology, U.S Department of
Health and Human Services.
Panelists:
•
Mark Schneider, Health Innovation Advisor, ChristianaCare
•
Ziad Obermeyer, Blue Cross of California Distinguished Associate
Professor of Policy and Management,
- |-
have access to a person who can quickly consider and
remedy problems you encounter. You should be able to opt
out from automated systems in favor of a human alternative, where
appropriate. Appropriateness should be determined based on rea
sonable expectations in a given context and with a focus on ensuring
broad accessibility and protecting the public from especially harm
ful impacts. In some cases, a human or other alternative may be re
quired by law. You should have access to timely human consider
ation and remedy by a fallback and escalation process if an automat
ed system fails, it produces an error, or you would like to appeal or
contest its impacts on you. Human consideration and fallback
should be accessible, equitable, effective, maintained, accompanied
by appropriate operator training, and should not impose an unrea
sonable burden on the public. Automated systems with an intended
use within sensitive domains, including, but not limited to, criminal
justice, employment, education, and health, should additionally be
tailored to the purpose, provide meaningful access for oversight,
include training for any people interacting with the system, and in
corporate human consideration for adverse or high-risk decisions.
Reporting that includes a description of these human governance
processes and assessment of their timeliness, accessibility, out
- source_sentence: >-
What are the potential consequences of model collapse in AI systems,
particularly regarding output homogenization?
sentences:
- >-
President ordered the full Federal government to work to root out
inequity, embed fairness in decision-
making processes, and affirmatively advance civil rights, equal
opportunity, and racial justice in America.1 The
President has spoken forcefully about the urgent challenges posed to
democracy today and has regularly called
on people of conscience to act to preserve civil rights—including the
right to privacy, which he has called “the
basis for so many more rights that we have come to take for granted that
are ingrained in the fabric of this
country.”2
To advance President Biden’s vision, the White House Office of Science
and Technology Policy has identified
five principles that should guide the design, use, and deployment of
automated systems to protect the American
public in the age of artificial intelligence. The Blueprint for an AI
Bill of Rights is a guide for a society that
protects all people from these threats—and uses technologies in ways
that reinforce our highest values.
Responding to the experiences of the American public, and informed by
insights from researchers,
technologists, advocates, journalists, and policymakers, this framework
is accompanied by a technical
companion—a handbook for anyone seeking to incorporate these protections
into policy and practice, including
detailed steps toward actualizing these principles in the technological
design process. These principles help
provide guidance whenever automated systems can meaningfully impact the
public’s rights, opportunities,
- >-
Synopsis of Responses to OSTP’s Request for Information on the Use and
Governance of Biometric
Technologies in the Public and Private Sectors. Science and Technology
Policy Institute. Mar. 2022.
https://www.ida.org/-/media/feature/publications/s/sy/synopsis-of-responses-to-request-for
information-on-the-use-and-governance-of-biometric-technologies/ida-document-d-33070.ashx
73
NIST Trustworthy and Responsible AI
NIST AI 600-1
Artificial Intelligence Risk Management
Framework: Generative Artificial
Intelligence Profile
This publication is available free of charge from:
https://doi.org/10.6028/NIST.AI.600-1
NIST Trustworthy and Responsible AI
NIST AI 600-1
Artificial Intelligence Risk Management
Framework: Generative Artificial
Intelligence Profile
This publication is available free of charge from:
https://doi.org/10.6028/NIST.AI.600-1
July 2024
U.S. Department of Commerce
- >-
new model’s outputs. In addition to threatening the robustness of the
model overall, model collapse
could lead to homogenized outputs, including by amplifying any
homogenization from the model used to
generate the synthetic training data.
Trustworthy AI Characteristics: Fair with Harmful Bias Managed, Valid
and Reliable
2.7. Human-AI Configuration
GAI system use can involve varying risks of misconfigurations and poor
interactions between a system
and a human who is interacting with it. Humans bring their unique
perspectives, experiences, or domain-
specific expertise to interactions with AI systems but may not have
detailed knowledge of AI systems and
how they work. As a result, human experts may be unnecessarily “averse”
to GAI systems, and thus
deprive themselves or others of GAI’s beneficial uses.
Conversely, due to the complexity and increasing reliability of GAI
technology, over time, humans may
over-rely on GAI systems or may unjustifiably perceive GAI content to be
of higher quality than that
produced by other sources. This phenomenon is an example of automation
bias, or excessive deference
to automated systems. Automation bias can exacerbate other risks of GAI,
such as risks of confabulation
or risks of bias or homogenization.
- source_sentence: >-
How is sensitive data defined in relation to individual privacy and
potential harm?
sentences:
- >-
recognized voluntary consensus standard for web content and other
information and communications
technology.
NIST has released Special Publication 1270, Towards a Standard for
Identifying and Managing Bias
in Artificial Intelligence.59 The special publication: describes the
stakes and challenges of bias in artificial
intelligence and provides examples of how and why it can chip away at
public trust; identifies three categories
of bias in AI – systemic, statistical, and human – and describes how and
where they contribute to harms; and
describes three broad challenges for mitigating bias – datasets, testing
and evaluation, and human factors – and
introduces preliminary guidance for addressing them. Throughout, the
special publication takes a socio-
technical perspective to identifying and managing AI bias.
29
Algorithmic
Discrimination
Protections
You should be protected from abusive data practices via built-in
protections and you should have agency over how data about
you is used. You should be protected from violations of privacy through
design choices that ensure such protections are included by default,
including
ensuring that data collection conforms to reasonable expectations and
that
only data strictly necessary for the specific context is collected.
Designers, de
velopers, and deployers of automated systems should seek your
permission
and respect your decisions regarding collection, use, access, transfer,
and de
- >-
of this framework. It describes the set of: civil rights, civil
liberties, and privacy, including freedom of speech,
voting, and protections from discrimination, excessive punishment,
unlawful surveillance, and violations of
privacy and other freedoms in both public and private sector contexts;
equal opportunities, including equitable
access to education, housing, credit, employment, and other programs;
or, access to critical resources or
services, such as healthcare, financial services, safety, social
services, non-deceptive information about goods
and services, and government benefits.
10
Applying The Blueprint for an AI Bill of Rights
SENSITIVE DATA: Data and metadata are sensitive if they pertain to an
individual in a sensitive domain
(defined below); are generated by technologies used in a sensitive
domain; can be used to infer data from a
sensitive domain or sensitive data about an individual (such as
disability-related data, genomic data, biometric
data, behavioral data, geolocation data, data related to interaction
with the criminal justice system, relationship
history and legal status such as custody and divorce information, and
home, work, or school environmental
data); or have the reasonable potential to be used in ways that are
likely to expose individuals to meaningful
harm, such as a loss of privacy or financial harm due to identity theft.
Data and metadata generated by or about
- >-
Generated explicit or obscene AI content may include highly realistic
“deepfakes” of real individuals,
including children. The spread of this kind of material can have
downstream negative consequences: in
the context of CSAM, even if the generated images do not resemble
specific individuals, the prevalence
of such images can divert time and resources from efforts to find
real-world victims. Outside of CSAM,
the creation and spread of NCII disproportionately impacts women and
sexual minorities, and can have
subsequent negative consequences including decline in overall mental
health, substance abuse, and
even suicidal thoughts.
Data used for training GAI models may unintentionally include CSAM and
NCII. A recent report noted
that several commonly used GAI training datasets were found to contain
hundreds of known images of
12
CSAM. Even when trained on “clean” data, increasingly capable GAI models
can synthesize or produce
synthetic NCII and CSAM. Websites, mobile apps, and custom-built models
that generate synthetic NCII
have moved from niche internet forums to mainstream, automated, and
scaled online businesses.
Trustworthy AI Characteristics: Fair with Harmful Bias Managed, Safe,
Privacy Enhanced
2.12.
Value Chain and Component Integration
GAI value chains involve many third-party components such as procured
datasets, pre-trained models,
- source_sentence: >-
How might GAI facilitate access to CBRN weapons and relevant knowledge for
malicious actors in the future?
sentences:
- >-
https://doi.org/10.6028/NIST.AI.600-1
July 2024
U.S. Department of Commerce
Gina M. Raimondo, Secretary
National Institute of Standards and Technology
Laurie E. Locascio, NIST Director and Under Secretary of Commerce for
Standards and Technology
About AI at NIST: The National Institute of Standards and Technology
(NIST) develops measurements,
technology, tools, and standards to advance reliable, safe, transparent,
explainable, privacy-enhanced,
and fair artificial intelligence (AI) so that its full commercial and
societal benefits can be realized without
harm to people or the planet. NIST, which has conducted both fundamental
and applied work on AI for
more than a decade, is also helping to fulfill the 2023 Executive Order
on Safe, Secure, and Trustworthy
AI. NIST established the U.S. AI Safety Institute and the companion AI
Safety Institute Consortium to
continue the efforts set in motion by the E.O. to build the science
necessary for safe, secure, and
trustworthy development and use of AI.
Acknowledgments: This report was accomplished with the many helpful
comments and contributions
- >-
the AI lifecycle; or other issues that diminish transparency or
accountability for downstream
users.
2.1. CBRN Information or Capabilities
In the future, GAI may enable malicious actors to more easily access
CBRN weapons and/or relevant
knowledge, information, materials, tools, or technologies that could be
misused to assist in the design,
development, production, or use of CBRN weapons or other dangerous
materials or agents. While
relevant biological and chemical threat knowledge and information is
often publicly accessible, LLMs
could facilitate its analysis or synthesis, particularly by individuals
without formal scientific training or
expertise.
Recent research on this topic found that LLM outputs regarding
biological threat creation and attack
planning provided minimal assistance beyond traditional search engine
queries, suggesting that state-of-
the-art LLMs at the time these studies were conducted do not
substantially increase the operational
likelihood of such an attack. The physical synthesis development,
production, and use of chemical or
biological agents will continue to require both applicable expertise and
supporting materials and
infrastructure. The impact of GAI on chemical or biological agent misuse
will depend on what the key
barriers for malicious actors are (e.g., whether information access is
one such barrier), and how well GAI
can help actors address those barriers.
- >-
played a central role in shaping the Blueprint for an AI Bill of Rights.
The core messages gleaned from these
discussions include that AI has transformative potential to improve
Americans’ lives, and that preventing the
harms of these technologies is both necessary and achievable. The
Appendix includes a full list of public engage-
ments.
4
AI BILL OF RIGHTS
FFECTIVE SYSTEMS
ineffective systems. Automated systems should be
communities, stakeholders, and domain experts to identify
Systems should undergo pre-deployment testing, risk
that demonstrate they are safe and effective based on
including those beyond the intended use, and adherence to
protective measures should include the possibility of not
Automated systems should not be designed with an intent
reasonably foreseeable possibility of endangering your safety or the
safety of your community. They should
stemming from unintended, yet foreseeable, uses or
SECTION TITLE
BLUEPRINT FOR AN
SAFE AND E
You should be protected from unsafe or
developed with consultation from diverse
concerns, risks, and potential impacts of the system.
identification and mitigation, and ongoing monitoring
their intended use, mitigation of unsafe outcomes
domain-specific standards. Outcomes of these
deploying the system or removing a system from use.
or
- source_sentence: >-
What are some key lessons learned from technological diffusion in urban
planning that could inform the integration of AI technologies in
communities?
sentences:
- >-
State University
•
Carl Holshouser, Senior Vice President for Operations and Strategic
Initiatives, TechNet
•
Surya Mattu, Senior Data Engineer and Investigative Data Journalist, The
Markup
•
Mariah Montgomery, National Campaign Director, Partnership for Working
Families
55
APPENDIX
Panelists discussed the benefits of AI-enabled systems and their
potential to build better and more
innovative infrastructure. They individually noted that while AI
technologies may be new, the process of
technological diffusion is not, and that it was critical to have
thoughtful and responsible development and
integration of technology within communities. Some panelists suggested
that the integration of technology
could benefit from examining how technological diffusion has worked in
the realm of urban planning:
lessons learned from successes and failures there include the importance
of balancing ownership rights, use
rights, and community health, safety and welfare, as well ensuring
better representation of all voices,
especially those traditionally marginalized by technological advances.
Some panelists also raised the issue of
power structures – providing examples of how strong transparency
requirements in smart city projects
helped to reshape power and give more voice to those lacking the
financial or political power to effect change.
In discussion of technical and governance interventions that that are
needed to protect against the harms
- >-
any mechanism that allows the recipient to build the necessary
understanding and intuitions to achieve the
stated purpose. Tailoring should be assessed (e.g., via user experience
research).
Tailored to the target of the explanation. Explanations should be
targeted to specific audiences and
clearly state that audience. An explanation provided to the subject of a
decision might differ from one provided
to an advocate, or to a domain expert or decision maker. Tailoring
should be assessed (e.g., via user experience
research).
43
NOTICE &
EXPLANATION
WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS
The expectations for automated systems are meant to serve as a blueprint
for the development of additional
technical standards and practices that are tailored for particular
sectors and contexts.
Tailored to the level of risk. An assessment should be done to determine
the level of risk of the auto
mated system. In settings where the consequences are high as determined
by a risk assessment, or extensive
oversight is expected (e.g., in criminal justice or some public sector
settings), explanatory mechanisms should
be built into the system design so that the system’s full behavior can
be explained in advance (i.e., only fully
transparent models should be used), rather than as an after-the-decision
interpretation. In other settings, the
- >-
research on rigorous and reproducible methodologies for developing
software systems with legal and regulatory
compliance in mind.
Some state legislatures have placed strong transparency and validity
requirements on
the use of pretrial risk assessments. The use of algorithmic pretrial
risk assessments has been a
cause of concern for civil rights groups.28 Idaho Code Section 19-1910,
enacted in 2019,29 requires that any
pretrial risk assessment, before use in the state, first be "shown to be
free of bias against any class of
individuals protected from discrimination by state or federal law", that
any locality using a pretrial risk
assessment must first formally validate the claim of its being free of
bias, that "all documents, records, and
information used to build or validate the risk assessment shall be open
to public inspection," and that assertions
of trade secrets cannot be used "to quash discovery in a criminal matter
by a party to a criminal case."
22
ALGORITHMIC DISCRIMINATION Protections
You should not face discrimination by algorithms
and systems should be used and designed in an
equitable
way.
Algorithmic
discrimination
occurs when
automated systems contribute to unjustified different treatment or
impacts disfavoring people based on their race, color, ethnicity,
sex
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.75
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.96
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.97
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.75
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19199999999999995
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09699999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.75
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.96
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.97
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8673712763276756
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8336111111111113
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8360959595959596
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.75
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.96
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.97
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.75
name: Dot Precision@1
- type: dot_precision@3
value: 0.3
name: Dot Precision@3
- type: dot_precision@5
value: 0.19199999999999995
name: Dot Precision@5
- type: dot_precision@10
value: 0.09699999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.75
name: Dot Recall@1
- type: dot_recall@3
value: 0.9
name: Dot Recall@3
- type: dot_recall@5
value: 0.96
name: Dot Recall@5
- type: dot_recall@10
value: 0.97
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.8673712763276756
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8336111111111113
name: Dot Mrr@10
- type: dot_map@100
value: 0.8360959595959596
name: Dot Map@100
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m)
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
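
These properties can be checked at load time. A minimal sketch, using the repository id shown in the Usage section below:

```python
from sentence_transformers import SentenceTransformer

# Load the fine-tuned model and confirm the properties listed above.
model = SentenceTransformer("Mdean77/finetuned_arctic")

print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # "cosine"
```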
### Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Mdean77/finetuned_arctic")
# Run inference
sentences = [
    'What are some key lessons learned from technological diffusion in urban planning that could inform the integration of AI technologies in communities?',
    'State University\n•\nCarl Holshouser, Senior Vice President for Operations and Strategic Initiatives, TechNet\n•\nSurya Mattu, Senior Data Engineer and Investigative Data Journalist, The Markup\n•\nMariah Montgomery, National Campaign Director, Partnership for Working Families\n55\n \n \n \n \nAPPENDIX\nPanelists discussed the benefits of AI-enabled systems and their potential to build better and more \ninnovative infrastructure. They individually noted that while AI technologies may be new, the process of \ntechnological diffusion is not, and that it was critical to have thoughtful and responsible development and \nintegration of technology within communities. Some panelists suggested that the integration of technology \ncould benefit from examining how technological diffusion has worked in the realm of urban planning: \nlessons learned from successes and failures there include the importance of balancing ownership rights, use \nrights, and community health, safety and welfare, as well ensuring better representation of all voices, \nespecially those traditionally marginalized by technological advances. Some panelists also raised the issue of \npower structures – providing examples of how strong transparency requirements in smart city projects \nhelped to reshape power and give more voice to those lacking the financial or political power to effect change. \nIn discussion of technical and governance interventions that that are needed to protect against the harms',
    'any mechanism that allows the recipient to build the necessary understanding and intuitions to achieve the \nstated purpose. Tailoring should be assessed (e.g., via user experience research). \nTailored to the target of the explanation. Explanations should be targeted to specific audiences and \nclearly state that audience. An explanation provided to the subject of a decision might differ from one provided \nto an advocate, or to a domain expert or decision maker. Tailoring should be assessed (e.g., via user experience \nresearch). \n43\n \n \n \n \n \n \nNOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nTailored to the level of risk. An assessment should be done to determine the level of risk of the auto\xad\nmated system. In settings where the consequences are high as determined by a risk assessment, or extensive \noversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should \nbe built into the system design so that the system’s full behavior can be explained in advance (i.e., only fully \ntransparent models should be used), rather than as an after-the-decision interpretation. In other settings, the',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
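
Because the model was trained with MatryoshkaLoss over the dimensions [768, 512, 256, 128, 64] (see the Training Details section below), the embeddings can plausibly be truncated to a smaller dimensionality at load time. A minimal sketch, assuming the `truncate_dim` option available in recent Sentence Transformers releases:

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings,
# truncating the 768-dimensional Matryoshka embeddings.
model_256 = SentenceTransformer("Mdean77/finetuned_arctic", truncate_dim=256)

embeddings = model_256.encode([
    "How is sensitive data defined in relation to individual privacy and potential harm?",
])
print(embeddings.shape)
# (1, 256)
```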
## Evaluation

### Metrics

#### Information Retrieval

- Evaluated with `InformationRetrievalEvaluator`

| Metric               | Value  |
|:---------------------|:-------|
| cosine_accuracy@1    | 0.75   |
| cosine_accuracy@3    | 0.9    |
| cosine_accuracy@5    | 0.96   |
| cosine_accuracy@10   | 0.97   |
| cosine_precision@1   | 0.75   |
| cosine_precision@3   | 0.3    |
| cosine_precision@5   | 0.192  |
| cosine_precision@10  | 0.097  |
| cosine_recall@1      | 0.75   |
| cosine_recall@3      | 0.9    |
| cosine_recall@5      | 0.96   |
| cosine_recall@10     | 0.97   |
| cosine_ndcg@10       | 0.8674 |
| cosine_mrr@10        | 0.8336 |
| cosine_map@100       | 0.8361 |
| dot_accuracy@1       | 0.75   |
| dot_accuracy@3       | 0.9    |
| dot_accuracy@5       | 0.96   |
| dot_accuracy@10      | 0.97   |
| dot_precision@1      | 0.75   |
| dot_precision@3      | 0.3    |
| dot_precision@5      | 0.192  |
| dot_precision@10     | 0.097  |
| dot_recall@1         | 0.75   |
| dot_recall@3         | 0.9    |
| dot_recall@5         | 0.96   |
| dot_recall@10        | 0.97   |
| dot_ndcg@10          | 0.8674 |
| dot_mrr@10           | 0.8336 |
| dot_map@100          | 0.8361 |
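
The cosine and dot-product rows are identical because the model ends in a Normalize module, so dot products of the unit-length embeddings equal cosine similarities. A hedged sketch of how such scores could be reproduced with `InformationRetrievalEvaluator` (the query/corpus ids and texts below are placeholders, not the actual evaluation set):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Mdean77/finetuned_arctic")

# Placeholder evaluation data: query id -> text, doc id -> text,
# and query id -> set of relevant doc ids.
queries = {"q1": "How is sensitive data defined in relation to individual privacy and potential harm?"}
corpus = {"d1": "SENSITIVE DATA: Data and metadata are sensitive if they pertain to an individual in a sensitive domain."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
results = evaluator(model)
print(results)  # metrics such as accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
```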
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 502 training samples
- Columns: `sentence_0` and `sentence_1`
- Approximate statistics based on the first 502 samples:

  |         | sentence_0                                                                         | sentence_1                                                                             |
  |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
  | type    | string                                                                               | string                                                                                    |
  | details | <ul><li>min: 2 tokens</li><li>mean: 21.89 tokens</li><li>max: 38 tokens</li></ul>    | <ul><li>min: 158 tokens</li><li>mean: 263.58 tokens</li><li>max: 512 tokens</li></ul>     |
- Samples:

  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What is the purpose of the Blueprint for an AI Bill of Rights published by the White House Office of Science and Technology Policy?</code> | <code>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022 <br>About this Document <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was <br>published by the White House Office of Science and Technology Policy in October 2022. This framework was <br>released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered <br>world.” Its release follows a year of public engagement to inform this initiative. The framework is available <br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights <br>About the Office of Science and Technology Policy <br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology <br>Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office <br>of the President with advice on the scientific, engineering, and technological aspects of the economy, national</code> |
  | <code>When was the Office of Science and Technology Policy established, and what is its primary function?</code> | <code>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022 <br>About this Document <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was <br>published by the White House Office of Science and Technology Policy in October 2022. This framework was <br>released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered <br>world.” Its release follows a year of public engagement to inform this initiative. The framework is available <br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights <br>About the Office of Science and Technology Policy <br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology <br>Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office <br>of the President with advice on the scientific, engineering, and technological aspects of the economy, national</code> |
  | <code>What is the primary purpose of the Policy, Organization, and Priorities Act of 1976 as it relates to the Executive Office of the President?</code> | <code>Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office <br>of the President with advice on the scientific, engineering, and technological aspects of the economy, national <br>security, health, foreign relations, the environment, and the technological recovery and use of resources, among <br>other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of <br>Management and Budget (OMB) with an annual review and analysis of Federal research and development in <br>budgets, and serves as a source of scientific and technological analysis and judgment for the President with <br>respect to major policies, plans, and programs of the Federal Government. <br>Legal Disclaimer <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper <br>published by the White House Office of Science and Technology Policy. It is intended to support the <br>development of policies and practices that protect civil rights and promote democratic values in the building, <br>deployment, and governance of automated systems. <br>The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It <br>does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or <br>international instrument. It does not constitute binding guidance for the public or Federal agencies and</code> |

- Loss: `MatryoshkaLoss` with these parameters:

  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
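
A minimal sketch of how a question/passage dataset with these column names and this loss configuration could be constructed. The exact training script is not part of this card, so treat this as an illustration rather than the author's code:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Illustrative pair in the same format as the dataset above (texts abridged).
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is the purpose of the Blueprint for an AI Bill of Rights published by the White House Office of Science and Technology Policy?"],
    "sentence_1": ["The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was published by the White House Office of Science and Technology Policy in October 2022."],
})

# MultipleNegativesRankingLoss treats the other in-batch passages as negatives;
# MatryoshkaLoss applies it at each truncated embedding size.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```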
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
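
A hedged sketch of how these non-default values map onto `SentenceTransformerTrainingArguments` (the output directory is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

# Non-default values from this card; all other arguments keep their defaults.
args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_arctic",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    num_train_epochs=5,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
# These arguments would then be passed to a SentenceTransformerTrainer together with
# the train_dataset, the MatryoshkaLoss sketched above, and an InformationRetrievalEvaluator.
```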
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
### Training Logs

| Epoch  | Step | cosine_map@100 |
|:------:|:----:|:--------------:|
| 1.0    | 26   | 0.7610         |
| 1.9231 | 50   | 0.8249         |
| 2.0    | 52   | 0.8317         |
| 3.0    | 78   | 0.8295         |
| 3.8462 | 100  | 0.8361         |
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
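
To approximate this environment (under Python 3.11), the pinned library versions above can be installed directly; a convenience sketch, not an official requirements file:

```bash
pip install "sentence-transformers==3.1.1" "transformers==4.44.2" "torch==2.4.1" \
            "accelerate==0.34.2" "datasets==3.0.0" "tokenizers==0.19.1"
```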
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```