| column | type |
|---|---|
| text | string (lengths 307–2.02k) |
| inputs | dict |
| prediction | null |
| prediction_agent | null |
| annotation | string (2 classes) |
| annotation_agent | string (1 value) |
| vectors | null |
| multi_label | bool (1 class) |
| explanation | null |
| id | string (length 36) |
| metadata | null |
| status | string (2 classes) |
| metrics | dict |
| label | class label (2 classes) |
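Each block below is one annotation record with the fourteen columns above. As a rough, non-authoritative sketch, the Python typing below mirrors that layout; the field names come from the schema, while the shapes of `inputs` and `metrics` and the 0/1 label mapping are inferred from the example records that follow, not from an official specification.

```python
# Sketch of one record's layout, mirroring the columns listed above.
# The Inputs/Metrics shapes and the label mapping are inferred from the
# example rows below; they are assumptions, not an official schema.
from typing import Optional, TypedDict


class Inputs(TypedDict):
    abstract: str
    title: str
    url: str


class Metrics(TypedDict):
    text_length: int


class Record(TypedDict):
    text: str                        # "TITLE: ...\nABSTRACT: ..."
    inputs: Inputs
    prediction: Optional[str]        # null in every row shown here
    prediction_agent: Optional[str]  # null in every row shown here
    annotation: str                  # "new_dataset" or "no_new_dataset"
    annotation_agent: str            # "admin" in the rows shown
    vectors: Optional[dict]          # null in every row shown here
    multi_label: bool                # false in every row shown here
    explanation: Optional[str]       # null in every row shown here
    id: str                          # 36-character UUID
    metadata: Optional[dict]         # null in every row shown here
    status: str                      # "Validated" in the rows shown
    metrics: Metrics                 # e.g. {"text_length": 1171}
    label: int                       # 0 = new_dataset, 1 = no_new_dataset
```

---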
TITLE: SPICE, A Dataset of Drug-like Molecules and Peptides for Training Machine Learning Potentials
ABSTRACT: Machine learning potentials are an important tool for molecular simulation,
but their development is held back by a shortage of high quality datasets to
train them on. We describe the SPICE dataset, a new quantum chemistry dataset
for training potentials relevant to simulating drug-like small molecules
interacting with proteins. It contains over 1.1 million conformations for a
diverse set of small molecules, dimers, dipeptides, and solvated amino acids.
It includes 15 elements, charged and uncharged molecules, and a wide range of
covalent and non-covalent interactions. It provides both forces and energies
calculated at the {\omega}B97M-D3(BJ)/def2-TZVPPD level of theory, along with
other useful quantities such as multipole moments and bond orders. We train a
set of machine learning potentials on it and demonstrate that they can achieve
chemical accuracy across a broad region of chemical space. It can serve as a
valuable resource for the creation of transferable, ready to use potential
functions for use in molecular simulations.

inputs: {
"abstract": "Machine learning potentials are an important tool for molecular simulation,\nbut their development is held back by a shortage of high quality datasets to\ntrain them on. We describe the SPICE dataset, a new quantum chemistry dataset\nfor training potentials relevant to simulating drug-like small molecules\ninteracting with proteins. It contains over 1.1 million conformations for a\ndiverse set of small molecules, dimers, dipeptides, and solvated amino acids.\nIt includes 15 elements, charged and uncharged molecules, and a wide range of\ncovalent and non-covalent interactions. It provides both forces and energies\ncalculated at the {\\omega}B97M-D3(BJ)/def2-TZVPPD level of theory, along with\nother useful quantities such as multipole moments and bond orders. We train a\nset of machine learning potentials on it and demonstrate that they can achieve\nchemical accuracy across a broad region of chemical space. It can serve as a\nvaluable resource for the creation of transferable, ready to use potential\nfunctions for use in molecular simulations.",
"title": "SPICE, A Dataset of Drug-like Molecules and Peptides for Training Machine Learning Potentials",
"url": "http://arxiv.org/abs/2209.10702v2"
}
prediction: null | prediction_agent: null
annotation: new_dataset | annotation_agent: admin
vectors: null | multi_label: false | explanation: null
id: 1218fc36-d914-47a0-b45d-25c4d61b317d
metadata: null | status: Validated
metrics: { "text_length": 1171 }
label: 0 (new_dataset)

---
TITLE: LEMMA: A Multi-view Dataset for Learning Multi-agent Multi-task Activities
ABSTRACT: Understanding and interpreting human actions is a long-standing challenge and
a critical indicator of perception in artificial intelligence. However, a few
imperative components of daily human activities are largely missed in prior
literature, including the goal-directed actions, concurrent multi-tasks, and
collaborations among multi-agents. We introduce the LEMMA dataset to provide a
single home to address these missing dimensions with meticulously designed
settings, wherein the number of tasks and agents varies to highlight different
learning objectives. We densely annotate the atomic-actions with human-object
interactions to provide ground-truths of the compositionality, scheduling, and
assignment of daily activities. We further devise challenging compositional
action recognition and action/task anticipation benchmarks with baseline models
to measure the capability of compositional action understanding and temporal
reasoning. We hope this effort would drive the machine vision community to
examine goal-directed human activities and further study the task scheduling
and assignment in the real world.

inputs: {
"abstract": "Understanding and interpreting human actions is a long-standing challenge and\na critical indicator of perception in artificial intelligence. However, a few\nimperative components of daily human activities are largely missed in prior\nliterature, including the goal-directed actions, concurrent multi-tasks, and\ncollaborations among multi-agents. We introduce the LEMMA dataset to provide a\nsingle home to address these missing dimensions with meticulously designed\nsettings, wherein the number of tasks and agents varies to highlight different\nlearning objectives. We densely annotate the atomic-actions with human-object\ninteractions to provide ground-truths of the compositionality, scheduling, and\nassignment of daily activities. We further devise challenging compositional\naction recognition and action/task anticipation benchmarks with baseline models\nto measure the capability of compositional action understanding and temporal\nreasoning. We hope this effort would drive the machine vision community to\nexamine goal-directed human activities and further study the task scheduling\nand assignment in the real world.",
"title": "LEMMA: A Multi-view Dataset for Learning Multi-agent Multi-task Activities",
"url": "http://arxiv.org/abs/2007.15781v1"
}
prediction: null | prediction_agent: null
annotation: new_dataset | annotation_agent: admin
vectors: null | multi_label: false | explanation: null
id: 238beaef-fde6-4639-afd6-26f57f4322dd
metadata: null | status: Validated
metrics: { "text_length": 1226 }
label: 0 (new_dataset)

---
TITLE: A Synthetic Dataset for 5G UAV Attacks Based on Observable Network Parameters
ABSTRACT: Synthetic datasets are beneficial for machine learning researchers due to the
possibility of experimenting with new strategies and algorithms in the training
and testing phases. These datasets can easily include more scenarios that might
be costly to research with real data or can complement and, in some cases,
replace real data measurements, depending on the quality of the synthetic data.
They can also solve the unbalanced data problem, avoid overfitting, and can be
used in training while testing can be done with real data. In this paper, we
present, to the best of our knowledge, the first synthetic dataset for Unmanned
Aerial Vehicle (UAV) attacks in 5G and beyond networks based on the following
key observable network parameters that indicate power levels: the Received
Signal Strength Indicator (RSSI) and the Signal to Interference-plus-Noise
Ratio (SINR). The main objective of this data is to enable deep network
development for UAV communication security. Especially, for algorithm
development or the analysis of time-series data applied to UAV attack
recognition. Our proposed dataset provides insights into network functionality
when static or moving UAV attackers target authenticated UAVs in an urban
environment. The dataset also considers the presence and absence of
authenticated terrestrial users in the network, which may decrease the deep
networks ability to identify attacks. Furthermore, the data provides deeper
comprehension of the metrics available in the 5G physical and MAC layers for
machine learning and statistics research. The dataset will available at link
archive-beta.ics.uci.edu

inputs: {
"abstract": "Synthetic datasets are beneficial for machine learning researchers due to the\npossibility of experimenting with new strategies and algorithms in the training\nand testing phases. These datasets can easily include more scenarios that might\nbe costly to research with real data or can complement and, in some cases,\nreplace real data measurements, depending on the quality of the synthetic data.\nThey can also solve the unbalanced data problem, avoid overfitting, and can be\nused in training while testing can be done with real data. In this paper, we\npresent, to the best of our knowledge, the first synthetic dataset for Unmanned\nAerial Vehicle (UAV) attacks in 5G and beyond networks based on the following\nkey observable network parameters that indicate power levels: the Received\nSignal Strength Indicator (RSSI) and the Signal to Interference-plus-Noise\nRatio (SINR). The main objective of this data is to enable deep network\ndevelopment for UAV communication security. Especially, for algorithm\ndevelopment or the analysis of time-series data applied to UAV attack\nrecognition. Our proposed dataset provides insights into network functionality\nwhen static or moving UAV attackers target authenticated UAVs in an urban\nenvironment. The dataset also considers the presence and absence of\nauthenticated terrestrial users in the network, which may decrease the deep\nnetworks ability to identify attacks. Furthermore, the data provides deeper\ncomprehension of the metrics available in the 5G physical and MAC layers for\nmachine learning and statistics research. The dataset will available at link\narchive-beta.ics.uci.edu",
"title": "A Synthetic Dataset for 5G UAV Attacks Based on Observable Network Parameters",
"url": "http://arxiv.org/abs/2211.09706v1"
}
prediction: null | prediction_agent: null
annotation: new_dataset | annotation_agent: admin
vectors: null | multi_label: false | explanation: null
id: a407ec96-3a6e-432c-88ce-b3caf3cd1e90
metadata: null | status: Validated
metrics: { "text_length": 1732 }
label: 0 (new_dataset)

---
TITLE: A Wideband Signal Recognition Dataset
ABSTRACT: Signal recognition is a spectrum sensing problem that jointly requires
detection, localization in time and frequency, and classification. This is a
step beyond most spectrum sensing work which involves signal detection to
estimate "present" or "not present" detections for either a single channel or
fixed sized channels or classification which assumes a signal is present. We
define the signal recognition task, present the metrics of precision and recall
to the RF domain, and review recent machine-learning based approaches to this
problem. We introduce a new dataset that is useful for training neural networks
to perform these tasks and show a training framework to train wideband signal
recognizers.

inputs: {
"abstract": "Signal recognition is a spectrum sensing problem that jointly requires\ndetection, localization in time and frequency, and classification. This is a\nstep beyond most spectrum sensing work which involves signal detection to\nestimate \"present\" or \"not present\" detections for either a single channel or\nfixed sized channels or classification which assumes a signal is present. We\ndefine the signal recognition task, present the metrics of precision and recall\nto the RF domain, and review recent machine-learning based approaches to this\nproblem. We introduce a new dataset that is useful for training neural networks\nto perform these tasks and show a training framework to train wideband signal\nrecognizers.",
"title": "A Wideband Signal Recognition Dataset",
"url": "http://arxiv.org/abs/2110.00518v1"
}
prediction: null | prediction_agent: null
annotation: new_dataset | annotation_agent: admin
vectors: null | multi_label: false | explanation: null
id: 0880825e-0337-4018-b95d-6e5209e389dc
metadata: null | status: Validated
metrics: { "text_length": 777 }
label: 0 (new_dataset)

---
TITLE: Deep Learning-based ECG Classification on Raspberry PI using a Tensorflow Lite Model based on PTB-XL Dataset
ABSTRACT: The number of IoT devices in healthcare is expected to rise sharply due to
increased demand since the COVID-19 pandemic. Deep learning and IoT devices are
being employed to monitor body vitals and automate anomaly detection in
clinical and non-clinical settings. Most of the current technology requires the
transmission of raw data to a remote server, which is not efficient for
resource-constrained IoT devices and embedded systems. Additionally, it is
challenging to develop a machine learning model for ECG classification due to
the lack of an extensive open public database. To an extent, to overcome this
challenge PTB-XL dataset has been used. In this work, we have developed machine
learning models to be deployed on Raspberry Pi. We present an evaluation of our
TensorFlow Model with two classification classes. We also present the
evaluation of the corresponding TensorFlow Lite FlatBuffers to demonstrate
their minimal run-time requirements while maintaining acceptable accuracy.

inputs: {
"abstract": "The number of IoT devices in healthcare is expected to rise sharply due to\nincreased demand since the COVID-19 pandemic. Deep learning and IoT devices are\nbeing employed to monitor body vitals and automate anomaly detection in\nclinical and non-clinical settings. Most of the current technology requires the\ntransmission of raw data to a remote server, which is not efficient for\nresource-constrained IoT devices and embedded systems. Additionally, it is\nchallenging to develop a machine learning model for ECG classification due to\nthe lack of an extensive open public database. To an extent, to overcome this\nchallenge PTB-XL dataset has been used. In this work, we have developed machine\nlearning models to be deployed on Raspberry Pi. We present an evaluation of our\nTensorFlow Model with two classification classes. We also present the\nevaluation of the corresponding TensorFlow Lite FlatBuffers to demonstrate\ntheir minimal run-time requirements while maintaining acceptable accuracy.",
"title": "Deep Learning-based ECG Classification on Raspberry PI using a Tensorflow Lite Model based on PTB-XL Dataset",
"url": "http://arxiv.org/abs/2209.00989v1"
}
prediction: null | prediction_agent: null
annotation: no_new_dataset | annotation_agent: admin
vectors: null | multi_label: false | explanation: null
id: bf432be5-a787-4967-ad96-435304af3be2
metadata: null | status: Validated
metrics: { "text_length": 1132 }
label: 1 (no_new_dataset)

---
TITLE: Quantum Transfer Learning for Real-World, Small, and High-Dimensional Datasets
ABSTRACT: Quantum machine learning (QML) networks promise to have some computational
(or quantum) advantage for classifying supervised datasets (e.g., satellite
images) over some conventional deep learning (DL) techniques due to their
expressive power via their local effective dimension. There are, however, two
main challenges regardless of the promised quantum advantage: 1) Currently
available quantum bits (qubits) are very small in number, while real-world
datasets are characterized by hundreds of high-dimensional elements (i.e.,
features). Additionally, there is not a single unified approach for embedding
real-world high-dimensional datasets in a limited number of qubits. 2) Some
real-world datasets are too small for training intricate QML networks. Hence,
to tackle these two challenges for benchmarking and validating QML networks on
real-world, small, and high-dimensional datasets in one-go, we employ quantum
transfer learning composed of a multi-qubit QML network, and a very deep
convolutional network (a with VGG16 architecture) extracting informative
features from any small, high-dimensional dataset. We use real-amplitude and
strongly-entangling N-layer QML networks with and without data re-uploading
layers as a multi-qubit QML network, and evaluate their expressive power
quantified by using their local effective dimension; the lower the local
effective dimension of a QML network, the better its performance on unseen
data. Our numerical results show that the strongly-entangling N-layer QML
network has a lower local effective dimension than the real-amplitude QML
network and outperforms it on the hard-to-classify three-class labelling
problem. In addition, quantum transfer learning helps tackle the two challenges
mentioned above for benchmarking and validating QML networks on real-world,
small, and high-dimensional datasets.

inputs: {
"abstract": "Quantum machine learning (QML) networks promise to have some computational\n(or quantum) advantage for classifying supervised datasets (e.g., satellite\nimages) over some conventional deep learning (DL) techniques due to their\nexpressive power via their local effective dimension. There are, however, two\nmain challenges regardless of the promised quantum advantage: 1) Currently\navailable quantum bits (qubits) are very small in number, while real-world\ndatasets are characterized by hundreds of high-dimensional elements (i.e.,\nfeatures). Additionally, there is not a single unified approach for embedding\nreal-world high-dimensional datasets in a limited number of qubits. 2) Some\nreal-world datasets are too small for training intricate QML networks. Hence,\nto tackle these two challenges for benchmarking and validating QML networks on\nreal-world, small, and high-dimensional datasets in one-go, we employ quantum\ntransfer learning composed of a multi-qubit QML network, and a very deep\nconvolutional network (a with VGG16 architecture) extracting informative\nfeatures from any small, high-dimensional dataset. We use real-amplitude and\nstrongly-entangling N-layer QML networks with and without data re-uploading\nlayers as a multi-qubit QML network, and evaluate their expressive power\nquantified by using their local effective dimension; the lower the local\neffective dimension of a QML network, the better its performance on unseen\ndata. Our numerical results show that the strongly-entangling N-layer QML\nnetwork has a lower local effective dimension than the real-amplitude QML\nnetwork and outperforms it on the hard-to-classify three-class labelling\nproblem. In addition, quantum transfer learning helps tackle the two challenges\nmentioned above for benchmarking and validating QML networks on real-world,\nsmall, and high-dimensional datasets.",
"title": "Quantum Transfer Learning for Real-World, Small, and High-Dimensional Datasets",
"url": "http://arxiv.org/abs/2209.07799v4"
}
prediction: null | prediction_agent: null
annotation: no_new_dataset | annotation_agent: admin
vectors: null | multi_label: false | explanation: null
id: c751cb1f-cd90-46e8-8490-989b40bf0b76
metadata: null | status: Validated
metrics: { "text_length": 1964 }
label: 1 (no_new_dataset)

---
TITLE: Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning
ABSTRACT: We analyze the growth of dataset sizes used in machine learning for natural
language processing and computer vision, and extrapolate these using two
methods; using the historical growth rate and estimating the compute-optimal
dataset size for future predicted compute budgets. We investigate the growth in
data usage by estimating the total stock of unlabeled data available on the
internet over the coming decades. Our analysis indicates that the stock of
high-quality language data will be exhausted soon; likely before 2026. By
contrast, the stock of low-quality language data and image data will be
exhausted only much later; between 2030 and 2050 (for low-quality language) and
between 2030 and 2060 (for images). Our work suggests that the current trend of
ever-growing ML models that rely on enormous datasets might slow down if data
efficiency is not drastically improved or new sources of data become available.

inputs: {
"abstract": "We analyze the growth of dataset sizes used in machine learning for natural\nlanguage processing and computer vision, and extrapolate these using two\nmethods; using the historical growth rate and estimating the compute-optimal\ndataset size for future predicted compute budgets. We investigate the growth in\ndata usage by estimating the total stock of unlabeled data available on the\ninternet over the coming decades. Our analysis indicates that the stock of\nhigh-quality language data will be exhausted soon; likely before 2026. By\ncontrast, the stock of low-quality language data and image data will be\nexhausted only much later; between 2030 and 2050 (for low-quality language) and\nbetween 2030 and 2060 (for images). Our work suggests that the current trend of\never-growing ML models that rely on enormous datasets might slow down if data\nefficiency is not drastically improved or new sources of data become available.",
"title": "Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning",
"url": "http://arxiv.org/abs/2211.04325v1"
}
prediction: null | prediction_agent: null
annotation: no_new_dataset | annotation_agent: admin
vectors: null | multi_label: false | explanation: null
id: 8b03960a-0289-41a1-b6d6-0217646e99bc
metadata: null | status: Validated
metrics: { "text_length": 1045 }
label: 1 (no_new_dataset)
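
As a usage illustration only: if records in this shape were exported as JSON lines, one could turn them into (text, label) pairs for a binary "does this paper introduce a new dataset?" classifier roughly as below. The file name `records.jsonl` is hypothetical; only the field names and the 0/1 label mapping come from the rows above.

```python
# Hypothetical usage sketch: read records shaped like the rows above from a
# JSON-lines export and keep (text, label) pairs for a binary classifier.
# "records.jsonl" is an assumed file name; the field names and the
# 0 = new_dataset / 1 = no_new_dataset mapping come from the schema above.
import json


def load_examples(path: str = "records.jsonl") -> list[tuple[str, int]]:
    examples: list[tuple[str, int]] = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            # Keep only human-validated rows, mirroring the "status" column.
            if record.get("status") != "Validated":
                continue
            examples.append((record["text"], int(record["label"])))
    return examples


if __name__ == "__main__":
    # Print each example's TITLE line next to its label.
    for text, label in load_examples()[:5]:
        print(label, text.splitlines()[0])
```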