_id (string, length 36) | text (string, length 200 to 328k) | label (5 classes)
---|---|---|
eec45d45-3d50-4434-aae7-5f6ca5f56cd5
|
The Portuguese Parliament (PP) comprises a single chamber, known as the Assembleia da República. This is the representative assembly of all Portuguese citizens; it also oversees the life cycles of the laws implemented in the Republic while ensuring their compliance with the National Constitution. Portugal has records of parliamentary debates since 1821, when the country was still under monarchic rule, through the First Republic (1911-1926), the `New State' (1935-1974), and the current democratic period from 1975 to the present day. These records comprise a number of distinct document series produced by the Portuguese government to document parliamentary activities, each corresponding to one of the aforementioned historic periods and following its own protocols for data recording.
|
i
|
7ac75b8e-dc94-41e0-bc6b-77c517dc2dc8
|
The study of the PP offers challenges at both the political science and the computational levels, representing an excellent case study for approaching multiparty systems. From its peaceful revolution in 1974 to joining the EU, Portugal has had a very rich 40 years of political history while maintaining a democratic regime.
Thus, the PP can become a great model for computational political science research, while also allowing for comparative studies, for several reasons. Firstly, it is a relatively small parliament, with the 230 elected Members of Parliament (MPs) representing the country as a whole rather than a particular regional constituency, and with a limited number of players: Portugal has had fewer than 1,900 MPs since 1976, and more than 200 of those were, at some point, Members of Government (MG). Secondly, it is a multiparty system, with varying levels of polarization, that does not favour voting freedom but allows for other types of dissent: voting against the party line is very costly and infrequent, with divergence happening mostly at the level of discourse. Thirdly, refining existing computational tools to detect more subtle differences will open new possibilities for the study of other types of discourse and settings. Fourthly, the construction of a political discourse corpus in Portuguese is a novel contribution, and it is fundamental to start developing tools for widely spoken languages other than English. In fact, concerning the debates of the PP, at the start of this project there was a corpus released as a subset of the Corpus of Reference of Contemporary Portuguese, and community efforts to facilitate access to publicly available data, such as the Demo.cratica website, but no comprehensive publication of all the diaries in a (semi-)structured format. In 2019, POPaD was launched with similar goals, but using a different methodology for speaker identification and assignment.
|
i
|
077ec3a8-519a-4e66-b1f3-1d50f5c8a4d3
|
The specific data source we turned into an annotated corpus is Series I of the `Third Republic', which stores the transcripts of the debates that have taken place in the Portuguese National Assembly since 1976, covering the full history of Portuguese democracy. Thus, this corpus builds on the growing collection of parliamentary corpora at the European level, and offers tools for discourse analysis and comparative studies at the national and international levels.
|
i
|
83021485-51c1-4bfd-8c9b-c9a22fb366b7
|
Seminal work on the behavioral approach [1]} for dynamic systems has been receiving growing attention in recent years [2]}, [3]}, [4]}. Instead of using matrices \(A\) , \(B\) , \(C\) and \(D\) for linear dynamics, which are often obtained from first principles applied to the target dynamics,
the behavioral approach uses a set of equations to determine whether a pair of input and output belongs to the target dynamics. For linear dynamics, these equations involve Hankel matrices, which are built from input/output trajectories that satisfy a persistency of excitation condition. This approach is well suited when data are available but first-principles modeling is not straightforward.
Recent advances in this area include [5]}, [6]}, [3]}, [8]}.
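As a rough illustration of the quantities just mentioned, the following Python/NumPy sketch (with illustrative function names, not taken from the cited works) builds a block-Hankel matrix from a recorded trajectory and checks the usual full-row-rank form of the persistency of excitation condition.

```python
import numpy as np

def block_hankel(w, L):
    """Depth-L block-Hankel matrix of a trajectory w with shape (T, m)."""
    T, m = w.shape
    cols = T - L + 1
    H = np.empty((L * m, cols))
    for i in range(cols):
        H[:, i] = w[i:i + L].reshape(-1)  # stack L consecutive samples as one column
    return H

def is_persistently_exciting(u, order):
    """u is persistently exciting of the given order iff H_order(u) has full row rank."""
    H = block_hankel(u, order)
    return np.linalg.matrix_rank(H) == H.shape[0]
```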
|
i
|
738ad6ae-07f9-4e9c-9355-03aa6d4366df
|
Particularly, as given in [1]}, the fundamental result of this approach is that we can determine a future output for a given input using Hankel matrices built from previously collected input and output data. No identification of system matrices (typically \(A\) , \(B\) , \(C\) , \(D\) ) is necessary. The question this paper is concerned with is the inverse of what was just described: can we determine the input that generated a given output using previously collected input and output data? In other words, the question pertains to finding conditions and methods for building the inverse dynamics of a given system from collected data. If this can be carried out successfully, an immediate application is to build a disturbance observer [2]}, [3]} from data, which identifies the disturbance that affected the output.
Another application is to find a suitable constrained input that yields an output close to the desired output under constraints.
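To make the forward problem described above concrete, here is a hedged numerical sketch (not the paper's formulation) of how a future output can be predicted from a future input and a short initial trajectory using only Hankel matrices of previously collected data; it reuses the block_hankel helper from the earlier sketch and resolves the non-uniqueness of the pre-image with a least-squares solve.

```python
import numpy as np  # assumes block_hankel(w, L) defined as in the previous sketch

def predict_future_output(u_d, y_d, u_ini, y_ini, u_f):
    """Given recorded data (u_d, y_d) and an initial trajectory (u_ini, y_ini),
    return the output y_f produced by the future input u_f (least-squares sketch)."""
    T_ini, N = u_ini.shape[0], u_f.shape[0]
    L = T_ini + N
    Up, Uf = np.vsplit(block_hankel(u_d, L), [T_ini * u_d.shape[1]])
    Yp, Yf = np.vsplit(block_hankel(y_d, L), [T_ini * y_d.shape[1]])
    A = np.vstack([Up, Yp, Uf])
    b = np.concatenate([u_ini.ravel(), y_ini.ravel(), u_f.ravel()])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)  # any g reproducing the known pieces
    return (Yf @ g).reshape(N, -1)
```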
|
i
|
6e563620-b81b-4bdb-9e6f-04e189ce47fb
|
A similar question has been asked and answered in [1]}, [2]}. However, those results are restricted to the case where \(D\) is invertible. This work does not assume the invertibility of \(D\) ; instead, the notion of \(L\) -delay invertibility is invoked from the literature [3]}.
|
i
|
8813e591-212e-4487-89e9-93c62d44e0b9
|
This paper is organized as follows. Section II provides a brief review of the invertibility of discrete-time LTI systems. Section III presents the main results. An application to disturbance observers is given in Section IV.
Section V concludes the paper.
|
i
|
c443513f-4c0d-4d9b-89e0-a408fda59ab8
|
A data-based construction of inverse dynamics has been developed for discrete-time LTI systems.
The notion of the \(L\) -delay inverse is invoked from the literature and combined with the system description method of the behavioral approach.
The outcome is a data-based representation of the inverse dynamics, which is similar to that of LTI systems but differs in that the input is estimated with some delay.
The result is applied to build disturbance observers from no system information other than collected input and output data satisfying a persistency of excitation condition.
Applications and extensions of this result appear to be wide-ranging.
|
d
|
1e71d3ad-d212-47c1-a937-aa45496e4df6
|
We study the evaluation of SQL join queries
in a parallel model of computation that we show to be extremely well-suited
for this task despite the fact that it was designed for a different purpose
and it has not been previously employed in this setting.
We are referring to the vertex-centric flavor [1]}
of Valiant's bulk-synchronous parallel (BSP) model of computation [2]},
originally designed for processing analytic tasks over data modeled as a graph.
|
i
|
73d027a3-bb64-4ade-9d47-ef32372749fb
|
Our solution comprises (i) a graph encoding of relational instances which we call
the Tuple-Attribute Graph (TAG), and (ii) an evaluation algorithm specified as a vertex-centric program running over TAG inputs.
The evaluation is centered around a novel join algorithm
we call TAG-join.
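The exact TAG encoding is defined in the paper; as a purely illustrative reading of the idea (not the authors' definition), the following Python sketch connects a vertex per tuple to a vertex per attribute value, so that tuples sharing a join value end up two hops apart and joins can be evaluated by vertex-centric traversal.

```python
from collections import defaultdict

def build_tag(relations):
    """relations: {rel_name: [ {attr: value, ...}, ... ]} -> undirected adjacency list.
    Tuple vertices are (rel_name, row_index); attribute vertices are (attr, value)."""
    adj = defaultdict(set)
    for rel, tuples in relations.items():
        for i, t in enumerate(tuples):
            tuple_v = (rel, i)
            for attr, val in t.items():
                attr_v = (attr, val)  # shared across all tuples carrying the same value
                adj[tuple_v].add(attr_v)
                adj[attr_v].add(tuple_v)
    return adj

# Toy example: the R-tuple and the S-tuple meet at the shared attribute vertex ("k", 1),
# so the join R.k = S.k becomes a two-hop traversal between the two tuple vertices.
tag = build_tag({"R": [{"k": 1, "a": "x"}], "S": [{"k": 1, "b": "y"}]})
```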
|
i
|
08f52043-0b0a-48e8-b6bf-3ceab6df6a2a
|
On the theoretical front, we show that TAG-join's
communication and computation complexities are competitive
with those of the best-known parallel join algorithms [1]}, [2]}, [3]}, [4]}, [5]}, [6]} while avoiding
the relation reshuffling these algorithms require (for re-sorting or re-hashing)
between individual join operations.
TAG-join adapts techniques from the best sequential
join algorithms (based on worst-case optimal bounds
[7]}, [8]}, [9]}
and on generalized hypertree decompositions [10]}, [11]}),
matching their computation complexity as well.
|
i
|
59e99244-c06c-428c-a0d3-b81127d87e5a
|
On the practical front, we note that our vertex-centric SQL evaluation scheme applies both to intra-server thread parallelism and to distributed cluster parallelism.
The focus in this work is to tune and evaluate how our approach exploits thread parallelism
in the "comfort zone" of RDBMSs:
running the benchmarks they are traditionally tuned for,
on a multi-threaded server with large RAM and SSD memory holding all
working set data in warm runs.
|
i
|
ebaf5e87-39df-4ec1-bb20-f7a5aa77c55c
|
We note that
the benefit of recent developments in both parallel and
sequential join technology has only been
shown in settings beyond the RDBMS comfort zone.
The parallel join algorithms target scenarios of clusters with numerous processors,
while engines based on worst-case optimal algorithms tend to be outperformed
by commercial RDBMSs operating in their comfort zone [1]}, [2]}, [3]}, [4]}.
The benefit of worst-case optimal algorithms kicks in on queries where intermediate results are much larger than
the input tables. This is not the case with the primary-foreign key joins that are prevalent in OLTP and OLAP workloads, since the cardinality of \(R \bowtie _{R.FK = S.PK} S\) is upper bounded by that of \(R\) (every \(R\) -tuple joins with at most one \(S\) -tuple).
|
i
|
e6f98994-8e86-48e8-ae50-210a01771c6f
|
TAG-join proves particularly well suited
to data warehousing scenarios (snowflake schemas, primary-foreign key joins).
Our experiments show competitive performance
on the TPC-H [1]} and across-the-board dominance on the TPC-DS [2]} benchmark.
|
i
|
5f36799b-c4b7-4db0-9d5c-175a03e198a5
|
A bonus of our approach is its applicability on top of
vertex-centric platforms without having to change their internals.
There are many exemplars
in circulation, including open-source [1]}, [2]}, [3]}, [4]}
and commercial [5]}, [6]}.
We chose the free version of the TigerGraph engine [6]}, [8]} for our evaluation
due to its high performance.
|
i
|
c06c3fd5-ebea-4b27-80b7-e4275703d9c2
|
Our work uncovers a synergistic coupling between the TAG representation of relational databases and vertex-centric parallelism that went undiscovered so far because,
despite abundant prior work on querying graphs on native
relational backends [1]}, [2]}, [3]}, [4]},
there were no attempts to query relations on native graph backends.
|
i
|
0fd6fb94-ef1d-4167-8b21-db5050917426
|
Our vertex-centric SQL evaluation scheme applies both to intra-server thread-based parallelism and to distributed cluster parallelism.
The bulk of our experiments (Sections REF , REF ,
REF , REF )
evaluates how our approach enables thread parallelism
in the comfort zone of high-end RDBMSs: running the benchmarks they are traditionally
tuned for, on a multi-threaded server with large RAM and SSD memory holding
all working set data in warm runs.
We also carry out preliminary experiments evaluating the ability to exploit
parallelism in a distributed cluster, where we compare our approach against
Spark SQL (Section REF ).
We detail our performance comparisons below but first we summarize the experimental results.
|
m
|
60f38b0e-38a5-4512-94ce-b764074f74ab
|
Figure REF shows the aggregate runtimes (i.e. summed over all queries)
for the TPC-H and TPC-DS query workloads in single-server mode.
For each benchmark we performed three sets of experiments with varying data sizes.
In aggregate, the TAG-join approach outperforms all relational systems (5x-30x speedup) on TPC-DS queries.
On TPC-H queries, it is much faster than PostgreSQL and Spark SQL
and competitive with all others except for
RDBMS-X
IM (whose speedup does not exceed 1.6x).
As the drill-down into our measurements shows, TAG-join excels particularly
at computing local-aggregation queries and, regardless of aggregation style,
at PK-FK join queries and queries with selective joins,
outperforming even RDBMS-X IM on these query classes.
|
m
|
f43b2d0b-7722-48d2-af17-bef5c7c8c878
|
The queries are evaluated on datasets obtained with the benchmark generators,
at scale factors 30 (30GB), 50 (50GB) and 75 (75GB).
We stopped at SF-75 to make sure that the input database can be cached without approaching the main memory limit of the machine.
Each dataset is supplied with primary and foreign key indexes. Each query is executed 11 times, the first run to warm up the cache and the remaining 10 runs to compute the average runtime. We impose a timeout of 30 minutes per execution.
<FIGURE>
|
m
|
e5ac69f9-d18a-45b7-a00b-678ef5661c32
|
We measured memory usage during workload execution with warm caches at one-second intervals and reported the peak usage.
We read this information from the /proc file system, which stores information about all currently running processes, including their memory usage.
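As an illustration (not the exact scripts used in our experiments), the following Python sketch samples a process's resident set size from /proc once per second and keeps the peak, mirroring the measurement procedure just described.

```python
import time

def peak_rss_kb(pid, duration_s, interval_s=1.0):
    """Poll /proc/<pid>/status every interval_s seconds and return the peak VmRSS in kB."""
    peak = 0
    deadline = time.time() + duration_s
    while time.time() < deadline:
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        peak = max(peak, int(line.split()[1]))  # VmRSS is reported in kB
                        break
        except FileNotFoundError:
            break  # the process has exited
        time.sleep(interval_s)
    return peak
```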
|
m
|
536e2977-57fa-4579-82de-0c0b1ef6cfb1
|
With automatic memory management enabled, RDBMS-X allocates a shared memory area through an in-memory file system. The shared memory area includes the buffer pool and the in-memory column store.
PostgreSQL's buffer pool (shared_buffers) is part of its shared memory area as well.
Thus, in order to capture the full memory usage for RDBMS-X and PostgreSQL we need to take into account the amount of shared memory (i.e., the buffer pool) used during query execution.
|
m
|
41aa7776-d2ff-45ec-aefb-d4d27e1702a0
|
Results are summarized in Table REF for TPC-H and TPC-DS queries.
For RDBMS-Y the numbers are shown only for the clustered PK, being the same for the non-clustered PK.
We only show results for SF-75, but the numbers are proportional for SF-30 and SF-50.
Notice that TAG_tg's memory performance is similar to that of RDBMS-Y and the RDBMS-X row store.
RDBMS-X IM does better, but not by a game-changing margin:
9.4% on TPC-DS (where TAG_tg is faster though) and a negligible 2.6% on TPC-H.
|
r
|
1e379a24-18d8-450f-9704-f4a29363a78d
|
Figure REF shows the aggregate runtimes of TPC-H and TPC-DS queries. On TPC-H queries TAG-join is in aggregate 2x faster than Spark SQL, and 1.46x faster on TPC-DS queries. Similarly to the centralized (single-machine) experiments, the best performance is observed on queries without aggregation, with local aggregation, and with correlated subqueries.
|
r
|
83b3daac-c556-4f7c-ac12-696930c183f1
|
On TPC-H queries, TAG-join is faster than Spark SQL on 17 queries and competitive on 3 queries (out of 22 queries in total). For example, on LA queries such as q3, q4, q5 and q10 the speedup ranges from 1.6x to 4x. The biggest speedup over Spark SQL, 17x, is observed on q17, which contains a correlated subquery.
Most queries with GA or scalar GA perform well using TAG-join,
except for q6 and q13 where Spark SQL is faster by 1.4-2.5x. Individual runtimes of all TPC-H queries are shown in Table REF .
|
r
|
5e83138a-7daf-41f8-8c13-69f3ccb0a262
|
Out of 84 TPC-DS queries, TAG-join either is competitive with or outperforms Spark SQL on 64 queries.
On queries without aggregation the speedup is 3.4-5.5x, while on queries with LA TAG-join achieves up to 7.6x speedup.
TAG-join performs well on most of the queries with either GA or single GA.
Spark SQL is only faster on 20 queries, all of which compute either GA or single GA. We observed the same in the single-machine setting. In order to compute the final result, all active vertices need to write into a single global accumulator, and with many active vertices this can significantly degrade performance, since no parallelism is gained.
Individual runtimes of all TPC-DS queries are shown in Table REF .
|
r
|
1ac0702d-0361-4d03-95bd-93b54b8be82c
|
We track network usage during query execution on each machine in the cluster using the sar tool, recording the total number of bytes received and transmitted during execution of all queries in each benchmark.
Figure REF shows the total incoming traffic, i.e., incoming traffic summed over all machines in the cluster. We only report incoming traffic, as it coincides with the total outgoing traffic.
Spark SQL incurs 9x more traffic on the TPC-H benchmark and 4x more traffic on the TPC-DS benchmark. Spark uses broadcast or shuffle joins, which require replicating data over many partitions and thus generate more network traffic.
|
r
|
948e7223-77fc-4a5c-87ef-5884e89b7233
|
We have shown that the TAG encoding and our TAG-join algorithm
combine to unlock the potential of vertex-centric SQL evaluation to exploit
both intra- and inter-machine parallelism.
By running full TPC SQL queries we have proven that our vertex-centric approach is
compatible with executing RA operations beyond joins.
The observed performance constitutes very promising evidence for the relevance of vertex-centric approaches to SQL evaluation.
|
d
|
1f1d1603-a499-4311-b981-a71173deb2ce
|
From the SQL user’s perspective, our experiments show that in single-server data warehousing settings, vertex-centric evaluation can clearly outperform even leading
commercial engines like RDBMS-X
IM. In TPC-H workloads, comparison to RDBMS-X
IM
depends on the kind of aggregation performed, while our approach is competitive with
or superior to the other relational engines. In a distributed cluster,
our TAG-join implementation outperforms Spark SQL on both TPC benchmarks.
|
d
|
05732635-9083-4944-8281-2af3d2dfd198
|
Our main focus has been on join evaluation and we have only scratched the surface of inter-operator optimizations,
confining ourselves to those inspired by the relational setting
(like pushing selection, projection and aggregations before joins).
We plan to explore optimizations specific
to the vertex-centric model.
|
d
|
b810bbc3-d981-4607-89af-a5cf64d34905
|
If the value domain is continuous and the database is constantly being updated,
the TAG encoding would prescribe creating a new attribute vertex for virtually
each incoming value, which is impractical. Applying our vertex-centric paradigm to this
scenario is an open problem which constitutes an appealing avenue for future work.
|
d
|
29c7523a-db78-4582-8866-f82d861d263a
|
Reinforcement Learning (RL) [1]} is a powerful optimisation method used for complex problems. In RL, an agent learns to perform a task (or set of tasks) on the basis of how it has performed in previous steps. The agent typically gets a reward for moving closer to the goal or the optimised value, and in some cases a punishment for deviating from its intended learning task. Reinforcement learning is, in many ways, inspired by the biological learning that generally happens in mammals; e.g., children learn a language by observing their environment, and if they are able to mimic it well, they are rewarded with something in appreciation. Similar behaviour is also observed in animals, such as dogs, which are given treats on successful completion of a task, say fetching a stick. The mammalian brain then tries to rewire itself so that it can perform the actions that lead to successful completion of tasks and give it some sort of short- or long-term reward.
|
i
|
fc093770-21e8-48a6-936e-46e5ec57f736
|
A few algorithms in RL are also directly inspired by neuroscience and behavioural psychology. Temporal Difference (TD) learning, for example, has been rigorously studied in the mammalian brain, and the TD loss function is known to mirror the way dopamine neurons in the brain spike when given a reward [1]} [2]} [3]} [4]} [5]}. Schultz et al. [6]} reported that in the case of a monkey rewarded with juice, the dopamine level shot up when the reward was not expected, reflecting a difference between expected and actual rewards, as in the TD loss function. Over time, this firing activity back-propagated to the earliest reliable stimuli, and once the monkey was fully trained, the firing activity disappeared and instead declined when the expected reward was not produced. The field has not just benefitted unidirectionally: results from TD have also been used in the study of schizophrenia and of the effects of dopamine manipulation on learning [7]}.
|
i
|
f9d06241-ad68-42a3-b2e9-228f82b7dd90
|
The community has always been interested in developing more sophisticated algorithms and applying them to real-life tasks, and RL is no exception. With the practical viability of deep learning, there has been significant progress in training RL algorithms with deep learning and then applying them to solve problems with human-level accuracy. This has lately been demonstrated by the use of RL algorithms to train agents for playing Atari games, where they have surpassed human accuracy on a wide array of games without significant changes in strategy [1]}. Board games are not left behind in this feat: shortly after Atari games, RL was used to solve one of the most complex board games and beat the world champion [2]}.
|
i
|
741d0ca4-fa40-40a9-ad5e-40a37f758bef
|
Traditional approaches to board games have failed on larger boards, given large state spaces, complex optimal value functions that make it infeasible for the agent to learn using self-play, and underdeveloped algorithms with slower or skewed learning curves. The problem becomes even more complex as soon as multi-agent strategies come into play. Cooperative versus independent multi-agent strategies have their own advantages and disadvantages [1]}. Cooperative agents have shown improvements in scores at the cost of speed; independent agents have shown the reverse. While giving both agents access to each other's state and action space is an acceptable workaround, such a setting is not possible in some cases, such as card games and certain kinds of board games.
|
i
|
99bce7f1-fe89-4dfd-b042-25980b5275d6
|
In our paper we demonstrate the use of popular RL algorithms in playing the Royal Game of Ur. The use of RL algorithms to solve popular board games is not new [1]} [2]} [3]} [4]} [5]}, but the use of RL for solving Ur has not been attempted yet. The Game of Ur is known to be a fairly complex two-player board game, with a variable number of states and pawns. A complete background on Ur is given in section REF . We compare the performance of on-policy Monte Carlo, Q-learning and Expected Sarsa at playing the Game of Ur in an independent multi-agent setting. We compare these RL algorithms' application to Ur implemented in a simulator similar to OpenAI's Gym [6]}. For the implementation, we create our own simulator from scratch (see section REF ), with similar functions to those implemented in [6]}. Our goal is to test the performance of these algorithms on the simulator, and for the agents to be able to achieve human-level strategies. The algorithms are not provided any game-specific information or hand-designed features, and are not privy to the internal state of the simulator. Through the simulator, the algorithms are given only the state space and the possible actions to take, along with the information they already have from their previous actions.
|
i
|
245e1e06-0da3-4316-9f1d-4869b171b017
|
We formulate the problem as a multiagent MDP, where both agents compete against each other to learn [1]}. The reward function is defined in (REF ). For our experiments, we consider only two dice, since every board position is reachable with this configuration of dice. We consider 4 pieces per player in order to train the agents quickly with the limited computational resources at hand. Another reason to consider only 4 pieces per player is that this is the minimum number of pieces required for testing a popular strategic move, represented in figure REF . Our action space consists of forward and null moves, as described in (REF ).
|
m
|
bb71af67-50c1-4433-be01-8c4dff2a9ea4
|
For our experiments, we used an \(\epsilon \) -greedy approach with an epsilon value of \(0.1\) , which means that the agent explores with a probability of \(0.1\) and takes greedy actions with a probability of \(0.9\) . We trained our agents using Q-learning, Expected Sarsa, and on-policy Monte Carlo, since we thought it would be interesting to compare the performance of agents trained using episodic learning algorithms such as on-policy Monte Carlo against popular TD-learning-based algorithms such as tabular Q-learning and Expected Sarsa. Our agents are trained on 100K episodes for each of the algorithms, and then tested on 100 gameplays against an agent following a stochastic policy with equiprobable actions.
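For concreteness, here is a minimal tabular sketch of the epsilon-greedy action selection and the Q-learning and Expected Sarsa updates; the step size and discount values are illustrative placeholders, not the settings used in our experiments.

```python
import random
from collections import defaultdict

EPSILON = 0.1            # exploration probability, as described above
ALPHA, GAMMA = 0.1, 1.0  # illustrative step size and discount (not specified in the text)
Q = defaultdict(float)   # tabular action-value estimates, keyed by (state, action)

def epsilon_greedy(state, actions):
    """Explore with probability EPSILON, otherwise act greedily w.r.t. Q."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_learning_update(s, a, r, s_next, next_actions):
    """Q-learning bootstraps on the value of the best next action."""
    best_next = max((Q[(s_next, a2)] for a2 in next_actions), default=0.0)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def expected_sarsa_update(s, a, r, s_next, next_actions):
    """Expected Sarsa bootstraps on the expectation under the epsilon-greedy policy."""
    if next_actions:
        greedy = max(next_actions, key=lambda a2: Q[(s_next, a2)])
        probs = {a2: EPSILON / len(next_actions) for a2 in next_actions}
        probs[greedy] += 1.0 - EPSILON
        expected = sum(p * Q[(s_next, a2)] for a2, p in probs.items())
    else:
        expected = 0.0
    Q[(s, a)] += ALPHA * (r + GAMMA * expected - Q[(s, a)])
```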
|
m
|
1427d2fc-795f-489c-982d-7a3b7b005dde
|
Since the state space of our environment is very large, to keep track of how the agent learns we record the change in the value function for the state ((3, (('a', 3),)), (3, (('c', 3),)), 1), given that this state occurs frequently over the training period of all agents. We also keep track of the time steps to win for each agent during training, to observe whether the agent is learning the strategic moves needed to finish early.
|
m
|
dfe8874e-10a0-485f-b342-121af5789bd5
|
We also tested a special board position with 4 pieces (displayed in figure REF ), wherein we tested which piece the agent decides to move based on the strategy it should learn. We show both safe and unsafe moves when the green piece enters the war zone in figures REF and REF .
|
m
|
85225a1d-fba9-412d-a18e-8ae9b7c6a66e
|
We show the results of our testing over 100 gameplays, after training 3 separate agents using Monte Carlo, Q-learning and Expected Sarsa, summarized in Table REF . Among our algorithms, Q-learning wins 60 out of 100 games, while Monte Carlo and Expected Sarsa win 55 and 54 games respectively. This result is not merely random, and demonstrates an agent learning to play using popular strategic moves, as described below and shown in the plots.
<FIGURE>
|
r
|
6258be52-c0fc-4d71-9050-68837b3143f6
|
We demonstrate the learning of our agents using the time-to-finish metric, as shown in figure REF . We observe that for all 3 trained agents, the time to finish decreases over 100K episodes. The sharpest decrease is shown by Expected Sarsa, while Q-learning and Monte Carlo show similar, competing curves. The curves do show fluctuations, but the trend seems to move towards stabilization.
<FIGURE><FIGURE><FIGURE><FIGURE>
|
r
|
bb12571c-5769-4ea8-b5d5-e372ac62a4d8
|
Our value functions for Monte Carlo, Q-learning and Expected Sarsa, for the state \(((3, ((a, 3),)), (3, ((c, 3),)), 1)\) , are shown in Figures REF , REF , and REF . The value function for this state does increase for all 3 agents. For Monte Carlo it shows a sharp increase followed by a trend towards stabilization, while the plots for Q-learning and Expected Sarsa show a much smoother trend. One should not be misled into thinking that one agent is performing better than another; this only shows that there is a difference in the way they learn.
|
r
|
24428576-3941-4487-ac0d-283903e6eca7
|
We also demonstrate the strategic move that our agents learn, as shown in figure REF . Our agents are able to learn this strategic move, in which the piece on coordinate \((‘b’, 8)\) is moved to \((‘a’, 8)\) . This is an important move given the agent’s gameplay when at the intersection of war and safe zones.
|
r
|
be959b6b-2190-49bc-b16a-144667f1cb58
|
Testing our agents trained with different methods shows promising results. The outcomes were not always the smoothest, and for many behaviours we cannot conclude with certainty why an agent behaves the way it does. We attribute this to the limited computational resources available for training our agents on such complex and large state spaces. We believe that our agents could perform much better when trained with more episodes and better computational resources.
<FIGURE>
|
d
|
e51339ed-ba49-4f1b-8b73-ebf4a7691b24
|
We chose to show the value function for the state \(((3, ((a, 3),)), (3, ((c, 3),)), 1)\) , given that it is a prime state and occurs very frequently (a comparison of all three methods together is shown in figure REF ). The disparity in smoothness, and the difference in the values of the value functions for this state across our plots, can be attributed to the fact that Monte Carlo takes a full episode to learn, while TD methods do not. The step updates in TD methods are biased by the initial conditions of the learning parameters: the bootstrapping process updates a function or lookup value \(Q(s,a)\) towards a successor value \(Q(s^{\prime },a^{\prime })\) using whatever the current estimates are, and at the very start of learning these estimates contain no information from any real rewards or state transitions. If the agent is learning as it should, this bias reduces asymptotically over multiple iterations, but it is known to cause problems, especially for off-policy methods. Monte Carlo methods, on the other hand, do not suffer from this bias, as each update is made after the entire episode using a true sample of \(Q(s, a)\) . However, Monte Carlo methods can suffer from high variance, which means more samples are required to achieve the same degree of learning compared to TD methods. A middle ground between these two problems could be achieved by using TD(\(\lambda \) ).
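To contrast this with the bootstrapped updates sketched earlier, here is an illustrative first-visit Monte Carlo update (names and data structures are hypothetical): the whole episode is played out, and each first-visited state-action pair is pulled toward the average of the true observed returns, avoiding the bootstrapping bias at the cost of higher variance.

```python
def first_visit_mc_update(episode, Q, returns, gamma=1.0):
    """episode: list of (state, action, reward) triples from one complete game.
    Q: dict of action values; returns: defaultdict(list) of observed returns per (s, a)."""
    # Backward pass: G_t is the full return observed from time step t onwards.
    G, G_t = 0.0, [0.0] * len(episode)
    for t in reversed(range(len(episode))):
        G = episode[t][2] + gamma * G
        G_t[t] = G
    # Forward pass: update each (state, action) pair only at its first visit.
    seen = set()
    for t, (s, a, _) in enumerate(episode):
        if (s, a) in seen:
            continue
        seen.add((s, a))
        returns[(s, a)].append(G_t[t])
        Q[(s, a)] = sum(returns[(s, a)]) / len(returns[(s, a)])
```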
|
d
|
677dc6a0-9f9e-4586-93d7-6d50068970b1
|
Our agent learns to move the piece on \((‘b’, 8)\) , inside the war zone, to coordinate \((‘a’, 8)\) inside the safe zone. We believe this is an important strategic move to learn: the piece at \((‘b’, 8)\) will reach the end position in two steps, whereas if the opponent's piece at position \((‘b’, 5)\) eliminates it, the piece would have to restart. So by moving the piece to \((‘a’, 8)\) , the agent not only moved it closer to the winning state but also saved it from being eliminated.
|
d
|
3a51a051-1401-4562-a341-45d2e5804ec5
|
In this report, we compared the performance of 3 agents, trained using entirely different methods, namely Monte Carlo, Q-learning and Expected Sarsa, at playing the ancient strategic board game, the Royal Game of Ur. The state space of our game is complex and large, but our agents show promising results at playing the game and learning important strategic moves. Although it is hard to conclude which algorithm performs better overall when trained with limited resources, Expected Sarsa showed promising results in terms of fastest learning.
|
d
|
59c269cd-275a-4be5-9dba-30db85dfa8f8
|
In future, we plan to run our agents with the given algorithms and their variants for more than 1 million episodes, as is common practice in the community when testing on board games, as we speculate that this would allow our agents to experience more states and therefore learn better policies. We also plan to train our agents using sophisticated deep RL methods such as DQN and Double DQN, to see whether the agents show significant differences in performance when trained with those.
|
d
|
c80b9563-a13b-403e-ac4a-b2c5ed96db44
|
Rapid progress in NLP has resulted in systems obtaining apparently super-human performance on popular benchmarks such as GLUE [1]}, SQuAD [2]}, and SNLI [3]}. Dynabench [4]} proposes an alternative approach to benchmarking: a dynamic benchmark wherein a human adversary creates examples that can “fool” a state-of-the-art model but not a human language user.
The idea is that, by generating and compiling examples that fool a particular system, the community can gain a better idea of that system's actual strengths and weaknesses, as well as ideas and data for iteratively improving it.
|
i
|
09253c0e-7547-4cb2-8d9c-a24449b93044
|
There is no straightforward recipe, however, for generating successful adversarial examples.
To contribute to that knowledge base, this paper describes the strategy used by team “longhorns” in Task 1 of The First Workshop on Dynamic Adversarial Data Collection (DADC), which was on Extractive Question Answering (answering a question about a passage by pointing to a particular span of text within that passage).https://dadcworkshop.github.io/shared-task/
We focus not only on describing the details of our strategy, but also on our process for approaching the task. At the time of this paper submission, pending expert validation of the results, our team ranked first in the competition, obtaining 62% Model Error Rate (MER).
|
i
|
1ec62685-1207-43e4-bb69-2170be1b5192
|
Our approach towards creating adversarial examples was designed to be systematic, analytical, and draw on linguistically informed ideas.
We first compiled a list of linguistically inspired “attack strategies” and used it to create adversarial examples in a systematic manner.
We then analyzed some existing biases of the model-in-the-loop and its performance on a variety of different attacks.
We used this piloting phase to select the best performing attacks for the official submission.
|
i
|
10c2f49b-14c8-4962-a767-9e5bfc407eac
|
Based on the approaches that were most successful both in our pilot studies and in our official submission, we posit that the following broad areas should be of particular interest for theoretically motivated adversarial attacks on contemporary NLP systems, as evidenced by their strong performance on our target task:
|
i
|
e9c550c5-b6c3-4639-ae9b-1defba507bee
|
Taking advantage of models' strong priors.
The model was proficient at identifying the correct kind of named entity being asked for (e.g., a person for a "who" question, a place for a "where" question), but was biased to give answers which were salient (either topically or because they appeared first [1]}) or which had high lexical overlap with the question. Thus, picking a distractor with the same entity type as the target answer (e.g., another person mentioned in the text when the question was a “who" question) was often effective. This result is broadly consistent with observations that modern NLP systems can perform well in the general case but can be biased towards frequency-based priors [2]} that mean they are sometimes “right for the wrong reasons” [3]}.
Using language that is linguistically taxing for humans (and machines) to process.
Psycholinguists who study human language processing often study constructions that are grammatical but difficult for humans to process in real time, such as garden path sentences [4]}, [5]} and complex coreference resolution [6]}, [7]}.
We found that the model was indeed often fooled by questions that included these types of constructions.
While we did not collect any human data, the sentences that fooled the model are likely to be hard for humans as measured by tests of real-time processing difficulty (e.g., eye tracking, self-paced reading), even though humans would be able to successfully process these sentences given enough time.
Tapping into domain-general, non-linguistic reasoning.
We found that asking questions which do not require mere linguistic processing but require other kinds of reasoning (e.g., numerical reasoning, temporal reasoning, common-sense reasoning, list manipulation) were hard for the model.
This result is consistent with prior work showing that language models struggle with these kinds of reasoning tasks [8]}, [9]}, [10]} and may be more generally explained by evidence from cognitive science that these kinds of reasoning tap into cognitive processes that are distinct from linguistic processing [11]}, [12]}.
|
i
|
d7120a5b-f037-466a-b018-3a160752108c
|
Because these strategies and this general approach are broad and theoretically motivated, we believe that our methods could be used to generate adversarial examples on other Natural Language Understanding tasks besides Question Answering.
In what follows, we characterize our approach in both the pilot phase and official submission, provide our list of attack strategies,
and discuss the limitations of the task and model.
|
i
|
e945537f-d994-4b88-90c1-bf1d188ffcc2
|
A fundamental feature of language is that it is a cooperative enterprise [1]} that enables efficient communication between parties [2]}.
Therefore, in ordinary language, people typically talk about discourse-relevant entities [3]}, avoid difficult syntactic constructions [4]}, [5]}, and structure information in a way that is easy to produce and understand [6]}, [7]}.
If anything unifies all of our most successful attack strategies, it is that they eschew these principles in the context of the given task and passages.
Instead, successful attacks ask about surprising aspects of the text (e.g., by including distractors), often using complex language (e.g., garden path sentences and complex coreference resolution) and reasoning (e.g., temporal and numeric reasoning).
|
d
|
f2689d59-f716-4ae0-836f-8a4d6247b72d
|
So, in some ways, the successful attack questions are less likely to be encountered in ordinary language use [1]}, [2]}.
But another key property of human language is that it is flexible and generative, such that people can produce and understand surprising and unexpected utterances.
To that end, we think these adversarial questions are a fair target for improving systems precisely because they are linguistically unusual: human language is not just for the “average case" but can be used to express meanings that are subtle, interesting, and complicated.
|
d
|
b60629f9-dc80-4465-9e7b-14c0f5e9e37d
|
Perhaps because these questions also require humans to think creatively outside their ordinary linguistic experience, we also found that we achieved better performance when we had larger groups of people working on generating questions at once, so that there was a wider diversity of ideas.
|
d
|
a898283e-fc13-41eb-90c3-d1b4f2e831be
|
Indeed, while some questions may be less likely to appear in an “extractive question answering” dataset, they are understandable by humans and are likely to be useful for efficient communication in real-world settings. The objective behind “extractive QA” is that a machine should answer any question that a human would, given the passage. A variety of real-world tasks can be reduced to extractive QA, and in many cases the “correct” passage corresponding to the question is not known a priori. Asking questions such as “Where was X at time Y” and “What is the difference between 737-200 and 737-200C” may be less natural for a human that has access to the passage, but these are questions that someone would, for example, ask their automated assistant. Therefore, a well-functioning model needs to embrace this creativity and be able to correctly answer adversarial questions.
|
d
|
e6f3f8e8-231d-4f7d-ae85-2be37bff787f
|
Finally, the adversarial attacks that we present are not just interesting from the scientific point of view, but also have clear practical implications. Most of the attacks correspond to specific capacities of the model-in-the-loop such as coreference resolution, numerical and temporal reasoning.
The consistently high MER indicates that the model underperforms in tasks that require those capacities.
|
d
|
99974230-869b-43e9-a2d9-a5b7e7e2f5a6
|
Our approach towards creating adversarial examples allows us to implicitly evaluate the performance of the model and the quality of the data with respect to a wide variety of linguistic and reasoning categories. Overall, we found that the model-in-the-loop performs impressively well on the majority of question types. Only a small subset of the strategies could consistently obtain above 50% MER, and these strategies did not necessarily work for all questions.
For instance, questions with relatively few possible entities matching the question type meant fewer possibilities for distractors.
|
d
|
ee675fb7-f187-4f4a-aa8b-2b564ab0bb7d
|
The performance of the model is also a function of the varying difficulty of the passages. We found the majority of the passages to be short declarative texts with simple syntactic structure, few named entities, and little information. Generating and answering questions from those passages is a rather trivial task. The selection of more complex paragraphs would likely result in lower model performance and many more possibilities for creative and successful adversarial attacks.
|
d
|
117e2f19-ec23-406a-8056-eb0d2e4cd1c1
|
In this paper we presented the strategies used by team “longhorns” for Task 1 of DADC: generating high-quality adversarial examples. We obtain the best results in the competition by taking a systematic approach, using linguistic knowledge, and working in a collaborative environment.
|
d
|
899fb424-7cc4-4db4-816e-e70bec65a996
|
Our approach outperforms prior work in terms of model error rate and also provides a variety of insights. For instance, our pilot analysis covers a large number of linguistic and reasoning phenomena and explores different model biases. This facilitates a more in-depth analysis of the performance of the model. The systematic approach also gives us insight into the quality and difficulty of the data.
|
d
|
5b41b8eb-c9d3-4683-a108-32ec9a3c70a3
|
Our strategies for generating adversarial examples are not limited to extractive question answering. They can be adopted at larger scale to improve the quality of models and data on a variety of different tasks. We believe that our work opens new research directions with both scientific and practical implications.
|
d
|
62f0eaea-3548-4267-84bf-c0e224793172
|
The majority of deep learning algorithms output the weights of a neural network; in recent years, a growing body of work has investigated algorithms that instead output probability distributions over the connection weights of a network, with a number of advantages.
This is evidenced, e.g., by works
inspired by Bayesian learning (see e.g. [1]}, [2]}, [3]} among many others) or by frequentist PAC-Bayes bounds (see e.g. [4]}, [5]}, [6]}, [7]}).
In both cases, a probability distribution over neural network weights defines what can be called a Probabilistic Neural Network (PNN).
|
i
|
2b2c0527-f499-41b0-a668-4acb11191569
|
Recently, PNNs learnt by optimising PAC-Bayes bounds have shown promising results on performance guarantees, by delivering tight risk certificates (generalisation bounds) for predictive models that are competitive compared to standard empirical risk minimisation (see [1]}).
Importantly, the PNN paradigm, coupled with PAC-Bayes bounds, is an example of self-certified learning,
which proposes to use all the available data for learning a predictor and providing a reasonably tight numerical risk bound value that certifies the predictor's performance at the population level.
In this case, the risk certificates can be evaluated on a subset of the data used for training and thus do not require a held-out test set, allowing efficient use of the available data.
These principled learning and certification strategies based on PAC-Bayes bounds deserve further study to unfold their practical properties and limitations.
|
i
|
0afd11d6-2276-4018-8f8c-83c074603e66
|
PAC-Bayes bounds (pioneered by [1]}, [2]}, [3]}) are typically composed of two key quantities: i) a term that measures the empirical performance of a so-called `posterior' distribution, and ii) a term involving the divergence of the posterior to a `prior' distribution, which in most bounds is the Kullback-Leibler (KL) divergence (we refer to [4]} for a comprehensive presentation).
Thus, when using a PAC-Bayes bound as an optimisation objective, these two terms must interact to balance fitness to data with fitness to the chosen prior.
A classical assumption underlying PAC-Bayes priors is that they must be independent from the data on which the empirical term of the PAC-Bayes bound is evaluated.
This assumption is well-known in the PAC-Bayes literature (as discussed by [5]}, [6]}).
Interestingly, however, the chosen prior greatly impacts the bound via the KL term, which often amounts to the dominating contribution to the bound value (as pointed out by [7]}, [8]}).
This prominent role of the KL term has implications both for optimisation and for risk certification based on PAC-Bayes bounds: (i) the KL term effectively constrains the posterior such that it cannot move too far from the prior; (ii) the large values of the KL term when using data-independent or uninformed priors suggest that these priors may not be able to give tight risk certificates, therefore calling for data-dependent priors.
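For reference, one standard form of such a bound from the PAC-Bayes literature (the PAC-Bayes-kl bound, stated here as background rather than quoted from the cited works) makes the two ingredients explicit: with probability at least \(1-\delta \) over an i.i.d. sample \(S\) of size \(n\) , for all posteriors \(Q\) ,
\[
\mathrm {kl}\!\left(\hat{R}_S(Q)\,\Vert \,R(Q)\right) \;\le \; \frac{\mathrm {KL}(Q\Vert P) + \ln \frac{2\sqrt{n}}{\delta }}{n},
\qquad
\mathrm {kl}(q\Vert p) = q\ln \frac{q}{p} + (1-q)\ln \frac{1-q}{1-p},
\]
where \(\hat{R}_S(Q)\) is the empirical risk of the posterior \(Q\) , \(R(Q)\) its population risk, and \(P\) the data-independent prior; the KL term on the right-hand side is exactly the divergence discussed above.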
|
i
|
2d8b5bdb-3e01-43b4-87e2-17c6d7bfe22b
|
These considerations give clues on the importance of the prior distribution in PAC-Bayes bounds:
An arbitrarily chosen prior may mislead the optimisation, since the posterior is constrained to the prior by the KL term, while a prior representing a good solution for the problem at hand would give a better starting point.
Accordingly, some works on PAC-Bayes bounds for neural networks have considered ways to connect PAC-Bayes priors to the data.
In particular, the recent works in [1]} and [2]} explored PAC-Bayes priors learnt on a subset of the data, which does not overlap with the data used for computing the empirical term in the PAC-Bayes bound.
This way, these data-dependent PAC-Bayes priors are in line with the classical assumption underlying PAC-Bayes priors, while at the same time the priors have a more sensible connection to the data when compared to arbitrarily chosen ones.
|
i
|
24e097fc-b473-4609-9747-34ebbc8ca898
|
We study the relationship between the learnt PAC-Bayes prior predictive performance and the posterior risk certificate in a large set of experiments on MNIST.
We test if the validation of prior and posterior leads to better performance, even when the amount of data used for prior learning is effectively reduced.
We experiment with different trade-offs of the amount of data to learn the prior and certify the predictor, extending the results in [1]} to 5 additional datasets, which demonstrates the role of data-dependent priors on the tightness of the risk certificates.
We study the role of the number of parameters in the architecture in the risk certificate and KL term.
Finally, we compare several training objectives and regularisation strategies for learning the prior.
|
m
|
a5d11acd-7a0d-479a-9ad3-022edda9ac05
|
Table REF shows the datasets used (all except for MNIST available at OpenML.org), selected so as to represent a wide range of characteristics (small vs large, low vs high dimensional and binary vs multiclass). For all datasets except MNIST we select 20% of the data as test set (stratified with class label). For MNIST, we use the standard data partitions.
<TABLE><TABLE><TABLE><TABLE>
|
m
|
68108493-dfe1-4bd8-88c1-112844b60a93
|
In all experiments the models are compared under the same conditions, i.e. weight initialisation and optimiser (vanilla SGD with momentum). The mean parameters \(\mu _0\) of the prior are initialised randomly from a truncated centered Gaussian distribution with standard deviation set to \(1/\sqrt{n_\mathrm {in}}\) , where \(n_\mathrm {in}\) is the dimension of the inputs to a particular layer, truncating at \(\pm 2\) standard deviations.
All risk certificates are computed using the PAC-Bayes-kl inequality, as explained in Section 6 of [1]}, with \(\delta =0.025\) , \(\delta ^{\prime }=0.01\) and \(m=150\,000\) Monte Carlo model samples.
We also report the average 0-1 error of the stochastic predictor, where we randomly sample fresh model weights for each test example 10 times and compute the average 0-1 error.
Input data were standardised for all datasets.
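A minimal PyTorch sketch of this initialisation (the zero bias initialisation and the helper name are assumptions, not details taken from the text):

```python
import math
import torch.nn as nn

def init_prior_mean(layer: nn.Linear) -> None:
    """Draw weights from a centered Gaussian with std 1/sqrt(n_in), truncated at +/- 2 std."""
    std = 1.0 / math.sqrt(layer.in_features)
    nn.init.trunc_normal_(layer.weight, mean=0.0, std=std, a=-2.0 * std, b=2.0 * std)
    nn.init.zeros_(layer.bias)  # assumption: the bias initialisation is not specified above
```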
|
m
|
48057443-6a08-4353-b8ec-6db4cd1213db
|
For all experiments we performed a grid search over all hyper-parameters and selected the run with the best risk certificate. We did a grid sweep over the prior distribution scale hyper-parameter (i.e., standard deviation \(\sigma _0\) ) with values in \([0.1, 0.05, 0.04, 0.03, 0.02, 0.01]\) . For SGD with momentum we performed a grid sweep over the learning rate in \([1\mathrm {e}^{-3}, 5\mathrm {e}^{-3}, 1\mathrm {e}^{-2}]\) and the momentum in \([0.95, 0.99]\) . We also performed a grid sweep over the learning rate and momentum used for learning the prior (testing the same values as before). The dropout rate used for learning the prior was selected from \([0.01, 0.05, 0.1, 0.2]\) .
The grid sweep is done for experiment E2; in the subsequent experiments, we use the same best performing hyper-parameters from E2.
We experiment with fully connected neural networks (FCN) with 2 or 3 layers (excluding the `input layer') and 100 units per hidden layer (unless specified otherwise).
ReLU activations are used in each hidden layer.
For learning the prior we ran the training for 500 epochs (except for MNIST, for which we ran 100). Posterior training was run for 100 epochs. We use a training batch size of 250. PyTorch code will be released anonymously at the repository associated with this project: https://anonymous.4open.science/r/pacbayespriors-F355/.
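A schematic view of this sweep, with a hypothetical train_and_certify routine standing in for the actual training and certification code and omitting the separate sweep over the prior's optimiser settings:

```python
from itertools import product

def train_and_certify(**cfg):
    """Hypothetical stand-in: trains prior and posterior with cfg, returns the risk certificate."""
    raise NotImplementedError

sigma0_grid = [0.1, 0.05, 0.04, 0.03, 0.02, 0.01]
lr_grid = [1e-3, 5e-3, 1e-2]
momentum_grid = [0.95, 0.99]
dropout_grid = [0.01, 0.05, 0.1, 0.2]

best_cert, best_cfg = float("inf"), None
for sigma0, lr, momentum, dropout in product(sigma0_grid, lr_grid, momentum_grid, dropout_grid):
    cert = train_and_certify(sigma0=sigma0, lr=lr, momentum=momentum, dropout=dropout)
    if cert < best_cert:  # keep the run with the tightest risk certificate
        best_cert = cert
        best_cfg = dict(sigma0=sigma0, lr=lr, momentum=momentum, dropout=dropout)
```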
|
m
|
d84cbd01-9155-478d-b90d-8163e98d27bd
|
This work empirically studies learning PAC-Bayes priors from data. Our results show that data-dependent priors lead to consistently tight risk certificates on 6 datasets for probabilistic neural classifiers and over-parameterised networks, setting a stepping stone towards achieving self-certified learning. We compare a wide range of training objectives for learning the prior distribution, and show that regularisation of the prior is important and that Bayesian-inspired learning objectives hold potential for learning appropriate priors, in this case learning the full prior distribution as opposed to only the mean. Our results also demonstrate that data augmentation may be desirable during prior learning.
|
d
|
1c3bb786-1419-4fa5-8f62-1e654093ce15
|
Machine learning has received considerable attention over the past years due to its significant role in data analytics [1]}.
In the big data setting with decentralized information structures, advanced machine learning algorithms with robust and parallel implementations are needed to keep up with the growth of data [2]}, [3]}.
Various ensemble learning frameworks, aiming to improve the generalization performance of a learning system, have been developed over the last two decades, and many interesting ideas and theoretical works, including bagging, boosting, AdaBoost and random forests,
can be found in [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}.
Generally speaking, learning-based ensembles share some common features in system design, such as data sampling and output integration.
The basis of ensemble learning theory lies in a rational sampling implementation for building each base learner model, which may provide sound predictability through learning a subset of the whole data set.
i
|
e71e7309-ff51-43b8-a197-302eeb86441a
|
For neural network ensembles [1]}, [2]}, [3]}, [4]}, the base models are trained by the error back-propagation (BP) algorithm, and the regularizing factor used in the negatively correlated cost function can be determined by cross-validation.
Unfortunately, the BP algorithm suffers from sensitive settings of the learning rate, local minima and very slow convergence.
Therefore, it is challenging to apply the existing ensemble methods to large-scale data sets.
To overcome this problem, we employed random vector functional-link (RVFL) networks [5]}, [6]} to develop a fast decorrelated neuro-ensemble (termed DNNE) in [7]}.
From our experience, DNNE can perform well on smaller data sets [7]}, [9]}.
However, it is quite limited in dealing with large-scale data because of its high computational complexity, the scalability of numerical algorithms for the least squares solution, and hardware constraints (here mainly referring to PC memory).
Recall that physical data may come from different types of sensors, localized information sources, or potential features extracted from multiple runs of certain feature selection algorithms [10]}, [11]}, [12]}, [13]}, [14]}, [15]}.
Thus, for large-scale data analytics, it is useful and significant to develop a generalized neuro-ensemble framework with heterogeneous features.
|
i
|
a4821a70-8737-412e-99e6-ba6cc2861044
|
This paper builds on our previous work reported in [1]}, which is a specific implementation of the well-known NCL learning scheme using RVFL networks with a default scope setting for the random weights and biases.
From the theoretical statements on the universal approximation property in [2]} and our empirical results on RVFL networks in [3]}, the default scope setting (i.e., [-1, 1]) for the random weights and biases cannot ensure the modelling performance at all.
Therefore, readers should be aware of this pitfall and must be careful when making use of our code http://homepage.cs.latrobe.edu.au/dwang/html/DNNEweb/index.html. The limits of DNNE mainly come from the following aspects:
(i) the system inputs are centralized or combined with different types of features;
and (ii) the analytical method of computing the output weights becomes infeasible for large-scale data sets, which is related to the nature of the base learner model (i.e., the number of nodes at the hidden layer must be sufficiently large to achieve sound performance).
To relax these constraints and emphasize the fast building of neuro-ensembles with heterogeneous features, we generalize the classical NCL-based ensemble framework into a more general form, where a set of input features is fed into the SCN base models separately.
This work also provides a feasible solution by using two iterative methods for evaluating the output weights of the SCN ensemble (SCNE).
In addition, some analyses and discussions on the convergence of these iterative schemes are given through a demonstration of the correlations among the iterative solutions and the pseudo-inverse solution.
|
i
|
ce974f71-e251-44c6-803e-7f84d7f2beb6
|
The remainder of the paper is organized as follows:
Section 2 provides some technical supports, including the basics of the SCN model, a generalized version of the ensemble generalization error and the negative correlation learning scheme.
Section 3 describes the proposed SCNE with heterogeneous features, details two iterative learning algorithms and discusses their convergence.
Section 4 reports some experimental results on two large-scale data sets, including a robustness analysis on the system performance with respect to the regularizing factor used in NCL.
Section 5 concludes this paper with some remarks on further studies.
|
i
|
8aafcbf8-3d69-47eb-953c-39f0877dd756
|
As is common knowledge in machine learning, data preprocessing plays a crucial role before modelling.
Normalization and standardization are two widely used methods for rescaling data.
The 0-1 normalization scales all numeric variables into the range \([0,1]\) ; one possible formula is \(x_{new} = (x- x_{min} ) / (x_{max}-x_{min})\) .
The z-score standardization transforms the data to have zero mean and unit variance by the formula \(x_{new} = (x-\bar{x})/ \sigma \) , which indicates how many standard deviations an element is from the mean. However, if there were outliers in the dataset, normalization would scale the “normal” data into a very small interval. Our experiments aim to demonstrate the capability of the SCNE for large-scale datasets, so here we choose the 0-1 normalization method for data preprocessing and assume there are no outliers in the dataset. To show the data distribution, we randomly select a small batch of samples from each dataset and present them in Figs. REF and REF , respectively.
<FIGURE><FIGURE>
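The two rescaling formulas above can be written down directly; here is a small NumPy sketch (applied column-wise, which is an assumption about how the rescaling is carried out):

```python
import numpy as np

def min_max_normalize(X):
    """0-1 normalization: x_new = (x - x_min) / (x_max - x_min), per column."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

def z_score_standardize(X):
    """z-score standardization: x_new = (x - mean) / std, per column."""
    return (X - X.mean(axis=0)) / X.std(axis=0)
```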
|
m
|
4328e054-2488-442d-a477-4f805e529e24
|
All the experiments are designed, repeated and carried out following the same procedure.
Fig. REF presents the general experimental diagram.
The arrows indicate the direction of the data feeds.
The experiments are organized in two stages, training and testing, indicated by the dash-lined box in the diagram.
In the training stage, the training dataset is used to build the SCNE, and the validation data is used to adjust and refine the hyper-parameters of the ensemble, such as \(\lbrace S\rbrace \) , \(M\) , \(\lambda \) , \(L_{max}\) and \(k_{max}\) (in our experiments, the heterogeneous feature set \(\lbrace S\rbrace \) and the number of base models \(M\) are predefined).
Once all these parameters are properly estimated, the training and validation data are used together as one combined set to retrain the SCNE; this final ensemble is then tested on the testing dataset.
The testing results can be seen as a good indicator of the generalization ability of the SCNE.
<FIGURE>
|
m
|
226148a4-0d91-44f3-a875-41c5fd6298ba
|
Unlike some benchmark datasets, there are no ready-made training or testing sets for the Twitter data, which leaves us various choices for partitioning the data.
For the Twitter data, we use \(70\%\) of the total samples for training, \(15\%\) for validation and the remaining \(15\%\) for testing, denoted as “70-15-15".
The Year dataset comes with a specified testing set (almost \(10\%\) ), so we randomly split the remaining \(90\%\) of samples into two parts, \(70\%\) for training and \(20\%\) for validation, denoted as “70-20-10".
|
m
|
72b7a5ea-86ab-412c-8fcf-586c6780b950
|
Analysing more data quickly and with higher accuracy has become significant because of the vast number of real-world applications across various domains. Traditional machine learning techniques, such as neural networks with optimization-based learning algorithms, support vector machines and decision trees, can hardly be applied to large-scale datasets. Ensemble learning and its theoretical framework help improve the generalization performance of the base learner models, but they still have limitations in efficiency and scalability when dealing with large-scale data modelling problems.
|
d
|
6da79aa2-495a-46b3-90cb-3d64807a5254
|
This paper contributes to the development of a randomized neuro-ensemble with heterogeneous features, where stochastic configuration networks are employed as base learners and the well-known negative correlation learning strategy is adopted to evaluate the output weights of the SCNE model. To overcome the challenge of computing the pseudo-inverse of a huge linear equation system in the least squares method, we suggest utilizing the block Jacobi and block Gauss-Seidel iterative schemes for problem solving. Some analyses and discussions of these solutions for evaluating the output weights are given by a demonstration. Simulation results clearly indicate that it is necessary to apply the ridge regression method for building the SCN base models, so that the resulting SCNE models with the iterative schemes are consistent with the one built by the non-iterative method in terms of the correlation of the output weights.
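To make the idea concrete, the following is a hedged Python sketch of a block Jacobi iteration for a generic linear system \(A\beta = b\) partitioned into per-model blocks; it illustrates the scheme only and is not the exact formulation or notation used in the paper:

```python
import numpy as np

def block_jacobi(A, b, blocks, n_iter=100, tol=1e-8):
    """Solve A @ beta = b with a block Jacobi iteration.

    `blocks` is a list of index arrays partitioning the unknowns; in the ensemble
    setting each block would correspond to the output weights of one base model.
    """
    beta = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        beta_new = beta.copy()
        for idx in blocks:
            others = np.setdiff1d(np.arange(len(b)), idx)
            # subtract the contribution of the other blocks, then solve the diagonal block
            rhs = b[idx] - A[np.ix_(idx, others)] @ beta[others]
            beta_new[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta
```

The block Gauss-Seidel variant differs only in that each block update immediately uses the freshly computed values of the previously processed blocks (i.e., replace `beta[others]` with `beta_new[others]` in the residual computation).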
|
d
|
d4e6efdd-2927-454b-aa60-df170c4ba877
|
The reported results in the demonstration imply that the claimed speediness of the pseudo-inverse-based solution for building randomized learner models (either single or ensemble models) is valid only for smaller datasets. Indeed, its computational complexity and time cost are very high, even infeasible, for large-scale datasets. Experimental results with comparisons over two large-scale benchmark datasets show that the proposed SCNE always outperforms the DNNE. Robustness analysis of the modelling performance with respect to the regularizing factor used in NCL reveals that our proposed ensemble system performs robustly, with good potential for large-scale data analytics.
Further research on improved feature grouping methodology, robust large-scale data regression [1]} and enhanced generalization performance of the ensemble system is expected.
|
d
|
204045b0-b3a4-47bb-8fd5-89c198e138db
|
By posing clinical problems as prediction or classification tasks, researchers can train computational models on routinely available clinical data to solve clinically relevant problems. Recently, such computational tools have been shown to perform as well or better than domain experts at several important clinical tasks (see [1]}, [2]}, [3]} among many others).
|
i
|
4b6d9ae0-be84-4bf6-bdde-ac835d23afe3
|
However, the promise of artificial intelligence (AI) and big data remains largely unrealized in healthcare. This unrealized potential has many causes, including general issues in AI as well as other issues that are more specific to health and healthcare [1]}, [2]}. The PhysioNet/Computing in Cardiology (CinC) Challenges address many of these issues by posing clear problem definitions, sharing well characterized and curated databases from diverse geographical locations, and defining evaluation metrics for algorithms that capture the importance of the algorithms in a clinical setting [3]}. Jointly hosted by PhysioNet and CinC, these annual Challenges have addressed clinically interesting questions that are unsolved or not well solved for over twenty years.
|
i
|
64ec6a5f-c7df-41b4-ad82-7f3437632b8e
|
The PhysioNet/CinC Challenge 2019, hereafter described as either the Challenge or the 2019 Challenge, asked participants to design algorithms for the early prediction of sepsis from routinely available clinical data [1]}. For the Challenge, we curated electronic medical records (EMRs) for over 60,000 ICU patients from three distinct hospital systems. These records had up to 40 clinical variables for each hour of the patient's ICU stay. We also introduced a novel, time-dependent evaluation metric to assess the clinical utility of the algorithms' predictions.
|
i
|
2c7402c1-3795-41dc-8416-99b77cd97986
|
A total of 104 teams from academia and industry submitted 853 algorithms for evaluation in the Challenge, and 90 abstracts were accepted for presentation at CinC 2019. Each team was allowed to nominate one of their algorithms for evaluation on the hidden test data, resulting in 88 algorithms for early sepsis prediction (we were unable to score 16 algorithms on the full test dataset, so we do not consider them in this article). In this article, we focus on the 70 algorithms that were most promising for further analysis (we were unable to score an additional 11 algorithms on the full training data, and another 7 algorithms performed no better than an inactive method that made only negative predictions on at least one of the training sets, so we do not focus on them in this article).
|
i
|
a23cebaf-932b-404e-9cf4-f6dea57e45b3
|
These algorithms represent a diversity of approaches to early sepsis prediction.
We ranked these algorithms based on their performance on the hidden test datasets using the clinically derived evaluation metric that we developed for the Challenge. However, while some algorithms necessarily performed better than others, many lower-ranked algorithms outperformed higher-ranked ones on certain examples.
In some cases, this specialization was the direct and desired result of feature engineering or other model design decisions, but in other cases, it was an unintended consequence of the way a model was constructed and implemented (we actively sought to preserve the diversity of the Challenge algorithms by prohibiting teams from collaborating). Indeed, previous Challenges found that simple voting models were able to outperform individual models for the classification of electrocardiograms (ECGs) and phonocardiograms (PCGs) [1]}, [2]}, [3]}. This `wisdom of the crowd' applies more generally to computational approaches and clinical applications [4]}, [5]}, [6]}.
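To illustrate the kind of voting model discussed in this article, a minimal Python sketch is given below; the function names, the optional per-model weights and the 0.5 threshold are illustrative choices of ours, not the Challenge voting model itself:

```python
import numpy as np

def majority_vote(predictions, weights=None, threshold=0.5):
    """Combine binary sepsis predictions from several algorithms.

    `predictions` has shape (n_models, n_time_steps) with entries in {0, 1};
    `weights` optionally weights each model (e.g., by its training-set utility).
    """
    preds = np.asarray(predictions, dtype=float)
    if weights is None:
        weights = np.ones(preds.shape[0])
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    score = weights @ preds            # weighted fraction of positive votes
    return (score >= threshold).astype(int)

# Example: three models voting on five hourly windows
votes = [[0, 1, 1, 1, 0],
         [0, 0, 1, 1, 1],
         [0, 1, 0, 1, 1]]
print(majority_vote(votes))  # -> [0 1 1 1 1]
```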
|
i
|
c1fbe198-3c59-42ba-83b3-02daf6cdf2ac
|
In this article, we investigate the diversity of the 2019 Challenge algorithms and describe a simple voting model for the Challenge that outperforms the individual Challenge algorithms. The voting model's performance is especially important on a completely hidden test set, which allows the assessment of the ability of models to generalize to new databases [1]}, [2]}.
|
i
|
e8d49bb6-e72e-4e0c-aac0-51ebf8f37cc1
|
We show that ensemble models can outperform individual models for early sepsis predictions. Earlier analyses demonstrated the potential of voting models for clinical classification tasks [1]}, [2]}, [3]}, and this simple approach continues to demonstrate the potential of voting models for clinical prediction tasks.
|
d
|
1232f760-6b68-4875-8522-834929defd80
|
The diversity of the approaches combined in the voting model determines its potential. If the methods are highly similar, then any voting model defined from them is likely to be highly similar as well, limiting the potential for improvement. While we believe that there are opportunities to improve this simple voting model, the high concordance between the high-performing methods is responsible for a large share of the modest performance improvements of this voting model over the individual models. Moreover, while the poor generalizability of the individual models on the test set from hospital system C provides an opportunity for improvement, the high agreement between models on these data further limits that opportunity. In other words, if these models generalized poorly but in different ways, then there would be more opportunity for improvement than if they generalized poorly in the same way; unfortunately, the latter was the case.
|
d
|
1ba4e816-784d-46a5-bfb5-3260e68e4e14
|
Another opportunity for diversity lies not just in the trained models but in how the models are trained. For example, it may be beneficial to develop an end-to-end voting model that retrains the individual models and allows them to specialize on subpopulations in the data, increasing the diversity of the resulting models. In related work [1]}, we demonstrated that by clustering subpopulations, training models on each subpopulation, and then weighting the models by an individual's distance (in parameter space) from each cluster, we substantially improved the performance of an algorithm. This `semi-personalized' approach to modeling could be improved by selecting independent algorithms that perform better on a particular subpopulation. Both of these approaches introduce substantially more complexity and computational demands during training, but little extra work during the forward use of the models.
Finally, we note that the parameters of the cost function that we proposed for the 2020 PhysioNet/CinC Challenge could also be optimized at the same time as the predictors.
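A rough sketch of the `semi-personalized' idea follows: cluster the training data, fit one model per cluster, and weight each model's prediction by the inverse distance of a new patient to the corresponding cluster centre. All names are illustrative and this is not the implementation from the cited work:

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_cluster_ensemble(X, y, fit_model, n_clusters=5, seed=0):
    """Cluster the training data and train one model per subpopulation."""
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(X)
    models = [fit_model(X[km.labels_ == c], y[km.labels_ == c])
              for c in range(n_clusters)]
    return km, models

def predict_cluster_ensemble(km, models, x, eps=1e-8):
    """Weight each cluster-specific model by inverse distance to its centre."""
    d = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    preds = np.array([m.predict(x.reshape(1, -1))[0] for m in models])
    return float(w @ preds)
```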
|
d
|
b87cb988-3881-4df2-9b70-d567574d641a
|
There are two key limitations of the analysis we have provided in this work. First, there is no way to prove definitively that test set C was comparable to training and test sets A and B. Although we matched the data in a univariate statistical manner, covariates in both space and time may be significantly different, due to subtle but important differences in the patient populations and clinical practices. It may be that test set C was so significantly different (in covariate space) that, without providing data from test set C, no algorithm could be expected to generalize to that test set. (We note that some weaker algorithms did manage to do so, but those models are essentially inadequate in general.) Second, although groups were required to work without collaboration, we cannot guarantee that some groups (particularly those that were of most importance in the weighted voting) did not engage in collusion. We deem this highly unlikely, given our analysis of the submissions. However, it is likely that the proliferation of standard libraries for machine learning has created significant relationships between code bases that break the independence assumption. Future work should investigate methods to better measure independence based on code structure, not algorithm outputs.
|
d
|
0153fcd7-da73-4b58-87d7-151ebae4108b
|
Ethereum is a public, blockchain-based computing platform supporting the development of decentralized applications [1]}.
The core of such applications are programs – termed smart contracts [2]} – deployed on the blockchain.
While Ethereum nodes run a low-level virtual machine (EVM [1]}), smart contracts are usually written in a high-level, contract-oriented language, most notably Solidity [4]}.
The contract code can be executed by issuing transactions to the network, which are then processed by the participating nodes.
Results of a completed transaction are provided to the issuing user, and other interested parties observing the contract, through transaction receipts. While the blockchain is publicly available for users to inspect and replay the transactions, the contracts can communicate important state changes, including intermediate changes, by emitting events [5]}.
Events usually represent a limited abstract view of the transaction execution that is relevant for the users, and they can be read off the transaction receipts.
The common expectation is that by observing the events, the user can reconstruct the relevant parts of the current state of the contracts.
Technically, events can be viewed as special triggers with arguments that are stored in the blockchain logs.
While these logs are programmatically inaccessible from contracts, the users can easily subscribe to and observe the events with the accompanying data.
For example, a token exchange application can monitor the current state of token balances by tracking transfer events in the individual token contracts.
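As a simple illustration of this expectation, the following Python sketch replays hypothetical Transfer events, as they would appear in transaction receipt logs, to reconstruct token balances off-chain; it only models the idea and does not connect to an actual Ethereum client:

```python
from collections import defaultdict

def replay_transfers(events):
    """Reconstruct token balances from a stream of Transfer(from, to, value) events.

    Each event is a dict mimicking an entry in a transaction receipt's logs,
    e.g. {"from": "0xA...", "to": "0xB...", "value": 10}.
    """
    balances = defaultdict(int)
    for ev in events:
        balances[ev["from"]] -= ev["value"]
        balances[ev["to"]] += ev["value"]
    return dict(balances)

# Example: two observed transfers
events = [
    {"from": "0xMINT", "to": "0xAlice", "value": 100},
    {"from": "0xAlice", "to": "0xBob", "value": 40},
]
print(replay_transfers(events))
# {'0xMINT': -100, '0xAlice': 60, '0xBob': 40}
```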
|
i
|
3e7ebd3c-b88f-4001-a16a-5197aa8b6df8
|
Smart contracts, like any software, are prone to bugs and errors.
In the Ethereum context, any flaw in a contract comes with potentially devastating financial consequences, as demonstrated by various infamous examples [1]}.
While there has been great interest in applying formal methods to smart contracts [1]}, [3]}, events are usually considered merely a logging mechanism that is not relevant for functional correctness.
However, since events are a central state-change notification mechanism for users of decentralized applications, it is crucial that the users are able to understand the meaning and trust the validity of the emitted events.
In this paper, we propose a source-level approach for the formal specification and verification of Solidity contracts with the primary focus on events.
Our approach provides in-code annotations to specify events in terms of the blockchain data they track, and to declare events possibly emitted by functions.
We verify that (1) whenever tracked data changes, a corresponding event is emitted, and (2) an event can only be emitted if there was indeed a change.
Furthermore, to establish the correspondence between the abstract view provided by events and the actual execution, we allow events to be annotated with predicates (conditions) that must hold before or after the data change.
We implemented the proposed approach in the open-source solc-verify [4]}, [5]} tool (https://github.com/SRI-CSL/solidity/tree/merge) and demonstrated its applicability via various examples.
solc-verify is based on modular program verification, but we present our idea in a more general setting that can serve as a building block for alternative verification approaches.
|
i
|
bbf9ef16-ebc2-42ce-b990-8a460497170c
|
Solidity [1]} is a high-level, contract-oriented programming language supporting the rapid development of smart contracts for the Ethereum platform.
We briefly introduce Solidity by restricting our presentation to the aspects relevant for events.
An example contract (Registry) is shown in Figure REF .
Contracts are similar to classes in object-oriented programming.
A contract can define additional types, such as the Entry struct in the example, consisting of a Boolean flag and an integer data.
The persistent data stored on the blockchain can be defined with state variables.
The example contract declares a single variable entries, which is a mapping from addresses
to Entry structs.
Contracts can also define events including possible arguments.
The example declares two events, new_entry and updated_entry, to signal a new or an updated entry, respectively.
Both events take the address and the new value for the data as their arguments.
Finally, functions are defined that can be called as transactions to act on the contract state.
The example defines two functions: add and update.
The add function first checks with a require that the data corresponding to the caller address (msg.sender) is not yet set.
If the condition of require does not hold, the transaction is reverted. Otherwise, the function sets the data and the flag, and emits the new_entry event.
The update function is similar to add, with the exception that the data must already be set, and the new value should be larger than the old one (for illustrative purposes).
|
i
|
398d136c-425e-494f-a105-861bfba1f4af
|
Note that Solidity puts no restrictions on the emitted events: a faulty (or malicious) contract could emit events that do not correspond to state changes, or fail to emit an event on some change [1]}, potentially misleading users. In the case of the Registry contract, the events are emitted correctly, and the user can reproduce the changes in entries by relying solely on the emitted events and their arguments.
<FIGURE>
|
i
|
8e7da30f-3de7-41e9-84a5-cb28b3fc7129
|
solc-verify [1]} is a source-level verification tool for checking functional correctness of smart contracts.
solc-verify takes contracts written in Solidity and provides various in-code annotations to specify functional behavior (e.g., pre- and postconditions, invariants).
solc-verify translates the annotated contracts to the Boogie Intermediate Verification Language (IVL) and uses the Boogie verifier [2]} to perform modular verification by discharging verification conditions to SMT solvers.
This paper presents extensions to the specification and translation capabilities of solc-verify that enable reasoning about Solidity events.
We propose event-specific annotations (Section ) and use them to instrument the code during translation with additional conditions to be verified (Section ).
|
i
|
93e60043-d680-4c60-b2f2-61ba426437e2
|
One potential limitation of our approach is that we treat loop boundaries as checkpoints: some contracts change the data many times within a loop but only emit a single summarizing event after the loop.
This limitation could be alleviated with annotations that “allow delaying” the emit until after the loop, but we do not support this, as it leads to more complex specification and verification.
|
d
|
7b1dabcb-12d4-44c7-83d8-cd5d9c0cb665
|
Our approach is not tied to Boogie or modular verification.
The instrumentation can be performed at the Solidity level, and the correctness of the specification is reduced to checking assertions at particular points in the code.
This means that the instrumented code can be fed into any Solidity verifier that can check for assertion failures.
The event specifications are deemed correct if and only if there are no related assertion failures.
|
d
|