_id (string, 36 chars) | text (string, 200–328k chars) | label (5 classes)
---|---|---|
43674e96-1594-4c70-9106-7f621a784415 | We propose data separability, also known as the minimum (relative) distance between (the representations of) two data points, as a new criterion for investigating and understanding the trade-off between data utility and the hardness of data recovery. Recent theoretical studies show that if data points are separable in the hidden embedding space of a DNN model, the model can more easily achieve good classification accuracy [1]}. However, larger separability also makes it easier to recover inputs. Conversely, if the embeddings are non-separable or sometimes overlap with one another, recovering inputs becomes challenging, but the model may not be able to learn to achieve good performance.
Two main questions arise. First, is there an effective way to adjust the separability of data representations?
Second, are there “sweet spots” that make the data representations difficult for inversion attacks while achieving good accuracy?
| i |
4eed4309-c15a-49f5-b667-b9dafafcedbf | This paper aims to answer these two questions by learning a feature extractor that can adjust the separability of data representations embedded by a few neural network layers. Specifically, we propose adding a novel self-supervised learning-based regularization term to the standard loss function during training (a schematic form of this combined objective is sketched below).
We conduct experiments on both synthetic and benchmark datasets to demonstrate that, with suitable parameters, input data are indeed difficult to recover from such a learned neural network while data utility is maintained.
| i |
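The excerpt does not spell out the exact form of the regularizer, so the following is only a minimal PyTorch-style sketch of one plausible reading: a pairwise-distance penalty on hidden representations whose strength is \(\lambda\) and whose separability floor is \(\beta\) (parameter names borrowed from the experiments below); the actual \(\mathsf {MixCon}\) loss is defined later in the paper and may differ.

```python
import torch
import torch.nn.functional as F

def consistency_penalty(h, beta):
    """Schematic separability penalty: pairwise distances of hidden
    representations h (batch x dim) that exceed the floor beta are penalized,
    pulling representations together while allowing separation up to beta."""
    d = torch.cdist(h, h, p=2)                                  # pairwise Euclidean distances
    mask = ~torch.eye(len(h), dtype=torch.bool, device=h.device)
    return F.relu(d[mask] - beta).mean()

def total_loss(logits, labels, h, lam=0.1, beta=0.01):
    """Standard classification loss plus the schematic consistency term."""
    return F.cross_entropy(logits, labels) + lam * consistency_penalty(h, beta)
```

Under this reading, \(\beta = 0\) drives all hidden representations toward a single point, which is consistent with the chance-accuracy behaviour reported in the experiments below.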
e6a7302e-5c94-4d87-b623-2d2b0b4e1217 |
To the best of our knowledge, this is the first proposal to investigate the trade-off between data utility and data recoverability from the angle of data representation separability;
We propose a simple yet effective loss term, Consistency Loss – \(\mathsf {MixCon}\) for adjusting data separability;
We provide theoretically guided insights into our method, including a new exponential lower bound on approximately solving the network inversion problem, based on the Exponential Time Hypothesis (\(\mathsf {ETH}\) ); and
We report experimental results comparing accuracy and data inversion results with/without incorporating \(\mathsf {MixCon}\) . We show that \(\mathsf {MixCon}\) with suitable parameters makes data recovery difficult while preserving high data utility.
| i |
c06130d5-f2b8-4fe0-842a-e5fcfd3ed06a | The rest of the paper is organized as follows. We formalize our problem in Section . In Section , we present our theoretical insight and introduce the consistency loss. We demonstrate the experimental results in Section .
We defer the technical proofs and experiment details to the Appendix.
| i |
fc60fe5b-1a7f-4f55-a58e-7900afb1ed86 | To answer Q1, we visualize the data representations at the initial and final epochs in Figure REF . First, in vanilla training (Figure REF a-b), the data are dispersively distributed and their pairwise distances grow during training. The obvious difference under \(\mathsf {MixCon}\) training (Figure REF c-f) is that the data representations become increasingly clustered as training proceeds. Second, we report the data utility results of Vanilla and two “default” \(\mathsf {MixCon}\) settings – (\(\lambda = 0.1, \beta =0.01\) ) and (\(\lambda = 0.1, \beta =0\) ) – in Table REF . When \(\beta =0\) , \(\mathsf {MixCon}\) achieves only chance accuracy because it encodes all \(h(x)\) to the point (0,0) in the hidden space (Figure REF f). With \(\beta >0\) balancing the separability, \(\mathsf {MixCon}\) achieves accuracy similar to Vanilla.
<TABLE> | r |
134bc6e8-b919-4cbf-9435-17fcddcd2638 | Based on Theorem REF , we further present two strategies to retain reasonable accuracy while reducing data separability: increasing the depth or the width of \(g(z)\) , the sub-network after the layer to which \({\cal L}_{\mathrm {mixcon}}\) is applied.
In practice, we add two more fully-connected layers with 100 neurons after the 3rd layer for a “deeper” \(g(z)\) , and change the number of neurons in the 3rd layer to 2048 for a “wider” \(g(z)\) . We show the utility results in Table REF . Using a deeper or wider \(g(z)\) , \(\mathsf {MixCon}\) (\(\lambda = 0.1, \beta =0.01\) ) improves accuracy, whereas \(\mathsf {MixCon}\) (\(\lambda = 0.1, \beta =0\) ) fails, because zero data separability is not learnable no matter how \(g(z)\) changes. This confirms that \(\beta \) is an important factor in guaranteeing that the neural network remains trainable.
<TABLE> | r |
06cd9085-1274-4624-9656-880741f101d1 | To answer Q2, we evaluate the quality of data recovery using the inversion model. We use both the mean squared error (MSE) and the mean cosine similarity (MCS) between \(x\) and \(x^*\) to evaluate recovery accuracy (a sketch of these metrics is given below). We show the quantitative inversion results in Table REF . Higher MSE or lower MCS indicates a worse inversion.
Clearly, data representations from a \(\mathsf {MixCon}\) -trained network are more difficult to recover than those from the Vanilla strategy.
| r |
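For concreteness, the two recovery metrics can be computed as follows. This is a straightforward numpy sketch, with x and x_star assumed to be arrays of original and reconstructed inputs (one row per sample), not the authors' exact evaluation code.

```python
import numpy as np

def recovery_metrics(x, x_star):
    """MSE and mean cosine similarity between originals x and recoveries x_star,
    both of shape (n_samples, n_features). Higher MSE / lower MCS = worse inversion."""
    mse = np.mean((x - x_star) ** 2)
    cos = np.sum(x * x_star, axis=1) / (
        np.linalg.norm(x, axis=1) * np.linalg.norm(x_star, axis=1) + 1e-12)
    return mse, cos.mean()
```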
0634ea83-f0b5-4d95-92a6-212ba5b41fba | To answer Q3, we plot the complementary effects of \(\lambda \) and \(\beta \) in Figure REF .
Note that \(\beta \) bounds the minimal pairwise distance of data representations, and \(\lambda \) indicates the strength of the penalty that \(\mathsf {MixCon}\) places on data separability.
Namely, a larger \(\lambda \) brings a stronger \(\mathsf {MixCon}\) penalty, which strengthens the regularization of data separability and results in lower accuracy. Meanwhile, with a small \(\beta \) , \(\lambda \) does not need to be very large, as a smaller \(\beta \) already implies a smaller bound on data separability and thus lower accuracy. Hence, \(\lambda \) and \(\beta \) work together to adjust the separability of the hidden data representations, which in turn affects data utility.
| r |
fd318f84-2e47-41d9-9d29-36e866fdeeeb | To answer Q4, we evaluate the quality of inversion qualitatively and quantitatively through the model inversion attack defined in the “Test setup” paragraph. Specifically, for each private input \(x\) , we execute the inversion attack on \(h_\text{mixcon}(x)\) and \(h_\text{vanilla}(x)\) of the testing images. As shown qualitatively in Figure REF , first, the images recovered by model inversion from \(\mathsf {MixCon}\) training (e.g. for \((\lambda , \beta )\) \(\in \) \(\lbrace (1, 1\times 10^{-7})\) , \((10, 1\times 10^{-2})\) , \((100, 1\times 10^{-2}) \rbrace \) ) are visually different from the original inputs, while the images recovered from Vanilla training still look similar to the originals. Second, with the same \(\lambda \) (Figure REF columns c3-c5), the smaller \(\beta \) is, the less similar the recovered images are to the originals. Last, with the same \(\beta \) (Figure REF columns c3 and c6-c8), the larger \(\lambda \) is, the less similar the recovered images are to the originals.
<TABLE> | r |
71bb2b82-1e2d-449f-af84-298ca699df91 | Further, we quantitatively measure the inversion performance by reporting the averaged similarity between 100 pairs of images recovered by the inversion model and their original samples. We select \((\lambda ,~\beta )\) so that the accuracy of \(\mathsf {MixCon}\) matches that of Vanilla training (see Accuracy in Table REF ), and investigate whether \(\mathsf {MixCon}\) makes the inversion attack harder. The inversion results (see SSIM and PSIM in Table REF ) are reported as mean \(\pm \) std, with the worst-case (best-recovered data) similarity in parentheses for each metric. Both the qualitative and quantitative results agree with our hypothesis that 1) adding \({\cal L}_{\mathrm {mixcon}}\) in network training reduces the mean pairwise distance (separability) of the hidden data representations; and 2) smaller separability makes it more difficult to invert the original inputs. Thus, by searching over possible \((\lambda ,~\beta )\) , we are able to find a spot where data utility is reasonable but data recovery is hard, such as \((\lambda =100,~\beta =1\times 10^{-2})\) for MNIST (Figure REF ).
| r |
d33955ad-cda6-43fa-ae04-42a9a438abca | In this paper, we have proposed and studied the trade-off between data utility and data recovery from the angle of the separability of hidden data representations in deep neural networks. We propose \(\mathsf {MixCon}\) , a consistency loss term, as an effective way to adjust data separability. Our proposal is inspired by theoretical data separability results and a new exponential lower bound on approximately solving the network inversion problem, based on the Exponential Time Hypothesis (\(\mathsf {ETH}\) ).
| d |
e4a96af8-69d1-4557-aae9-0185558fcadc | We conduct two sets of experiments, using synthetic and benchmark datasets, to show the effect of adjusting data separability on accuracy and data recovery. Our theoretical insights help explain our key experimental findings: \(\mathsf {MixCon}\) can effectively adjust the separability of hidden data representations, and one can find “sweet-spot” parameters for \(\mathsf {MixCon}\) that make it difficult to recover data while maintaining data utility. Our experiments are limited to small benchmark datasets in the domain of image classification. It would be helpful to conduct experiments on large datasets in multiple domains to further study the potential of adjusting the separability of data representations to trade off between data utility and data recovery.
| d |
0b1998e5-9f4d-470f-8ab6-49000fa05251 | Federated learning (FL) refers to the paradigm of learning a common objective collaboratively with the help of many clients (e.g. mobile devices or data centers) under the coordination of a central server. Such decentralized paradigms have recently drawn significant attention in the context of machine learning and deep learning, as they provide several advantages, such as scalability to larger datasets, data locality, ownership and privacy, compared to traditional, centralized learning approaches ([1]}). In this context, a trusted server aggregates parameters optimized in a decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data ([2]}).
| i |
1ea4e259-50bd-4212-898b-e57fb8355c1a | Google has pioneered cross-device FL, where the emphasis is on edge device applications ([1]}, [2]}). For instance, Google makes use of FL in the Gboard mobile keyboard, in features on Pixel phones ([1]}), and in Android Messages ([4]}). Cross-device FL and federated data analysis are now widely applied in electronic devices, for example cross-device FL in iOS 13 and “Hey Siri” ([5]}). Nonetheless, much research has been done recently to address challenges associated with FL, including statistical challenges, the communication cost of sending large-scale matrices of parameters of deep networks ([6]}), computing constraints, and personalization ([7]}).
| i |
8c395e07-4c0e-4207-ae90-ec220f18eb5b | FL faces statistical heterogeneity due to fact that the distribution of the data across the clients is inherently non-IID ([1]}). In practice, it is an unrealistic assumption that the local data on each client is always IID ([2]}). The original goal of FL, training a single global model on the union of client datasets, becomes harder with non-IID data ([3]}). To address the non-IID challenge, ([2]})
demonstrated that FedAvg can work with certain non-IID data. Other studies put efforts to alleviate the statistical heterogeneity via performing personalization in FL ([5]}, [6]}, [7]}). Communication cost as another major bottleneck of FL were studied in the literature. In particular, ([8]}) proposed an aggregation method for FL, allowing a central server to perform computation of high-dimensional data from mobile devices. ([9]}) suggested structured and sketched updates which reduce communication cost by two orders of magnitude.
| i |
c71de4c5-e658-4184-b18f-a28ed582a674 | In FL, the edge devices carry most of the computational load, and a central server updates the model parameters using the descent directions returned by the edge devices. However, FL has three unique characteristics that make it different from parallel optimization systems in the following aspects:
| w |
4dd10f92-1bea-4ce8-ac48-b4c7f59b978e | Statistical Heterogeneity
The main motivation for clients to take part in FL is to improve their own model performance. In particular, clients who have very limited private data benefit the most from collaboratively learned models. However, for clients who have enough private data, there is little benefit in participating in FL. This issue becomes even worse in the case of statistical heterogeneity among the clients. In fact, the non-IID distribution of data across devices leads to scenarios where some participants may gain no benefit from participating in FL, since the globally shared model is less accurate than the local models they can train on their own ([1]}, [2]}).
| w |
05115082-7b7c-4f02-9840-56c11d631ba7 | Communication Efficiency
A naive implementation of the FL framework requires each client to send a full
model update back to the central server in each communication round. For large neural networks, this step becomes the bottleneck of FL due to the asymmetric nature of internet connection speeds: the uplink is typically much slower than the downlink. The US average internet speed was 55 Mbps download vs. 18.9 Mbps upload, and with some internet service providers the imbalance is worse, e.g., Xfinity provides 125 Mbps down vs. 15 Mbps up ([1]}). Hence, it is important to propose methods that require less uplink communication ([2]}, [3]}, [4]}, [5]}, [6]}, [7]}). For instance, some existing models reduce the communication cost with structured and sketched updates ([8]}); others do so by compressing the gradients ([5]}). A schematic of one such compression strategy is sketched after this paragraph.
| w |
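As an illustration of gradient/update compression for the uplink (this is a generic top-k sparsification sketch, not the specific methods cited above; the 1% ratio and function names are illustrative assumptions), a client can keep only the largest-magnitude entries of its update and transmit them as an index/value pair:

```python
import numpy as np

def sparsify_update(update, ratio=0.01):
    """Keep only the top-`ratio` fraction of entries (by magnitude) of a model
    update; the client uploads indices + values instead of the dense array,
    cutting uplink traffic roughly by a factor of 1/ratio."""
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]     # indices of the k largest entries
    return idx, flat[idx]

def densify_update(idx, values, shape):
    """Server-side reconstruction of the sparse update."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)
```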
0d52072f-3ec3-4151-9830-e1cfc146fa28 | Personalizations and Accuracy for Individual Clients
FL is explicitly designed for non-IID clients, but most prior works measure global accuracy rather than accuracy for individual clients. A global model can perform well on personalized predictions if the client's context and personal data are nicely featurized and embodied in the dataset, which is not the case for most clients ([1]}). Most techniques for personalization either affect privacy or involve two separate steps, in which a global model is built collaboratively in the first step and then personalized for each client using the client's private data in the second step. These two steps may add extra computational overhead ([2]}, [3]}, [4]}).
| w |
0e59ad58-8c23-4786-9f3b-d9faa52ec0cd | Deep neural networks (DNNs), especially deep convolutional neural networks (CNNs), have achieved significant success in various tasks and applications. However, the excellent performance of modern CNNs often comes at significant inference cost due to more stacked layers and thus more learnable parameters. The use of such high-capacity networks may be largely hindered in FL scenarios, where in addition to accuracy, computational efficiency and small network size are crucial enabling factors. For example, a ResNet-152 has more than 60 million parameters and entails more than \(20G\) floating-point operations (FLOPs) when processing an image with resolution \(224 \times 224\) ([1]}). This is unlikely to be affordable on resource-constrained platforms such as embedded edge devices ([2]}).
| m |
ca85184f-b1dc-42e0-b8c3-33ba3ec198d2 | Recent strategies for improving CNN efficiency mostly focus on compressing models and accelerating inference without significantly sacrificing accuracy. Among the adopted methods, progressive pruning stands out: a deep neural net is trained, then pruned, and then fine-tuned to restore performance ([1]}).
| m |
2590826e-27fa-4659-8ede-45e565357243 | We train the LeNet-5 architecture on the CIFAR-10/100, MNIST, and EMNIST datasets in a non-IID data distribution setting, and compare the results of our proposed algorithms against the state-of-the-art baselines. Table REF reports the average accuracy across all clients, the communication cost, and the FLOP and parameter reductions. We first focus on the accuracy comparison against the baselines. Accuracy is measured for subnetworks drawn by both unstructured pruning and hybrid pruning (a sketch of the unstructured variant is given below). For unstructured pruning, we evaluate the accuracy when \(30\%\) , \(50\%\) , and \(70\%\) of the parameters are pruned; for hybrid pruning, we report the accuracy when \(50\%\) , \(70\%\) , and \(90\%\) of the parameters are pruned.
| r |
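The paragraph above refers to drawing subnetworks by unstructured pruning. As a hedged illustration (not the authors' Sub-FedAvg code), global magnitude pruning of a given fraction of weights can be sketched in PyTorch as follows:

```python
import torch

def magnitude_prune(model, fraction=0.5):
    """Zero out the `fraction` of weights with smallest magnitude across the
    whole model and return binary masks defining the remaining subnetwork."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, fraction)     # global magnitude threshold
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                               # prune weight matrices, not biases
            masks[name] = (p.detach().abs() > threshold).float()
            p.data.mul_(masks[name])
    return masks
```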
450301f0-6ea2-4585-8c43-05939746a495 | We can clearly see the advantage of the proposed Sub-FedAvg (Hy) and Sub-FedAvg (Un) algorithms over the state-of-the-art in improving the accuracy of the clients. This is because, through iterative pruning, we let each client find its own partners (clients with similar labels) and leverage their personalized parameters to improve its accuracy. To show how effective the proposed method is, we also compare against two benchmarks, i.e., Standalone and traditional FedAvg. In the former, each client trains a model locally using only its own data, without federation; in the latter, all clients participate in the federation and the server averages the parameters as in traditional FedAvg. These results validate the effectiveness of our proposed algorithms for the objectives presented in the following remarks:
<FIGURE><FIGURE> | r |
fba36855-ce96-42c4-aadd-a54e585be686 | Remark-2: In the low-data regime, which is the case in most FL scenarios where the clients are edge devices, structured/unstructured pruning (finding subnetworks) removes the common parameters and keeps the personalized ones, letting clients take advantage of the data of other clients with similar labels. This yields around \(20\%\) accuracy improvement over the Standalone benchmark, which further motivates federation. Compared to the traditional FedAvg benchmark, our results demonstrate around \(30\%\) improvement. As is evident from the table, in the low-data non-IID setting, traditional FedAvg performs worse than Standalone, which means that it is not justified for clients to participate in federation under this setting.
| r |
18656a0d-f2da-4427-a3cd-6101b5076e9d | This phenomenon can be explained by the fact that, depending on the data (labels) of each client, our pruning strategy, which prunes the common parameters and keeps the personal ones, lets each client find its own subnetwork. Through comprehensive experiments, we noticed that clients with label overlap are likely to share the same personal parameters. In this way, we help all clients find their corresponding partners in the federation and perform parameter averaging with Sub-FedAvg. With that in mind, the poor performance of traditional FedAvg in the non-IID environment can be understood.
| r |
0d43eed1-0216-425e-861d-381bca7bd31b | Remark-3: Our proposed model also contributes to producing a more compressed model, which achieves inference speedup compared to the original model. As can be seen from the table, our proposed model reduces the number of FLOPs by up to \(2.4 \times \) compared to the state-of-the-art. This is a crucial metric, especially when the clients are edge devices with computational limitations.
| r |
2d57c694-0495-4e6d-ae06-4e54e07e9178 | A new framework for personalized federated learning with non-IID data distributions was proposed. The method works by iteratively pruning the parameters and channels of the neural networks, which removes the commonly shared parameters of the clients' models and keeps the personalized ones. In contrast to other approaches, the proposed framework is also efficient in terms of communication cost and FLOP count. We found that our method outperforms the state-of-the-art algorithms on the CIFAR-10/100, MNIST, and EMNIST benchmarks.
| d |
e840bc44-9c3f-4b36-b9e1-ce4dd72bdb83 | Classifying opinion texts at the document or sentence level is not sufficient for applications which need to identify the opinion targets. Even if the document is about one entity, many applications need to determine the opinion about each aspect of the entity. A user may express a positive opinion towards the food in a restaurant, but may have a negative opinion towards other aspects, such as the ambiance. Therefore, we need to identify the aspects and determine whether the sentiment towards each one is positive, negative or neutral. This task is called Aspect-Based Sentiment Analysis, or Feature-Based opinion mining as it was called in early work [1]}.
| i |
8dc8725b-c5ef-49be-a17b-f38a4209f285 | In this work, we address the problem of sentiment analysis in scholarly book reviews. Our objective is to extract the opinion expressed towards a book in all its reviews. Therefore, given a collection of book reviews, we aim at finding the aspects of the book and the sentiment expressed towards each aspect. This is similar to aspect-based sentiment analysis in restaurant reviews, where we have a set of aspects (food, drinks, service, ambiance, location) and each aspect involves different aspect terms or opinion target expressions, e.g., Pizza and Burger for the food aspect.
| i |
ef0ece49-9540-439f-9c6f-34520b8d7609 | While it is not difficult to establish a list of aspects in the restaurant domain, it is much less clear what the aspects of a book should be. When one thinks about the aspects of books, one may think about the quality of the book, the number of pages, the discussed topics, etc. But it is still not as obvious as in restaurant reviews, where everyone would without doubt consider food and drinks as aspects or categories. In fact, one can consider two methods to determine the aspects:
| i |
15c76db8-5226-48d7-8f69-3d9b8119afa0 |
Applying an unsupervised method capable of extracting facets or topics, such as topic modeling, in which each topic is related to an aspect. However, it is not obvious how to evaluate the quality of this method, nor how each topic relates to an aspect.
Asking domain experts to extract the aspects of books.
| i |
890445b9-1a60-4ea3-8599-d9dd5143510b | We have chosen the second method, which can be evaluated at a fine level of granularity. We therefore asked the OpenEdition editorial team (http://www.openedition.org/), which deals with book reviews in the social and human sciences, to enumerate the potential aspects that may be found in book reviews. They have listed the following aspects:
| i |
a5bfdf1d-545c-4fca-8add-ba0d86e5d283 | We also trained an L1-regularized logistic regression classifier implemented in LIBLINEAR (a minimal sketch is given below). The classifier is trained on the training dataset using the previous features, with the three polarities (positive, negative, and neutral) as labels.
| m |
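As a hedged illustration of this setup (scikit-learn wraps the LIBLINEAR solver; the toy sentences, labels and bag-of-words features below are placeholders, not the paper's actual feature pipeline):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for review sentences and their polarity labels.
sentences = ["great analysis of the topic", "poorly edited chapters", "the book has ten chapters"]
labels = ["positive", "negative", "neutral"]

# Bag-of-words term features (the paper additionally uses Z-score features).
X = CountVectorizer().fit_transform(sentences)

# L1-regularized logistic regression with the LIBLINEAR solver.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, labels)
print(clf.predict(X))
```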
3d7c8b49-d41e-40e8-85fb-a0ea4ca0ab6b | We have used 10-fold cross-validation for evaluating our system. The results are shown in Table 3. The last two lines show the results obtained by applying our system to the restaurant and laptop reviews provided by SemEval-2015 [1]}.
<TABLE> | m |
6e97396d-6346-4c40-9be4-8cbd2bce2fdd | The first line represents the experiment which exploits only the terms as features; it gives an accuracy of 70%. The remaining lines represent the same experiment when exploiting both the word and the Z-score features, each with a different Z-score threshold. We start by assigning 3 to the Z-score threshold and decrease it until -1. The best result is obtained when using terms and Z-score features with a Z threshold of -0.5. The accuracy is 79%, which seems fair enough when compared with the results produced on restaurant reviews (about 75.5%).
| m |
abd35c4b-64e2-4521-9f3c-4be50576052c | Aspect-Based Sentiment Analysis consists of several sub-tasks. Some studies have proposed separate methods for aspect detection and sentiment polarity analysis, while others have proposed joint models in order to obtain the aspects and their polarities from the same model; these joint models are generally unsupervised.
| w |
abe629aa-fc68-465b-992a-f17eed354eea | The early work on opinion target detection from online reviews presented by [1]} used association rule mining based on the Apriori algorithm [2]} to extract frequent noun phrases as product features. For polarity detection, they used two seed sets of 30 positive and negative adjectives, and WordNet was then used to find and add the synonyms of the seed words. Infrequent product features or opinion targets were handled by finding the noun related to an opinionated word.
| w |
49faa5f7-4bb0-498e-baa6-1916edcf048f | Opinion Digger [1]} also used Apriori algorithm to extract the frequent opinion targets. kNN algorithm is applied to estimate the aspect rating scaling from 1 to 5 stands for (Excellent, Good, Average, Poor, Terrible).
| w |
a9eb116c-8aa0-43cc-bc2c-2236e65b5b25 | Supervised methods normally use Conditional Random Fields (CRF) or Hidden Markov Models (HMM). [1]} applied an HMM model to extract opinion targets, using the words and their part-of-speech tags to learn the model, followed by an unsupervised algorithm for determining the opinion target polarity using the opinion word nearest to the target and taking into account polarity-reversal words (such as not).
| w |
208561f6-8bbc-456f-977b-62e27247f790 | A CRF model was used by [1]} with the following features: tokens, POS tags, syntactic dependency (whether the opinion target has a relation with the opinionated word), word distance (the distance between the word in the closest noun phrase and the opinionated word), and opinion sentences (each token in a sentence containing an opinionated expression is labeled with this feature). The input of this method also includes the opinionated expressions, which are used for predicting the opinion target polarity via dependency parsing to retrieve the target-expression pairs from the training set. We also applied a CRF model with different features [2]}, [3]}. A toy sketch of such token-level CRF features is given below.
| w |
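The following is only an illustrative sketch of token-level features for CRF-based opinion target tagging, using the sklearn-crfsuite package; the feature names, toy sentence and BIO labels are assumptions and do not reproduce the cited systems.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Toy per-token features (word, POS, neighboring POS) for target tagging."""
    word, pos = sent[i]
    return {"word.lower": word.lower(), "pos": pos,
            "prev_pos": sent[i - 1][1] if i > 0 else "BOS",
            "next_pos": sent[i + 1][1] if i < len(sent) - 1 else "EOS"}

# One toy sentence: (token, POS) pairs with BIO opinion-target labels.
sent = [("The", "DT"), ("pizza", "NN"), ("was", "VBD"), ("great", "JJ")]
X = [[token_features(sent, i) for i in range(len(sent))]]
y = [["O", "B-TARGET", "O", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```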
f77b4c37-6a29-4931-9b01-8cac0a6037b0 | Unsupervised methods based on LDA (Latent Dirichlet Allocation) have also been proposed. [1]} used LDA to identify the opinion targets and determined the number of topics by applying a clustering method; they then used a method similar to that proposed by [2]} to extract the conjunctive adjectives, but not the disjunctive ones, due to the specificity of the domain.
| w |
2d71f179-90d2-4b0e-b45c-2dcdf3be4cc4 | [1]} proposed the Joint model of Sentiment and Topic (JST), which extends the state-of-the-art topic model (LDA) by adding a sentiment layer. This model is fully unsupervised and can detect sentiment and topics simultaneously.
| w |
18534413-f4c7-429c-b50a-f973f2cc1c74 | [1]} modeled the hierarchical relation between product aspects. They defined Sentiment Ontology Tree (SOT) to formulate the knowledge of hierarchical relationships among product attributes and tackled the problem of sentiment analysis as a hierarchical classification problem. Unsupervised hierarchical aspect Sentiment model (HASM) was proposed by [2]} to discover a hierarchical structure of aspect-based sentiments from unlabeled online reviews.
| w |
c11b66ae-5917-4983-b8f8-f14afe4fc00c | We have constructed a corpus of book reviews, segmented each review into sentences, and asked three annotators to extract the opinion targets and their polarities in each sentence. We trained a CRF model for opinion target extraction and a logistic regression model for sentiment polarity. The obtained results indicate that our systems perform as well as they do on restaurant reviews.
| d |
360440b9-4232-4aa7-afda-df5752717a71 | Teleoperation is a key solution for interacting with remote objects and in dangerous environments [1]}, [2]}. Many teleoperation tasks require more than two instruments for successful task completion; e.g., in robotic laparoscopic surgery, three instruments are required, including the laparoscopic camera. Currently these tasks are executed by a team working together. However, cooperation between two people may lower operational efficiency due to communication errors [3]} or uncoordinated motion. Three-handed trimanipulation using supernumerary robotic limbs (SRLs) controlled by one operator can be a solution to the communication issues arising from multiple operators.
| i |
04bc6cb0-2225-4f6c-9ce9-ae501ac055cd | SRLs can be controlled by hands-free interfaces using the human body as a reference, including control from the head [1]}, tongue [2]}, foot [3]}, gaze [4]} and voice [5]}. Among these hands-free control strategies, foot motion is an attractive option due to the foot's ability to control multiple DoFs and its relative independence from the subject's hands and/or eyes [6]}, [7]}, [8]}. The analysis of subjects' performance while using the foot to control an SRL has only recently been considered.
| i |
920a05bc-2c7b-4304-aab5-fb8e02d28c8f | Abdi et al. [1]}, [2]} first conducted studies of trimanual control using the foot and demonstrated that a virtual “third hand” can be controlled by the foot together with the natural hands to reach towards objects. For the given reaching task, they found that users preferred to use three hands rather than only two. The capability of subjects to use two additional arms was also considered in later studies [3]}, [4]}, in which a single subject performed bipedal tele-manipulation of two robotic arms, demonstrating quad-manual operation. These studies were either limited to controlling a single axis of motion with the SRL or still at the demonstration stage, without a systematic study of the user's ability across different kinds of tasks. In particular, the studies have not yet considered tasks which require simultaneous coordination of more than two hands.
| i |
0958f085-3ae1-4bf4-ad17-e514b34faf97 | To investigate different tasks on trimanual performance, Huang et al. [1]} compared the impact of the addition of a third hand to bimanual tasks involving the hands acting independently or having coupled motion. It was found that the addition of the foot controlled hand slowed operation time, however, when the hands acted independently of one another there was otherwise no reduced performance when the hands were uncoupled. The relative performance of trimanipulation was then compared to the use of multiple operators [2]}. Three different levels of coupling between the hands were considered, where in all cases the couple outperformed a single trimanual subject. This study, however, showed evidence of an unsaturated learning effect in trimanual condition. The authors suggested that the differences in performance between the dyad and trimanipulation were the smallest for the case in which the subject limbs were continuously coupled together.
| i |
d9bff507-b93e-4b2f-9e6e-0a0fb7a825d0 | In this paper, we extend the study of [1]} to specifically account for the effect of learning on coupled trimanipulation. We i) investigated the learning of trimanipulation with an additional supernumerary limb in a three-hand coordination task; ii) compared the users' performance to dyadic operation, with one person performing bimanipulation and another person using their foot; and iii) studied the effect of an unpredictable environment on the operation. A trimanipulation teleoperation platform was built with two hand interfaces, one foot interface and a virtual reality scenario. We recruited 14 subjects for a study taking place over two days, in which we asked the subjects to perform a trimanual virtual task requiring coordination of three limbs. The results show that solo trimanual performance can reach a level similar to two-person cooperation in terms of success rate and coordination. In addition, the participants' motion appears more efficient in trimanual mode than when cooperating with a partner.
<FIGURE> | i |
f7e95527-c168-4ee0-a67c-0f9d9d70f3cf | This study explored the human capability to trimanipulate in a coordination task. The experiment was conducted with a platform including two hand interfaces, one foot interface and a virtual reality scenario. The subjects were in general able to perform the task successfully, and after the learning that took place in the trimanual group during the practice phases, no clear difference was observed between the trimanual and dyadic conditions in the test phase. This suggests that trimanipulation with coupled motions can be performed successfully by a single subject.
| d |
9490cbd4-860d-4405-bfa4-27d5ea76e542 | The motion characteristics of the subjects showed a similar trend: after the initial learning, similar performance was observed between the dyadic and trimanual conditions in the test phase. There were, however, two points of difference: the trimanual condition produced more efficient motion, while the dyads were able to complete the task more quickly. This appears to reflect the trade-off between the two strategies, in which the solo subject is able to completely plan all of their motion, allowing for more efficient overall strategies, while the dyad benefits from parallelisation of the planning, which can lead to faster planning and performance.
| d |
39f42b32-0b60-4709-a236-0d22672481ec | The rankings show a clear preference for working in a dyad and the belief that trimanual operation was inherently more difficult. This may reflect the dyad being more familiar and, as suggested by the questionnaire, requiring less mental effort, or it could indicate a strong preference for the cooperation and parallelisation that the condition offers. Despite these results, it is interesting to find that the participants did not feel that trimanual hands-foot control requires more physical effort than the dyad. Furthermore, the subjects appear to already feel capable of controlling the foot and to have had sufficient time for its operation. These results suggest that the difference in mental effort is specifically associated with the need to plan for the third limb. It is unclear whether further learning, resulting in a clearer body representation of the foot as the third hand, could make such planning more natural, or whether this points to a fundamental difference between the conditions.
| d |
a656fdbc-0839-4082-9e33-20260675ac6a | Compared to the results of [1]}, which considered the same coupled-motion task, the findings of this study suggest that with additional sessions to account for learning to perform trimanual actions, subjects can prove similarly capable to a dyad. Throughout the practice phase, learning was clearly observed in all metrics. For some metrics, such as the success rate and motion efficiency, this learning effect appears to have hit a clear ceiling by the test phase. However, for other metrics, including the time to completion and coordination, the results show that even in the test phase there is continuing evidence that learning is taking place. It is worth noting that the metrics in which the trimanual condition gives performance as good as or superior to the dyad correspond to those that show an apparent ceiling effect. This suggests that future work should investigate the effect of additional sessions to account for any potential additional learning.
| d |
07a3b5bb-c8a3-4364-b5dc-edef7e218bd7 | While the results suggest the potential for trimanual performance to outperform dyads, only an abstract version of a task requiring physical coupling has been considered. Future work will therefore focus on evaluating whether such results also hold for other kinds of potential supernumerary tasks or for more realistic case studies matching the presented scenario.
| d |
e256c02e-2d58-4dac-bb4f-94842a78c995 | Although autonomous driving systems have advanced significantly in recent years, one open problem remains: how do human drivers accomplish their task so well without high-accuracy velocity/distance/position sensors? Although a human driver produces only very rough geometric estimates, humans still outperform autonomous driving systems on multiple tasks such as risk detection, scene understanding, vehicle behavior prediction, and lane changing. For example, most human drivers can predict a potential car accident in front of them and avoid it, while that is quite a hard task for AI.
| i |
13aa0b0d-fae2-43ec-b6f2-9b2f6cd37167 | One answer to this question is that human drivers can infer the relationships among surrounding objects with great accuracy. In contrast, the current use of road semantic information is too simple to predict any semantic relationships. In complex environments like crossroads and parking lots, these relationships are even more important than high-accuracy geometry, as they indicate the previous and future status of objects. If such relationships could be captured by a model, the safety and comfort of autonomous systems could be increased significantly. And, if a failure happens, that information could be helpful for understanding the intended behavior.
| i |
3f8bee77-a9f9-41e6-9ff8-09ee08f7f2de | In recent years, a great amount of research has focused on behavior detection and prediction. However, most of it targets trajectory prediction for pedestrians or vehicles; only a few research works mention such relational data. One reason for this is that trajectory prediction can directly improve the performance of self-driving tasks, while relational data, which are highly unstructured, are hard to process with state-of-the-art deep-learning models. Also, there are only a few datasets, like the Honda Research Institute Driving Dataset (HDD) [1]} and the Road Scene Graph Dataset [2]}, which is based on the nuScenes dataset [3]}, that include relational data.
| i |
095c906b-ba2a-449b-b89b-8f6589cb5496 | In this paper, we propose RSG-Net (Road Scene Graph Net), a graph convolutional network (GCN) designed to predict relationship information from surrounding objects' proposals (a schematic of such a relationship predictor is sketched below). Here, “objects” refers to three main categories: vehicles, pedestrians, and obstacles. The model takes these objects' bird's-eye-view bounding boxes as input and proposes the corresponding road scene graph, a topological graph in which nodes refer to objects and edges refer to the type of semantic relationship between objects. Fig. REF illustrates a sample of the rich semantic relationships provided by our model; these relationships play a vital role in a driver's decision making. Furthermore, since many nearby objects, like pedestrians or vehicles, tend to move in the same mode, thus forming a group or cluster, their speed, direction, and behavior stay relatively the same. We therefore introduce the “group” concept to keep the result simple and user-friendly.
<FIGURE> | i |
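The architecture details are not given in this excerpt, so the following is only a hedged PyTorch sketch of the general idea, not RSG-Net itself: encode each object's bird's-eye-view box into a node feature, propagate information over a fully connected object graph, and classify every ordered node pair into one of several assumed relationship classes (the box dimension, hidden size and number of relation classes are illustrative).

```python
import torch
import torch.nn as nn

class RelationSketch(nn.Module):
    """Schematic relationship predictor over object proposals (not RSG-Net itself)."""
    def __init__(self, box_dim=5, hidden=64, num_relations=8):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(box_dim, hidden), nn.ReLU())
        self.message = nn.Linear(hidden, hidden)              # one round of graph propagation
        self.edge_cls = nn.Linear(2 * hidden, num_relations)

    def forward(self, boxes):                                 # boxes: (N, box_dim) bird's-eye boxes
        h = self.encode(boxes)                                # node features
        n = h.size(0)
        adj = torch.ones(n, n) / n                            # fully connected, mean aggregation
        h = torch.relu(h + adj @ self.message(h))             # aggregate neighbour messages
        src = h.unsqueeze(1).expand(n, n, -1)                 # pair up every ordered (i, j)
        dst = h.unsqueeze(0).expand(n, n, -1)
        return self.edge_cls(torch.cat([src, dst], dim=-1))   # (N, N, num_relations) logits

logits = RelationSketch()(torch.randn(6, 5))                  # 6 objects, e.g. (x, y, w, l, yaw)
```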
22760774-4d13-4bbb-8c57-9494a8fbc70b |
We propose RSG-Net (Road Scene Graph Net) to predict potential semantic relationships from object proposals and output a road scene graph for relationship prediction.
Performance evaluations of RSG-Net using our Road Scene Graph Dataset [1]}.
A benchmark model for our Road Scene Graph Dataset that generates graphs from scratch.
| i |
0d99262c-1554-454d-9e3f-cd38f94e4745 | This paper is structured as follows: Section presents state-of-the-art works on drive scene understanding, relationship prediction and related datasets. Section describes in detail our Road Scene Graph Net method, the model and evaluation metrics, while Section presents the results of different evaluation experiments. Finally, this paper is concluded in Section .
| i |
d835e665-7fb4-4c4d-acdc-82e473cf25bd | In this paper, we proposed RSG-Net, a graph convolutional network that models fundamental semantic relationships between vehicles, pedestrians, and other objects, and arranges them into a graph structure. Experiments conducted on the Road Scene Graph Dataset indicate that our model can capture and predict semantic relational data. In future work, this road scene graph could be helpful for multiple tasks such as potential risk detection and road scene captioning. Besides, the performance of our current model could be further optimized given the rapid theoretical progress on graph networks.
| d |
0db09a0e-cdec-435e-9610-1a5bf3ed13c0 | Single-channel speech enhancement is often carried out in the time-frequency domain, where the signals are represented by their time-varying frequency content. To obtain the time-frequency representation, one applies a transformation such as the short-time Fourier transform (STFT), which has a number of free parameters. These parameters (namely frame length, frame shift and window function, see also sec:pre) must be chosen appropriately, e.g. based on physical characteristics of the speech signal. However, not only the signal should be considered, but also the algorithm which will be applied to the time-frequency representation; the choice of STFT parameters should result in a representation that is the most useful for the algorithm at hand.
| i |
80251596-686c-4805-9c2e-913f7506e4d8 | In this paper, we focus expressly on the choice of frame length for speech enhancement algorithms based on deep neural networks (DNNs), specifically targeting phase-aware approaches. The STFT representation is complex-valued and commonly separated into a magnitude spectrogram and a phase spectrogram. The relevance of the phase spectrogram to the speech enhancement task has been a topic of debate. Traditionally it has been considered as having little to no importance, due to empirical studies as well as theoretical results. However, more recent studies have shown that phase does carry speech-relevant information. Motivated by these findings, phase-aware speech processing has been enjoying a certain renaissance and several phase-aware methods have been proposed.
| i |
23add491-1d8f-40a2-a3fe-238badfcb70b | In recent years, DNNs have rapidly become the tool of choice in many fields, including audio and speech processing. Consequently, many recent phase-aware speech enhancement and source separation methods use a DNN to either directly estimate the phase spectrogram or estimate phase derivatives and reconstruct the phase from them. Other DNN-based approaches include directly operating on complex spectrograms without separating them into magnitude and phase, or simply taking phase into consideration for improved magnitude estimation.
| i |
84c0d868-cbab-41c8-8426-393540ff3053 | Some authors have taken a different path and altogether replaced the STFT-based representation with a learned encoder-decoder mechanism, which usually results in a real-valued representation. An interesting aspect of these learned encoder-decoder approaches is that they show very good performance when using very short frames of about 2 ms, even going as short as 0.125 ms. This stands in sharp contrast to STFT-based approaches, which generally use frame lengths of about 20–60 ms. Note that while learned encoder-decoder approaches were originally proposed for source separation, they also show good performance on the speech enhancement task.
| i |
6720abfb-1842-41b5-ba47-6d55cbbc4bc5 | Following the publication of the pioneering learned encoder-decoder Conv-TasNet model, several authors have proposed extensions and analyses. Among other results, it has been shown that the main contributing factors to the performance of Conv-TasNet are the use of short frames and a time-domain loss function, not the learned encoder-decoder. It has also been shown that when replacing the learned encoder with the STFT, the optimal set of input features depends on the chosen frame length: for longer frames (25–64 ms) the magnitude spectrum works well, while shorter frames (2–4 ms) show better performance only with the full complex spectrogram as input (in the form of concatenated real and imaginary parts). This observation is especially important, since it means that phase-aware speech processing (with either implicit or explicit phase estimation) should possibly employ different frame lengths than magnitude-only processing.
| i |
9200a5a9-5c8e-47a6-bc02-0f898bc0d2c7 | While the choice of loss function in phase-aware speech enhancement DNNs has been studied with respect to perceptual measures, we are not aware of such an analysis regarding the choice of frame length. Previous studies unrelated to DNNs have shown that the importance of phase to speech-related tasks varies with the choice of STFT parameters. In particular, it has been shown that departing from the typical frame lengths used in speech processing (corresponding to about 20–40 ms) and either using shorter frames or using a window shape that effectively shortens the frame can result in very good signal reconstruction from the phase spectrogram alone. A similar result has been observed for longer-than-typical frames. However, as long frames yield algorithmic latencies which may be prohibitive for many real-time speech processing devices, here we choose to focus on short frames for practical reasons. fig:kazama, reproduced from earlier work, shows how the contribution of phase and magnitude to the intelligibility of reconstructed clean speech changes with varying frame length. One observes that as the frames become shorter, phase becomes more important while magnitude gradually loses relevance. Note, however, that these findings are based on somewhat artificial signal reconstruction experiments on oracle data — whether or not they also apply to actual speech enhancement, source separation, etc. remains unclear.
<FIGURE> | i |
d9dd5b28-f877-4545-879d-65d4ab0d82af | Typical frame lengths in speech processing — around 32 ms — correspond to an interval that is short enough to be considered quasi-stationary but long enough to cover multiple fundamental periods of voiced speech (whose fundamental period lies between 2 and 12.5 ms); a quick numerical illustration follows. These considerations apply to the magnitude spectrogram but not necessarily to the phase spectrogram. Indeed, it seems that the irrelevance often attributed to the phase spectrogram is partly due to the choice of frame length in experiments.
| i |
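As a purely illustrative check of this range (the specific fundamental frequencies are not taken from the paper): for \(f_0 = 100\) Hz the fundamental period is
\[
T_0 = \frac{1}{f_0} = \frac{1}{100\ \mathrm{Hz}} = 10\ \mathrm{ms}, \qquad \frac{32\ \mathrm{ms}}{10\ \mathrm{ms}} = 3.2\ \text{fundamental periods},
\]
and even at the low end of the range (\(f_0 = 80\) Hz, \(T_0 = 12.5\) ms) a 32 ms frame still covers \(32 / 12.5 \approx 2.6\) periods.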
0e6edb64-f7ac-4b46-acbf-69b5e3143bdf | Based on those previous results and observations, the question we seek to answer
in this paper is: Which frame length is the most beneficial for a phase-aware
STFT-based speech enhancement DNN? Using an example DNN with explicit phase
estimation, we analyze and compare the performance under different frame
lengths. In order to gain further insight, we also attempt to characterize the
relative contribution of the magnitude and phase spectrograms at each frame
length and show that the aforementioned observations on the importance of phase
in short frames are also relevant and useful in the context of speech
enhancement.
<FIGURE><FIGURE><FIGURE> | i |
f1967998-7e3b-46f9-9ae4-7512d5d556fc | The main experiment we conduct is a comparison of the model's performance as a function of the STFT frame length \(M\) , in terms of perceptual and objective measures. Since the model we consider includes explicit estimation of phase and magnitude, we are also able to analyze and quantify the relative contributions of magnitude and phase estimation, again as a function of frame length. This analysis is conducted in a manner comparable with previously published perceptual experiments, although here we use estimates of the clean magnitude and phase, rather than the clean or noisy signals. For each frame length, we produce three estimates of the clean speech signal (a code sketch of this composition is given after the equations): the actual output of the network as well as two synthetic signals composed of the estimated magnitude and noisy phase, or vice versa:
\[
\begin{aligned}
\hat{s} &= \mathrm {iSTFT}\lbrace {\widehat{S}}\, \mathrm {e}^{\mathrm {j}\widehat{\phi }_{S}}\rbrace \,,\\
\widehat{s}_{\mathrm {mag}} &= \mathrm {iSTFT}\lbrace {\widehat{S}}\, \mathrm {e}^{\mathrm {j}\phi _{X}}\rbrace \,,\\
\widehat{s}_{\mathrm {ph}} &= \mathrm {iSTFT}\lbrace {X}\, \mathrm {e}^{\mathrm {j}\widehat{\phi }_{S}}\rbrace \,.
\end{aligned}
\]
| m |
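A hedged numpy/scipy sketch of how these three signals could be assembled from an estimated magnitude S_hat, estimated phase phi_hat and the noisy STFT X; the window and frame settings here are simplified placeholders and do not reproduce the paper's square-root Hann window with frame-wise zero padding.

```python
import numpy as np
from scipy.signal import stft, istft

def compose_estimates(S_hat, phi_hat, X, fs=16000, nperseg=512, noverlap=256):
    """Build s_hat (estimated mag + estimated phase), s_mag (estimated mag +
    noisy phase) and s_ph (noisy mag + estimated phase) via the inverse STFT."""
    mag_X, phi_X = np.abs(X), np.angle(X)
    specs = {
        "s_hat": S_hat * np.exp(1j * phi_hat),
        "s_mag": S_hat * np.exp(1j * phi_X),
        "s_ph":  mag_X * np.exp(1j * phi_hat),
    }
    return {name: istft(spec, fs=fs, window="hann", nperseg=nperseg,
                        noverlap=noverlap)[1]
            for name, spec in specs.items()}

# trivial demo: use the noisy spectrogram itself as the "estimate"
noisy = np.random.randn(16000)
_, _, X = stft(noisy, fs=16000, window="hann", nperseg=512, noverlap=256)
est = compose_estimates(np.abs(X), np.angle(X), X)
```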
3af7a919-659f-4ccc-9365-d5f08db49ee3 | To allow for a fair comparison, we must keep the number of DNN parameters constant. In the case of the network architecture we consider, the number of parameters depends on the number of frequency bins \(K\) . Hence, we zero-pad the frames prior to applying the DFT, resulting in a constant number of bins \(K= 257\) , which corresponds to the longest frames we consider (\(M_t= 32\) ms) at \(f_s= 16\) kHz (see the sketch below). In all experiments we use a square-root Hann window with an overlap ratio \(R= \frac{1}{2}\) . The same window is used for the forward and inverse STFT.
| m |
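A small scipy illustration of this zero-padding trick (assumed parameters only; note the paper uses a square-root Hann window, whereas the sketch uses a plain Hann for simplicity): the analysis length nperseg varies with the chosen frame length, while nfft is pinned to 512 so that K = 257 bins regardless of frame length.

```python
import numpy as np
from scipy.signal import stft

fs = 16000                                   # sampling rate in Hz
x = np.random.randn(fs)                      # 1 s of dummy audio

for frame_ms in (4, 8, 16, 32):              # candidate frame lengths
    nperseg = int(fs * frame_ms / 1000)      # samples per frame
    f, t, X = stft(x, fs=fs, window="hann", nperseg=nperseg,
                   noverlap=nperseg // 2,    # 50% overlap
                   nfft=512)                 # zero-pad every frame to 512 samples
    print(frame_ms, "ms ->", X.shape[0], "frequency bins")   # always 257
```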
5d5f1122-deab-481a-9858-644b7a4c6d97 | In this work we have presented a study on the effect of the frame length in STFT-based phase-aware speech enhancement with DNNs. Our results indicate a significant gain in performance when using relatively short frames (4 ms), as opposed to the longer frames which are typically used in STFT-based processing. Moreover, by explicitly estimating phase and magnitude we are able to show that this performance boost is related to the individual contributions of magnitude and phase estimation, which are highly dependent on the frame length. This reflects previous insights from experiments on oracle data, while showing for the first time that this phenomenon can be harnessed for improved speech enhancement results.
| d |
cc6fe7a3-837f-450e-94d5-5b4c5adaaba3 | Steganography and steganalysis are a pair of antagonistic technologies engaged in a cat-and-mouse game: steganography aims to covertly transmit secret messages by embedding them into inconspicuous carriers, while steganalysis tries to expose the presence of hidden messages in suspicious carriers. What connects these two opposing parties is the message carrier, on which the embedding mechanisms of steganography and the discriminative features of steganalysis rely heavily in practice. Commonly used carriers for steganography and steganalysis are digital media, including image, video and audio, among which digital video has gained more attention recently for the following reasons.
| i |
fe755f7d-97eb-4f74-a25b-f5418025d8e7 | First, digital videos are easily accessible carriers in today's digital media era. Digital video has become ubiquitous and pervasive due to the high popularity of video devices and the wide variety of online video applications, making video streams the main traffic in IP networks. According to the Cisco Visual Networking Index [1]}, global IP video traffic will account for 82% of all IP traffic by 2022, up from 75% in 2017. Second, digital video has rich spatial-temporal information, which provides more embedding space than other types of carriers [2]}. Third, video sharing and broadcast through social networking helps conceal the steganographic behavior. It is reported that 400 hours of video are uploaded to YouTube every minute worldwide [3]}, acting as a natural camouflage for transmitting videos with hidden messages. These merits of the video carrier promote the prosperity of video steganography, and large numbers of video steganographic methods and tools have emerged continuously [4]}, posing a severe challenge to video steganalysis.
| i |
92f4979d-0962-4ef2-a8b1-8704621f5275 | The video steganography can be grouped into various categories according to the used coding entities at different compression stages, such as spatial pixels [1]}, [2]}, intra prediction modes [3]}, [4]}, inter partition modes [5]}, [6]}, [7]}, motion vectors (MVs) [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, DCT coefficients [16]}, [17]}, quantization parameters (QPs) [18]}, [19]}, entropy coding syntax [20]}, [21]} and their combinations [22]}. Among these categories, the MV based steganography has attracted great interest due to its large embedding capacity, high security and lossless message extraction [23]}.
| i |
9016634d-844a-4688-a7e9-e998b8d102ba | MV based steganography embeds messages into MVs by modifying the values of one and/or two components of the MVs. This embedding is performed during the inter coding of video compression, accompanied by prediction error (PE) adjustment to avoid video quality degradation. The development of MV based steganography can be divided into three stages. The first stage consists of quality-preserving methods, which modify candidate MVs while maintaining the video quality, e.g. by selecting large MV magnitudes [1]}, appropriate MV phase angles [2]} or small PEs [3]}. The second stage consists of statistics-preserving methods, whose aim is to modify the MVs with minimal impact on video statistics. Typical methods include perturbing motion estimation with suboptimal MVs [4]} and altering MVs with only slight changes to MV distributions and PEs [5]}. To achieve small statistical deviation, coding schemes such as wet paper codes [6]} and syndrome-trellis codes [7]} are also integrated into the embedding. In the third stage, MV based steganography can be characterized as local optimality-preserving methods from the coding point of view. These methods first assess how well the MV optimality is preserved when MVs are modified, design and assign an embedding cost for each MV accordingly, and finally embed the messages using syndrome-trellis codes. Popular methods of this type measure MV optimality using the coding costs of MVs, including the sum of absolute differences (SAD) [8]} and the Lagrangian cost [9]}. Recently, the combination of video statistics and local optimality has also been employed for MV based steganography [10]}.
| i |
65507ebb-9628-4704-9394-faadf3fe610f | MV based steganography has gone through a process which moves the focus from coding-independent video entities to the intrinsic properties of video coding. Similarly, video steganalysis in the MV domain has followed the same trend. Steganalytic features play a crucial role in steganalysis. Early video steganalytic features were designed by drawing on ideas from image steganalysis, such as characteristic function moments of MV residue distributions [1]}, [2]}, first-order or higher-order co-occurrence matrices of MV residues [3]}, [4]}, and calibration-based features [5]}, [6]}, [7]}. These steganalytic features perform well against the first-stage MV based steganography, but are often ineffective against the later stages, which pay more attention to steganographic security. Besides, they also have some constraints in feature extraction, such as requiring continuous and non-subdivided inter macroblocks, as discussed in [8]}, [9]}. To better reveal the embedding traces in MVs, the coding optimality associated with MVs has been explored to construct steganalytic features. The representative method is AoSO [10]}, which assumes that the original MV is locally optimal and uses SAD as a criterion to identify whether or not an MV has been changed. A similar methodology was also presented in [11]}, where SAD-based local optimality together with recompression calibration is used to extract features. However, these local optimality based features become ineffective when this ideology is adopted by an adversary [12]}, [13]}, [14]}. As a fightback against the local optimality-preserving methods, subsequent video steganalysis [15]} suggests using a more accurate coding criterion to check the local optimality of MVs. In [15]}, the rate-distortion cost instead of SAD is utilized to form steganalytic features, achieving significant detection performance.
| i |
5ef3c281-c3eb-4f17-b01a-f1bb336e3ba9 | The local optimality based features have become a prevailing approach for video steganalysis in MV domain, owing to the fact that any locally optimal MVs undergoing changes will be shift to locally non-optimal ones in a high probability, regardless of embedding mechanism. Therefore, the local optimality based features rely heavily on the estimation of local optimality for MVs. However, the accuracy of estimation for local optimality in existing works is still far from the requirements. The SAD based local optimality [1]}, [2]} only focuses on the distortion cost, but neglects the bit-rate cost associated with MVsDistortion cost refers to the pixel difference. Bit-rate cost is the bit length required to encode the motion vector difference. See Section II for details.. Although the rate-distortion based optimality [3]} considers both the above costs simultaneously, the bit-rate cost is estimated on an unreasonable assumption. For example, when calculating the motion vector difference between the MV and its predicted motion vector (PMV), the [3]} assumes that the PMV is unchanged. If a changed PMV is used to estimate the bit-rate cost, inaccurate estimation result may occurs (see the detail examples in Section III-B).
| i |
In this paper, we propose to construct steganalytic features for video steganalysis using generalized local optimality, adopting the rate-distortion cost to measure local optimality. To identify the steganographic changes of MVs more accurately and stably, we generalize the normal local optimality in two aspects. First, the bit-rate cost (and hence the rate-distortion cost) is determined jointly by the MV and the PMV. The PMV may or may not change after embedding, and this uncertainty affects the estimation of local optimality for the corresponding MV. We therefore suggest checking the local optimality of MVs under various possible changes of the PMV, generalizing the local optimality from a static estimation to a dynamic one. This generalization aims to reduce the interference of PMV changes on rate-distortion costs. Second, the PMV is a special case of MV, so embedding changes are also reflected in the PMV, and a change of the PMV can itself serve as an indicator of steganographic embedding. We therefore suggest checking the local optimality not only of the MVs but also of the PMVs, generalizing the local optimality from the MV domain to the PMV domain. This generalization aims to increase the diversity of steganalytic features. Finally, we design two types of steganalytic features based on these two generalizations of local optimality, and propose symmetrization rules to reduce the feature dimension. We perform extensive experiments to evaluate the detection accuracy, robustness and complexity of our method, and the results demonstrate its advantages over existing steganalytic methods.
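The following sketch captures the two generalizations in simplified form: an MV is declared locally optimal if it has the lowest rate-distortion cost in a plus/minus one neighborhood, and the check is repeated under a set of hypothesized PMV changes (the same check could be applied to the PMV itself). The neighborhood, the candidate PMV changes and `sad_of` (a callable returning the distortion of a candidate MV) are illustrative assumptions, and `rd_cost` is the helper sketched earlier; the actual feature construction in Section IV is more elaborate.

NEIGHBORHOOD = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
PMV_CHANGES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # hypothesized PMV perturbations

def is_locally_optimal(mv, pmv, sad_of, lam=4.0) -> bool:
    """True if `mv` beats every +/-1 neighbor under the rate-distortion cost."""
    own = rd_cost(mv, pmv, sad_of(mv), lam)
    return all(own <= rd_cost((mv[0] + dx, mv[1] + dy), pmv,
                              sad_of((mv[0] + dx, mv[1] + dy)), lam)
               for dx, dy in NEIGHBORHOOD)

def optimality_profile(mv, pmv, sad_of, lam=4.0):
    """Binary vector of local-optimality checks under each hypothesized PMV change.

    Histograms of such profiles over all inter blocks can serve as a
    simplified stand-in for the steganalytic features described later.
    """
    return [int(is_locally_optimal(mv, (pmv[0] + dx, pmv[1] + dy), sad_of, lam))
            for dx, dy in PMV_CHANGES]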
| i |
The main contributions of this paper are summarized as follows:
We discover the underlying issue in normal local optimality, and generalize the concept of local optimality in two aspects: estimating the local optimality of MVs with dynamic PMVs rather than fixed PMVs, and extending the local optimality from the MV domain to the PMV domain.
We construct two types of steganalytic features, including the generalized local optimality features in MV domain and those in PMV domain, both of which eliminate the interactions among neighboring MVs, and thus improve the detection accuracy. To our best knowledge, this is the first study to explore the PMVs for video steganalysis.
We propose feature symmetrization rules based on the distributions of generalized local optimality, and the symmetrization rules significantly reduce the feature dimensions without sacrificing detection accuracy.
The proposed generalized local optimality increases the diversity of steganalytic features while not noticeably increasing the computational complexity, indicating its efficiency for practical applications.
| i |
The organization of this paper is as follows. Section II briefly introduces the video inter coding related to our method. Section III elaborates on the two types of generalized local optimality. The subsequent Section IV describes the details of feature construction. In Section V, we present and discuss the experimental results. Finally, Section VI concludes the paper.
| i |
In this section, we perform extensive experiments to evaluate the effectiveness of the proposed features. We first introduce the experimental settings, and then analyze the feature subsets and validate the feature symmetrization rules. Next, we compare our features with the state-of-the-art methods, and also evaluate the feature robustness under different conditions. Finally, we discuss the computational complexity of the features.
| m |
The preservation of local optimality during video coding and the destruction of local optimality caused by steganographic embedding are a pair of contradictions, which can be exploited by steganalysis to discover embedding traces. In this paper, we generalize the conventional local optimality in two aspects. Specifically, we first generalize the local optimality from a static estimation using fixed predicted motion vectors (PMVs) to a dynamic estimation using variable PMVs, and then generalize the local optimality from the motion vector (MV) domain to the PMV domain. These two generalizations improve the accuracy of estimation and increase the diversity of local optimality, based on which we design two types of steganalytic features and propose feature symmetrization rules to reduce the feature dimension.
| d |
Our proposed features have the same order of dimensionality as previous works, but achieve higher detection accuracy against various MV based steganography. The proposed features are also robust in many real-world settings, such as different video sources, video prediction methods, video codecs and video resolutions. Finally, the proposed features do not increase the computational complexity of feature extraction, making them suitable for practical applications.
| d |
Developing intelligent agents that follow human instructions is a long-term, formidable challenge in AI [1]}.
A recent focus addressing this problem space is Vision-and-Language Navigation (VLN) [2]}, [3]}. Navigation is an ideal test bed for studying instruction-following, since the task can be simulated photo-realistically at scale and evaluation is straightforward. However, as in all instruction-following tasks, datasets that capture the linguistic diversity and idiosyncrasies of real human instructors are small and expensive to collect.
| i |
Shortages of human-annotated training data for other vision-and-language tasks have been partially addressed by pretraining transformers on up to billions of image-text pairs. This has underpinned dramatic improvements in image captioning [1]}, [2]}, visual question answering [3]}, [4]}, phrase grounding [5]}, [6]}, video question answering [7]} and text-to-image synthesis [8]}, [9]}. However, these are all static image or video tasks, whereas VLN agents interact with 3D environments. In VLN, pretraining on large image-text and text-only datasets has been thoroughly explored [10]}, [11]}, [12]}, [13]}, but improvements are more limited. [14]} argue that progress in VLN has plateaued while still leaving a large gap between machine and human performance.
We hypothesize that static image-text and text-only datasets – despite their size – lack the spatially grounded and action-oriented language needed for effective VLN pretraining.
Consider instructions from the Room-across-Room (RxR) dataset [15]}, which illustrate that successful wayfinding requires an understanding of allocentric and egocentric spatial expressions (near a grey console table behind you), verbs (climb the stairs), imperatives and negations (do not enter the room in front) and temporal conditions (walk until you see an entrance on your left). Such expressions are rarely found in image-text datasets. Though similar expressions are found in text-only corpora, their meaning as it relates to the physical world is hard to infer from text alone (without sensorimotor context) [16]}.
| i |
To address this problem, we investigate large-scale augmentation with synthetic in-domain data, i.e., model-generated navigation instructions for trajectories in realistic 3D environments.
For this, we construct a large dataset using Marky [1]}, which generates VLN instructions that approach the quality of human instructors.
[1]} released 1M Marky instruction-trajectory pairs situated in 61 Matterport3D [3]} environments.
To increase the diversity of the environments (and thus the scenes and objects available in them),
we automatically annotate an additional 491 environments from the Gibson dataset [4]}.
Gibson environments have been underutilized in prior VLN work due to the lack of navigation graphs indicating navigable trajectories through their densely-sampled 360° panoramas. We train a model that classifies navigable directions for Matterport3D and use it to construct the missing navigation graphs. We sample 3.2M trajectories from these graphs and annotate them with Marky. To further increase the variability of trajectories, we synthesize image observations from novel viewpoints using an image-to-image GAN [5]}. The resulting dataset is two orders of magnitude larger than existing human-annotated ones, and contains a wider variety of scenes and viewpoints.
| i |
With orders of magnitude more training examples and environments, we explore VLN agent performance with imitation learning (IL), i.e., behavioral cloning and DAGGER [1]}. IL can take advantage of high-throughput transformer frameworks such as T5 [2]} and thus efficiently train on 4.2M instructions (accumulating over 700M steps of experience). This is a major departure from prior VLN work in low-data settings, e.g. [3]} report that pure IL underperforms by 8.5% success rate compared to agents trained with both IL and online reinforcement learning (RL) algorithms such as A3C [4]}. However, IL outperforms RL in related tasks with sufficient training data [5]}. Online RL also requires interacting with the environment at each step; this precludes efficient data prefetching and parallel preprocessing and thus imposes unavoidable overhead compared to IL.
Empirically, we confirm that training existing models such as HAMT [3]} on 4.2M instructions is infeasible without ground-up re-engineering, though we do find incorporating 10K additional synthetic instructions into HAMT training modestly improves performance.
Training with IL aligns with the trend towards large-scale multi-task vision-and-language models trained with supervised learning; these have unified tasks as diverse as visual question answering, image captioning, object detection, image classification, OCR and text reasoning [7]} – and could include VLN in future.
| i |
Experimentally, in detailed ablation studies we show that adding Gibson environments, synthesizing additional image observations from novel viewpoints, increasing the capacity of the transformer, and finetuning with DAGGER all improve agent performance. On the challenging RxR dataset – which contains multilingual instructions with a median trajectory length of 15m – our best agent using only imitation learning outperforms all prior RL agents. Evaluating on novel instruction-trajectories in seen environments (Val-Seen), we improve over the state-of-the-art by 8%, reaching 79.1 NDTW. In new, unseen environments (Test), we improve by 2%, achieving 66.8 NDTW. We also show that self-training with synthetic instructions in new environments (still without human annotations) improves performance by an additional 2% to 68.6 NDTW. Overall, our RxR results point to a new path to improving instruction-following agents, emphasizing large-scale imitation learning and the development of synthetic instruction generation capabilities. Perhaps surprisingly, on the English-only R2R dataset [1]}, our imitation-learning agent achieves strong performance but not state-of-the-art. Marky was trained on RxR, so we attribute this to domain differences between R2R and RxR, underscoring the domain dependence of synthetic instructions.
| i |
In summary, our main contribution is substantial gains over the state-of-the-art on the challenging RxR dataset, demonstrating that advances in instruction generation flow directly into instruction-following performance.
Related Work
Vision-and-Language Navigation Agents that attempt to follow instructions by navigating to a prescribed location were initially studied in simple environments requiring limited or no perception, using instructions that were often procedurally generated [1]}, [2]}, [3]}, [4]}, [5]}. More recent work has included settings based on photorealistic 3D environments and natural language instructions [6]}, [7]}, using environments such as Matterport3D [8]} and Streetview [9]}. This instantiation of the problem, known as Vision-and-Language Navigation (VLN), raises the prospect of sim-to-real transfer to physical robots [10]}, and encouraged further datasets exploring dialog [11]}, [12]}, search for objects [13]} and multilingual instructions [14]}.
Pretraining and Transfer Learning The use of realistic imagery and language in VLN, combined with the cost of collecting human instruction annotations, leads to a natural focus on pretraining and transfer learning to improve performance. [15]} formulate VLN as an instruction-trajectory alignment problem, and initialize a transformer model using pretrained BERT weights [16]}, then perform additional pretraining on image-text pairs from Conceptual Captions [17]}. [18]} also use a BERT model, although more recent approaches have favored text encoders learned from scratch by pretraining with Masked Language Modeling (MLM) and related objectives on instruction-trajectory data [19]}, [20]}. In terms of image representations, early work [21]} used ResNet features [22]} pretrained on ImageNet [23]}, although pretrained object detectors have also been explored [24]}, [15]}, [26]} (typically a Faster-RCNN [27]}). More recently, [20]} use a vision transformer (ViT) [29]} and current state-of-the-art agents [30]}, [31]} use CLIP [32]},
obtaining improvements over similarly sized encoders pretrained on ImageNet. However, although pretraining and transfer learning from large text and image-text datasets has been thoroughly explored, a significant gap to human performance remains.
Data Augmentation [21]} were the first to show that performance following human instructions could be improved by augmenting training with synthetic (model-generated) instructions. However, in human wayfinding evaluations, the instructions used [21]}, [35]} were shown to be surprisingly weak, being poorly grounded and mostly unfollowable by people [36]}. To reduce hallucinations and improve instruction quality, [37]} proposed Marky , a stronger instruction generator trained with text-aligned visual landmark correspondences. We are the first to investigate data augmentation with Marky , which we compare to [21]} (Marky is superior).
On the other hand, [35]} and [31]} perform data augmentation by modifying existing environments before generating new instructions (an approach which may be complementary to ours).
[41]} train on a synthetic dataset of path-instruction pairs generated using online rental listings. [42]} train a generative model that infills and outpaints spatially perturbed panos of indoor environments (see Section ).
Approach
Problem setup The agent is instantiated in an environment and must follow a natural language instruction \({\cal W}\) . At time step \(t\) , the agent receives observation \(o_t\) and chooses action \(a_t\) that transitions it from state \(s_t\) to new state \(s_{t+1}\) . Following prior work, each observation is a photorealistic panoramic image (hereafter, pano) encoded as 36 image feature vectors \(o_t{=}\lbrace I^o_{t,1}, I^o_{t,2}, ... , I^o_{t,K}\rbrace \) . These features are extracted from perspective projections at 36 view angles (12 headings \(\times \) 3 elevations at 30° intervals).
The agent moves by choosing an action \(a_t\) from a set of candidates \({\cal A}_t{=}\lbrace I^a_{t,1}, I^a_{t,2}, ... , I^a_{t,J}\rbrace \) given by the environment. Action candidates are determined by the adjacent panos in a predefined navigation graph; each is represented by the image feature vector extracted from the perspective projection looking towards the adjacent pano.
Selecting an action teleports the agent a few meters to the new pano. Alternatively, the agent can choose `STOP' to end the episode. On average agents have 5 actions available at each step, including `STOP'. See [6]} for more details.
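A minimal sketch of the discrete interface just described is given below. The class and method names are illustrative assumptions rather than the API of any particular simulator, and feature dimensions are left generic.

from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Observation:
    pano_features: np.ndarray       # (36, D): 12 headings x 3 elevations of the current pano
    candidate_features: np.ndarray  # (J, D): one feature per navigable neighbor
    candidate_ids: List[str]        # pano ids of the navigable neighbors

STOP = -1  # selecting this pseudo-action ends the episode

def take_action(env, obs: Observation, action_index: int):
    """Teleport to the chosen neighboring pano, or stop the episode."""
    if action_index == STOP:
        return env.end_episode()
    return env.move_to(obs.candidate_ids[action_index])  # returns the next Observation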
<FIGURE>Agent architecture Our imitation-learning agent is a transformer encoder which predicts the next action \(a_{t+1}\) by jointly combining all four input modalities: the instruction text \({\cal W}\) , the history of observations \(o_{1:t-1}\) and actions \(a_{1:t-1}\) , the current observation \(o_t\) , and the action candidates \({\cal A}_t\) (see Figure REF ). At each step, all input features are concatenated into a single multimodal sequence with no attention masking, allowing every input to attend to every other input. For biasing interactions between different input modalities we include learned attention biases for each pair of input types, e.g. the instruction and the observation/action history. Like HAMT [20]}, our approach is not autoregressive: every forward pass predicts a single action using the full history. Given our emphasis on data augmentation, we name our agent MARVAL for Maximum Augmentation Regime for Vision And Language navigation. Our implementation is based on mT5 [45]}, a multilingual variant of the T5 transformer architecture [46]}.
Image features As noted above, pano observations \(o_t\) and action candidates \({\cal A}_t\) are represented with sets of image features. We use precomputed, fixed 640-d features from MURAL-large [47]}, an EfficientNet-B7 [48]} backbone trained on 1.8B multilingual image-text pairs and 6B translation pairs. MURAL's image encoder's representational power is similar to CLIP's [32]}, which is used in previous work [30]}, [20]} and is trained on 400M English image-text pairs with a VIT [29]} backbone.
To provide orientation information, each feature is combined with two learned embeddings: an absolute direction embedding capturing the feature's orientation in the environment's fixed coordinate system, and a relative direction embedding based on orientation relative to the agent's heading. Note that the agent's initial heading at \(t{=}0\) is given by the dataset, and is typically random. We also augment the set of action candidates \({\cal A}_t\) with a `STOP' action. This is convenient for modeling action classification over the candidates (refer `Action classification', below) and is represented by a zero image vector with unique direction embeddings. We use 37 absolute and relative direction embeddings, and snap features to the closest.
Instruction encoding The instruction \({\cal W}\) is encoded as a sequence of WordPiece [53]} tokens using the mT5 vocabulary which supports up to 101 languages via a SentencePiece [54]} model trained on mC4. Following T5, position information within the instruction is derived from relative position biases applied to the transformer's attention logits.
History encoding The history of agent observations \(o_{1:t-1}\) and actions \(a_{1:t-1}\) is computationally expensive to process, since each pano observation \(o_t\) is comprised of 36 feature vectors. Similar to [20]} we embed the 36 features from each previous pano observation into a single vector, based on the mean-pooled output of a separate transformer applied to the image features and their direction embeddings. This is added to the action candidate selected at each previous step. Position information for the state and action history is provided by relative position biases.
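The sketch below shows one way the four modalities could be packed into a single encoder sequence with learned per-pair attention biases, as described above. Array shapes, the `pair_bias` table and the use of NumPy are illustrative assumptions; the real model is an mT5 encoder rather than this toy.

import numpy as np

def build_encoder_inputs(text_emb, hist_emb, obs_emb, cand_emb, pair_bias):
    """Concatenate instruction, history, current observation and candidates.

    text_emb: (T, D) instruction token embeddings
    hist_emb: (t-1, D) one pooled vector per previous step (observation + action)
    obs_emb:  (36, D) current pano features plus direction embeddings
    cand_emb: (J+1, D) action candidates, including STOP
    pair_bias: dict mapping (type_i, type_j) -> learned scalar bias
    """
    parts = [("text", text_emb), ("hist", hist_emb), ("obs", obs_emb), ("cand", cand_emb)]
    seq = np.concatenate([p for _, p in parts], axis=0)
    types = [name for name, p in parts for _ in range(len(p))]
    n = len(types)
    bias = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(n):
            bias[i, j] = pair_bias[(types[i], types[j])]  # added to the attention logits
    return seq, bias  # fed to the transformer encoder with no attention masking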
Pretraining We train the agent in two stages. We first pretrain on a large dataset of instruction-trajectory pairs, including both model-generated instructions and trajectories containing synthesized image observations from novel viewpoints (refer Section ). We then finetune on a single dataset of human-annotated instruction-trajectory pairs to maximize performance on that dataset. Unlike [15]} and [57]}, our transformer weights are initialized from scratch – we do not use any image-caption datasets or text corpora to train the transformer. Since the model is not autoregressive, each training trajectory is broken down into \(T\) training examples, where \(T\) is the number of time steps in the trajectory. Each training example requires the model to predict the next action for a single step in a trajectory, given the full instruction \({\cal W}\) , the action history \(a_{1:t-1}\) , the observation history \(o_{1:t-1}\) , the current observation \(o_t\) and the set of action candidates \({\cal A}_t\) . To increase the amount of supervision, during pretraining we combine four tasks:
Masked language modeling (MLM) [16]}: 15% of tokens in the instruction are masked, with all consecutive spans of masked tokens replaced by a single MASK token. Similar to [20]}, the model predicts the masked tokens using the surrounding text and visual clues from the observation/action history and the current observation.
Progress prediction: A small MLP is added to the output representation of the CLS token (a special symbol capturing the fused representation of the entire sequence) to predict the proportion of the trajectory that is completed (based on 20 discretized classes). Progress monitoring has been shown to improve instruction grounding [60]}.
Constrained action prediction: A classification task to predict the correct action from the constrained set of available action candidates \({\cal A}_t\) . Since action candidates are inputs to the encoder (refer Figure REF ), we compute the logit for each action as a learned projection of its output representation and normalize with softmax (a simplification of [20]}).
Unconstrained action prediction: A second small MLP is added to the CLS output to directly predict the next action from all 36 discretized agent-relative directions or `STOP'. Hence, these predictions are not constrained to \({\cal A}_t\) , similar to the approach in [19]}. The constrained and unconstrained action prediction tasks are highly related but complementary; in early experiments we found that equally weighting the logits from both improves accuracy by 1-2%, so we adopt this approach in all experiments. A sketch of this logit combination is given below.
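The following sketch illustrates the combination mentioned in the last item. Here `proj` (the learned projection over candidate outputs) and `mlp` (the CLS-based head over 36 directions plus STOP) are stand-in names for learned parameters, and equal weighting simply adds the two sets of logits.

import numpy as np

def combined_action_logits(cand_outputs, cls_output, cand_directions, proj, mlp):
    """
    cand_outputs:    (J+1, D) encoder outputs at the action-candidate positions
    cls_output:      (D,)     encoder output at the CLS position
    cand_directions: (J+1,)   direction bin (0..35) of each candidate, 36 for STOP
    """
    constrained = cand_outputs @ proj                   # (J+1,) one logit per candidate
    unconstrained_all = mlp(cls_output)                 # (37,) logits over 36 bins + STOP
    unconstrained = unconstrained_all[cand_directions]  # gather the bins the candidates fall into
    return constrained + unconstrained                  # equal weighting of the two heads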
Finetuning (behavioral cloning) During finetuning we adapt our pretrained agent for best performance on a smaller human-annotated dataset. We update only the WordPiece embeddings in the agent and keep all other transformer weights frozen, as this makes finetuning more stable and less prone to overfitting (especially on the smaller R2R dataset). We consider two finetuning strategies. The first is behavioral cloning. In this setting, we simply drop the instruction text masking and the MLM objective, retaining the progress prediction and constrained and unconstrained action prediction losses used in pretraining. We then finetune the agent to predict the next action at each step along ground-truth trajectories, treating imitation learning as supervised learning.
DAGGER training The main weakness of behavioral cloning is that the state distribution seen in training differs from the state distribution induced by the agent during inference [63]}. Previous works [20]}, [35]} report substantial improvements by combining behavioral cloning with online reinforcement learning algorithms such as A3C [66]}. We use DAGGER [63]} to help train the agent to better recover from errors, since it is simple to implement and requires no environment interaction during training. In DAGGER , during each iteration of finetuning the dataset is augmented with trajectories of states visited by the current agent policy and actions given by an expert. Figure REF explains the calculation of expert actions. We find that most of the gains are captured in a single DAGGER iteration.
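The following sketch outlines one DAGGER round in this setting: the learner's own policy drives each rollout, while every visited state is labeled with the expert action (the neighbor that most reduces the remaining distance to the goal, cf. Figure REF). All names are illustrative, and the retraining step on the aggregated data is omitted.

def dagger_round(policy, episodes, dataset, expert_action, max_steps=16):
    """Collect (state, expert action) pairs along trajectories induced by `policy`."""
    for env, instruction in episodes:
        obs = env.reset()
        for _ in range(max_steps):
            a_star = expert_action(env.current_node(), env.goal_node())  # oracle label
            dataset.append((instruction, env.history(), obs, a_star))
            a = policy.act(instruction, env.history(), obs)  # follow the learner, not the expert
            obs, done = env.step(a)
            if done:
                break
    return dataset  # afterwards, finetune the policy on the aggregated dataset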
<FIGURE>Pre-Exploration While most of the focus in VLN is on instruction-following in new, unseen environments, in reality environments persist over time providing opportunities for pre-exploration. Similar to [35]}, [69]} we consider a pre-exploration setting in which the agent may explore unseen environments with self-supervision before evaluation. Our synthetic-instruction approach is readily applicable to this scenario; we simply sample paths from the Val-Unseen or Test environments, annotate them with Marky instructions, and include them in the training data.
Datasets and Augmentation
The datasets used for training and evaluation are described below and summarized in Table REF .
Room-to-Room (R2R) [6]} consists of 22K human-annotated English language navigation instructions, each describing a trajectory that traverses multiple rooms in Matterport3D [8]}. This was the first dataset to use a photo-realistic environment for the instruction guided navigation task. R2R trajectories average 10m, and the trajectories are always the shortest path between the start point and the goal.
Room-across-Room (RxR) [14]} introduce RxR, a larger human-annotated dataset containing 126K instructions in English, Hindi and Telugu. To mitigate goal seeking behaviour and to ensure that agents are faithful to the instruction, RxR includes Matterport3D trajectories that are diverse in terms of length (average is 15m) and the landmarks that are referred to, and it also includes trajectories that do not go directly to the goal.
Speaker-Matterport (S-MP) [21]} train an LSTM [74]} Speaker model on ground-truth navigation instructions from R2R, and use this model to generate synthetic instructions for a set of newly sampled trajectories in Matterport3D environments, generating 178k new instruction-path pairs.
Marky-Matterport (M-MP) Marky is a landmark-aware multilingual instruction generator trained on RxR. [37]} use it to generate 1M instructions in English, Hindi and Telugu for 330K sampled Matterport3D trajectories.
In human wayfinding evaluations in unseen environments Marky achieves close to human performance on shortest-path trajectories (e.g., R2R's paths). On the more challenging RxR paths a gap remains: human wayfinders obtain a 62% success rate with Marky vs. 78% with human instructions.
Marky-Gibson (M-Gib) The Gibson [76]} dataset consists of 572 indoor 3D environments. Despite its large size compared to Matterport3D, prior work has underutilized Gibson data for training VLN agents. This is primarily due to a lack of navigation trajectories and instruction annotations. To alleviate this, and unlock the Gibson dataset for VLN training, we propose an automated process to label these environments with high quality navigation graphs, described in detail below. We then sample 1.3M trajectories and annotate them with 3.2M Marky instructions in English, Hindi and Telugu.
<TABLE>Gibson navigation graphs In the standard VLN setting, agents are trained and evaluated using panos as observations (refer Section ). Movement in the environment requires a graph with panos as nodes and edges indicating navigability. Navigation graphs for Matterport3D were generated by [6]}, using a semi-automated process combined with human visual inspection. However, there are no navigation graphs for Gibson environments and the size of the dataset precludes human inspection. We therefore train a model on panos and navigation graphs from the Matterport3D train split to classify whether a patch of pixels in a pano constitutes a navigable direction. The model is based on RedNet [78]}, an RGB-D encoder-decoder first proposed for image segmentation, using a ResNet-50 [22]} backbone. The output space is discretized into \(8 \times 16 \times 5\) pitch, heading and distance buckets. During training each bucket is assigned a positive value if the corresponding location corresponds to a navigable node, and 0 otherwise.
To compute Gibson navigation graphs, we combine model edge predictions with obstacle information from the dataset's 3D meshes. We add an edge between pano nodes \(i\) and \(j\) if the following boolean expression evaluates to true:
\(e(i, j)=(\lambda _d \frac{g_{i,j}}{s_{i,j}} - \lambda _p p_{i,j} \le 1) \wedge (s_{i,j} \le 3.5) \wedge (|z_i - z_j| \le 3)\)
where \(g_{i,j}\) is the geodesic distance (accounting for obstacles) between nodes \(i\) and \(j\) calculated using the Habitat Simulator [80]}, \(s_{i,j}\) is the straight-line Euclidean distance between nodes \(i\) and \(j\) , \(p_{i,j}\) is the model probability of an edge connecting nodes \(i\) and \(j\) , \(z_i\) is the vertical coordinate of pano \(i\) , and \(\lambda _d\) and \(\lambda _p\) are weighting parameters. The first term captures model predictions and encourages edges between panos that have few intervening obstacles. The second term ensures that nodes are within 3.5m, and the third term ensures that nodes are within 3m in the vertical axis. Finally, to ensure that the navigation graph for each environment is fully connected, we compute the minimum spanning tree (MST) [81]} of the graph with the edge weights given by the first term in Equation REF , and apply a logical `OR' operation over \(e(i,j)\) and the MST.
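A compact sketch of this graph construction is given below, combining the boolean edge rule of Equation REF with the MST-based connectivity fix-up. Here `geo`, `euc`, `p_edge` and `z` are assumed to be precomputed lookups over node pairs and nodes, node ids are assumed to be comparable integers, and networkx is used only for the minimum spanning tree.

import networkx as nx

def edge_ok(i, j, geo, euc, p_edge, z, lam_d, lam_p):
    """Boolean edge predicate from Equation REF."""
    score = lam_d * geo[i, j] / euc[i, j] - lam_p * p_edge[i, j]
    return score <= 1.0 and euc[i, j] <= 3.5 and abs(z[i] - z[j]) <= 3.0

def build_nav_graph(nodes, geo, euc, p_edge, z, lam_d, lam_p):
    graph = nx.Graph()
    graph.add_nodes_from(nodes)
    weighted = nx.Graph()  # auxiliary graph whose weights are the first term only
    for i in nodes:
        for j in nodes:
            if i < j:
                weighted.add_edge(i, j, weight=lam_d * geo[i, j] / euc[i, j] - lam_p * p_edge[i, j])
                if edge_ok(i, j, geo, euc, p_edge, z, lam_d, lam_p):
                    graph.add_edge(i, j)
    # the logical OR with the minimum spanning tree keeps every environment connected
    graph.add_edges_from(nx.minimum_spanning_tree(weighted).edges())
    return graph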
To set the weighting parameters \(\lambda _d\) and \(\lambda _p\) , we perform grid search to maximize the \(F_1\) score when predicting edges in Matterport3D val environments. Our approach achieves
an \(F_1\) score of 0.70, precision of 0.695, and recall of 0.713. The average edge length in the generated Gibson graphs is 3.02m (median of 2.06m), and the average node degree is 4.15 (median of 4).
Trajectory sampling and instruction generation Using the generated navigation graphs, we sample trajectories from 491 Gibson train and val environments (we do not use test environments).
Unlike Matterport3D, Gibson lacks room annotations, which precludes us from using the two-step sampling approach from RxR. Instead, we use a simpler approach: we randomly sample 3 panos, and use a TSP solver to find the shortest path that visits all 3 panos. Trajectories longer than 40m or 16 steps are discarded, and no more than 3K paths are sampled per environment. This procedure generates 1.06M paths, with an average of 7.1 steps and length of 19.3m. Using Marky we annotate each trajectory with English, Hindi and Telugu instructions to create the Marky-Gibson dataset.
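A sketch of this sampling procedure follows. With only three waypoints the TSP amounts to trying all orderings; `shortest_path` and `path_length_m` are assumed helpers over the navigation graph (geodesic node paths and metric path length in meters, respectively).

import itertools
import random

def sample_trajectory(graph, panos, shortest_path, path_length_m,
                      max_len_m=40.0, max_steps=16):
    """Sample 3 panos and return the shortest path visiting all of them, or None."""
    waypoints = random.sample(panos, 3)
    best = None
    for a, b, c in itertools.permutations(waypoints):
        path = shortest_path(graph, a, b)[:-1] + shortest_path(graph, b, c)
        if best is None or path_length_m(graph, path) < path_length_m(graph, best):
            best = path
    if path_length_m(graph, best) > max_len_m or len(best) - 1 > max_steps:
        return None  # discard and resample
    return best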
Synthesizing image observations with SE3DS One weakness of training VLN agents on pano images is that training trajectories are constrained to the locations of the captured images. VLN agents tend to overfit to these trajectories [82]}, contributing to a performance drop in unseen environments. [83]}, [42]} showed that a strong generative model is capable of successfully rendering high resolution panos from novel viewpoints, and that training VLN agents with spatially-perturbed panos could improve the success rate of the agent on R2R Val-Unseen by 1.5%. To assess if this approach is complimentary to instruction augmentation, we use the proposed SE3DS (Simple and Effective 3D Synthesis) model
to augment panoramas from the Matterport environments. Following [42]}, we create 200 variations of each environment which are randomly sampled during training. In each environment variation, with 50% probability a pano will be spatially-perturbed by up to 1.5m and re-rendered at the new location using SE3DS.
Experiments
<TABLE>Pretraining settings In Table REF we explore pretraining using varying amounts and types of augmented data. During pretraining, we monitor one-step action prediction accuracy on ground-truth trajectories using held-out instructions from RxR and R2R Val-Unseen. Each setting is trained until convergence, requiring more iterations (Its) for larger models and datasets. We select the best checkpoint based on one-step prediction accuracy then perform a full evaluation using standard VLN path-fidelity metrics [86]}, [87]}: Navigation Error (NE \(\downarrow \) , the average distance in meters between the agent's final position and the goal), Success Rate (SR\(\uparrow \) , the proportion of trajectories with NE \(<\) 3m), Success rate weighted by normalized inverse Path Length (SPL \(\uparrow \) ), normalized dynamic time warping (NDTW\(\uparrow \) ), and success weighted DTW (SDTW\(\uparrow \) ).
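For reference, the two simplest of these metrics can be sketched as below. The benchmark implementations measure distances along the navigation graph rather than straight lines, and NDTW/SDTW additionally require dynamic time warping over the full paths, so this is only an approximate illustration.

import numpy as np

def navigation_error(final_pos, goal_pos) -> float:
    """NE: distance in meters between the agent's stopping point and the goal."""
    return float(np.linalg.norm(np.asarray(final_pos) - np.asarray(goal_pos)))

def success_rate(final_positions, goal_positions, threshold_m: float = 3.0) -> float:
    """SR: fraction of episodes whose navigation error is below 3 m."""
    errors = [navigation_error(f, g) for f, g in zip(final_positions, goal_positions)]
    return sum(e < threshold_m for e in errors) / len(errors)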
Speaker vs. Marky Consistent with previous work, we find that data augmentation with synthetic instructions from the [21]} Speaker model improves performance (row 2 vs. 1, +2% SR on RxR and +8% on R2R), but instructions from Marky [37]} are far more effective (row 4 vs. 1, +27% SR on RxR and +17% on R2R). This is consistent with human evaluations of instruction quality, confirming that improvements in instruction-generation flow through to instruction-following performance. Interestingly, we find that combining the Speaker model with Marky leads to worse performance on both RxR and R2R (row 3 vs. 4, and also row 8 vs. 9), which we attribute to the introduction of noise from the lower-quality Speaker instructions.
Gibson, SE3DS and model size Augmentation with Marky instructions in Gibson environments (row 6 vs. 3) provides a substantial boost (+11% SR on RxR and +12% on R2R), suggesting that the returns from scaling synthetic instructions to more environments are not exhausted. Using SE3DS to synthesize image observations from novel viewpoints improves +6% SR on RxR and +12% on R2R (row 5 vs. 3), but this benefit is substantially reduced (+0% SR on RxR and +2% on R2R, row 7 vs. 6) if Gibson is included, presumably because new environments also increase viewpoint variety. Most experiments use the mT5-base [1]} model; switching to mT5-large provides a further performance boost (+2% SR on RxR and +1% on R2R, row 8 vs. 7). Our best pretraining results on both RxR and R2R are achieved using an mT5-large model with all the previously mentioned data, but leaving out the Speaker instructions (row 9). We use this checkpoint in all finetuning experiments. This agent pretrains for 5.14M iterations, which, using a batch size of 128, represents over 650M steps of experience (over 700M including finetuning).
<TABLE><TABLE>Finetuning In Tables REF and REF we compare results for our MARVAL agent after finetuning to previous work on the RxR and R2R datasets. On both datasets, finetuning with behavioral cloning on just human-annotated data (Finetuned-BC) substantially improves the pretrained model. The improvement from using DAGGER over behavioral cloning is small but consistent. On the RxR dataset, MARVAL outperforms all prior work. Evaluating on novel instruction-trajectories in seen environments (Val-Seen), we improve over the state-of-the-art by 8%, reaching 79.1 NDTW. In new, unseen environments (Test), we improve by 2%, achieving 66.8 NDTW. Self-training with Marky synthetic instructions in the Test environments (a form of privileged access, but still without human annotations) improves performance by an additional 2% to 68.6 NDTW.
RxR vs. R2R On the English-only R2R dataset (Table REF ), MARVAL achieves strong performance but not state-of-the-art. Surprisingly, the Val-Unseen success rate (SR) of 64.8% is the same for both RxR and R2R, whereas typically RxR performance is lower since the trajectories are longer and more varied. Noting that Marky was trained on RxR, we attribute lower relative performance on R2R to domain differences between R2R and RxR. While the average length of instructions in R2R is 26 words, RxR has an average of 87 words — 3 times more. This is partly because RxR instructions are more verbose, often describing objects in more detail and including state verification. Further, cultural differences arising from the data collection process (annotators from USA or from India) may also contribute to the distribution shift due to subtle differences in the vocabulary and structure of language used to form the instructions.
We note however, that while our augmentation approach focuses on scaling up in terms of high quality instructions, EnvEdit [31]} focuses on generalization through augmentation of visual features. These two approaches are likely to be complementary and can provide further improvements on both R2R and RxR.
Conclusion
To our knowledge, this is the first time an imitation learning agent has achieved state-of-the-art or even competitive results on the RxR or R2R benchmarks. This result paves a new path towards improving instruction-following agents, emphasizing large-scale imitation learning with generic architectures, along with a focus on developing synthetic instruction generation capabilities – which are shown to flow through directly to improved instruction-following performance.
Appendix
Limitations and Future Work
As we discuss in the main paper, our approach achieves strong but not state-of-the-art results on the R2R dataset, which
we attribute to domain differences between R2R and RxR (noting that the Marky instruction generator we use for data augmentation was trained on RxR). One way to address this limitation would be by re-training Marky on R2R data, although this would face some hurdles since R2R lacks the annotator pose traces that were used by Marky when training on RxR.
To better understand the failure modes of our approach, in Figure REF we plot a distribution indicating the step in the trajectory where an agent makes its first error. We analyze the RxR Val-Unseen split and compare MARVAL to the previous state-of-the-art approaches, EnvEdit and HAMT, as well as human instruction-following demonstrations from the RxR dataset. MARVAL makes fewer errors than the prior approaches, especially at the start of the trajectory, but also fewer errors than human followers. Since human followers still significantly outperform MARVAL overall – in terms of navigation error (0.79m vs. 4.49m), success rate (94.5 vs. 64.8) and path-fidelity metrics such as NDTW (81.8 vs. 70.8) – this suggests the main focus for future agent improvement should be on recovering from errors, which human wayfinders clearly do extremely well in order to still reach the goal in 94.5% of episodes.
<FIGURE>
Implementation Details
Pretraining In all experiments we train with a batch size of 128 using the AdaFactor optimizer. During pretraining, we use dropout of 0.1 and a learning rate that exponentially-decays from 0.1. We monitor one-step action prediction accuracy on ground-truth trajectories using held-out instructions from unseen environments (RxR Val-Unseen and R2R Val-Unseen). We pretrain until convergence and then select the best snapshot based on one-step action prediction accuracy on RxR Val-Unseen.
Finetuning During finetuning, we use a constant learning rate of 0.001 and dropout of 0.2. Since the human-annotated datasets used for finetuning (RxR and R2R) are relatively small, during finetuning we update only the WordPiece embeddings in the agent and keep all other transformer weights frozen. This makes finetuning more stable and less prone to overfitting (especially on the smaller R2R dataset). We finetune for a maximum of 150K iterations while monitoring standard VLN path-fidelity metrics such as success rate (SR). We select the best snapshot based on SR on Val-Unseen.
Pretraining Results
In Table REF we report pretraining results on the Val-Seen splits, complementing the Val-Unseen results included in the main paper. We observe the same trends in the seen environments as in the new, unseen environments (Val-Unseen), although the relative improvement from using a larger model (row 8 vs. 7) is larger.
<TABLE>
Performance on different languages
<TABLE> | i |
Massive Open Online Courses (MOOCs) have revolutionized education by offering a wide range of educational and professional training. However, an important issue in such a MOOC setting is ensuring an efficient student examination process. Testing with quiz questions has proven to be an effective tool, which can help both learning and student retention [1]}. Yet, preparing such questions is a tedious and time-consuming task, which can take up to 50% of an instructor's time [2]}, especially when a large number of questions are needed in order to prevent students from memorizing and/or leaking the answers.
| i |