Dataset columns:
forum_id: string (length 8 to 20)
forum_title: string (length 1 to 899)
forum_authors: sequence (length 0 to 174)
forum_abstract: string (length 0 to 4.69k)
forum_keywords: sequence (length 0 to 35)
forum_pdf_url: string (length 38 to 50)
forum_url: string (length 40 to 52)
note_id: string (length 8 to 20)
note_type: string (6 classes)
note_created: int64 (1,360B to 1,737B)
note_replyto: string (length 4 to 20)
note_readers: sequence (length 1 to 8)
note_signatures: sequence (length 1 to 2)
venue: string (349 classes)
year: string (12 classes)
note_text: string (length 10 to 56.5k)
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
PPfbPOaNgVPWE
review
1,391,728,860,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "anonymous reviewer b245" ]
ICLR.cc/2014/conference
2014
title: review of k-Sparse Autoencoders review: In this paper, the authors propose a new sparse autoencoder. At training time, the input vector is projected onto a set of filters to produce code values. These code values are sorted and only the top k values are retained; the rest are set to 0 to achieve an exact k-sparse code. Then, the code is used to reconstruct the input by multiplying this code by the transpose of the encoding matrix. The parameters are learned via backpropagation of the squared reconstruction error. The authors relate this algorithm to sparse coding and demonstrate its effectiveness in terms of classification accuracy on the MNIST and NORB datasets. The paper is fairly incremental in its novelty. There are several other papers that used similar ideas. Examples of the most recent ones: - R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, J. Schmidhuber. Compete to Compute. In Proc. Neural Information Processing Systems (NIPS) 2013, Lake Tahoe. - Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations. K. Balasubramanian, K. Yu, G. Lebanon. Proceedings of the 30th International Conference on Machine Learning, 2013. The theoretical analysis is good but straightforward. What worries me the most is that a very important detail of the method (how to prevent 'dead' units) is only briefly mentioned. The problem is that the algorithm guarantees finding k-sparse codes for every input, but it does not guarantee that different codes are used for different inputs. As the code becomes more and more overcomplete (and the filters more and more correlated), there will be more and more units that are not used, making the algorithm rather inefficient. The authors propose to have a schedule on 'k', but that seems rather hacky to me. What happens if the authors use twice as many codes? What happens if they use four times as many codes? My guess is that it will break down easily. My intuition is that this is a very simple and effective method when the code has about the same dimensionality as the input, but that it is less effective in overcomplete settings. This is an issue that can be rather important in practical applications and that should be discussed and better addressed. Pros: - simplicity - clearly written paper Cons: - lack of novelty (see comments above) - lack of comparison - I would add a comparison to A. Coates' method and to K. Gregor's LISTA (or K. Kavukcuoglu's PSD); software for these methods is publicly available. These are the most direct competitors of the proposed method because they also try to compute a good and fast approximation to sparse codes. - the method is a bit flawed because it does not control sparsity across samples (possibly yielding many dead units). It would be very helpful to add experiments with a few different values of code dimensionality. For instance, on MNIST it would be interesting to try 1000, 2000 and 5000. Overall, this is a nicely written paper proposing a simple method that seems to work fairly well. I have concerns about the novelty of this method and its robustness to highly overcomplete settings.
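For readers skimming the reviewer's summary, here is a minimal sketch of one training step of the scheme described above (NumPy; the bias term, selection by magnitude, and learning rate are illustrative assumptions rather than details taken from the paper):

```python
import numpy as np

def ksparse_autoencoder_step(x, W, b, k, lr=0.01):
    """One training step of a k-sparse autoencoder on a single input.

    x : (d,) input vector
    W : (d, m) encoding matrix; its transpose is reused as the decoder
    b : (m,) encoder bias (assumed here; the review only mentions filters)
    k : number of hidden activities to keep
    """
    z = W.T @ x + b                      # project the input onto the filters
    drop = np.argsort(np.abs(z))[:-k]    # everything except the k largest activities
    z_sparse = z.copy()
    z_sparse[drop] = 0.0                 # exact k-sparse code
    x_hat = W @ z_sparse                 # reconstruct with the transposed weights
    err = x_hat - x                      # gradient of 0.5 * squared error w.r.t. x_hat

    # Backpropagate, treating the top-k selection as a fixed mask:
    grad_z = W.T @ err
    grad_z[drop] = 0.0                   # gradient flows only through the kept units
    grad_W = np.outer(err, z_sparse) + np.outer(x, grad_z)
    grad_b = grad_z

    W -= lr * grad_W
    b -= lr * grad_b
    return W, b, float(err @ err)
```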
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
0Mj2MHxs2CA36
review
1,391,148,960,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "Phil Bachman" ]
ICLR.cc/2014/conference
2014
review: This paper should cite 'Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach. This general class of techniques, i.e., marginal regression, is reasonably well known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see 'A Comparison of the Lasso and Marginal Regression', by Genovese et al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting. It might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by its probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted-list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product. Paper 1: http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf Paper 2: http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf
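One possible reading of the 'marginalized dropout' suggestion above, in code (a sketch assuming SciPy is available; the exact probabilistic interpretation, the 50% keep probability, and the epsilon threshold are assumptions):

```python
import numpy as np
from scipy.stats import binom

def marginalized_dropout_shrinkage(m, k, p_keep=0.5):
    """Shrinkage factor for each magnitude-sorted position 1..m.

    A coefficient at sorted position j stays in the top-k of the surviving
    coefficients iff at most k-1 of the j-1 larger coefficients survive
    dropout, so its factor is the Binomial(j-1, p_keep) CDF evaluated at k-1.
    Positions j <= k therefore get a factor of exactly 1.
    """
    j = np.arange(1, m + 1)
    return binom.cdf(k - 1, j - 1, p_keep)

def smooth_encode(z, k, p_keep=0.5, eps=1e-3):
    """Scale magnitude-sorted coefficients instead of hard-thresholding them."""
    order = np.argsort(-np.abs(z))              # indices sorted by decreasing magnitude
    factors = marginalized_dropout_shrinkage(len(z), k, p_keep)
    z_smooth = z.copy()
    z_smooth[order] *= factors                  # a single element-wise product
    z_smooth[np.abs(z_smooth) < eps] = 0.0      # optional hard threshold for 'true' sparsity
    return z_smooth
```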
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
IGP-jflb5FImc
comment
1,392,783,240,000
mOSRjNNtRhj91
[ "everyone" ]
[ "Alireza Makhzani" ]
ICLR.cc/2014/conference
2014
reply: Thank you very much for your feedback. -We will clarify all the ambiguities that you mentioned in the abstract and the algorithm box. This should improve the clarity of the manuscript. -Regarding your question about our reimplementation of the dropout and denoising autoencoder, we performed a hyperparameter search over the dropout rate and the noise level. Further, we found that the results we report are consistent with those in the original papers.
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
55fTgHc4IMg4N
review
1,391,148,960,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "Phil Bachman" ]
ICLR.cc/2014/conference
2014
review: This paper should cite 'Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach. This general class of techniques, i.e., marginal regression, is reasonably well known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see 'A Comparison of the Lasso and Marginal Regression', by Genovese et al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting. It might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by its probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted-list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product. Paper 1: http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf Paper 2: http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
gP7ACWWzucCnT
review
1,391,149,020,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "Phil Bachman" ]
ICLR.cc/2014/conference
2014
review: This paper should cite 'Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach. This general class of techniques, i.e., marginal regression, is reasonably well known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see 'A Comparison of the Lasso and Marginal Regression', by Genovese et al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting. It might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by its probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted-list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product. Paper 1: http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf Paper 2: http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
U_SZ1ftvTIxvg
review
1,391,148,960,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "Phil Bachman" ]
ICLR.cc/2014/conference
2014
review: This paper should cite 'Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach. This general class of techniques, i.e., marginal regression, is reasonably well known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see 'A Comparison of the Lasso and Marginal Regression', by Genovese et al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting. It might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by its probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted-list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product. Paper 1: http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf Paper 2: http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
Wsm0zCOF1isf5
comment
1,390,029,480,000
o1QJezhmJPem2
[ "everyone" ]
[ "Alireza Makhzani" ]
ICLR.cc/2014/conference
2014
reply: Thank you, Markus and David, for your constructive feedback. We read your paper and found its results in line with ours. In your paper, an improved and differentiable sparsity-enforcing projection with respect to Hoyer's sparseness measure is introduced. This projection is used in a supervised online autoencoder whose cost function has an alpha parameter that controls the trade-off between unsupervised and supervised learning. The algorithm has been tested on MNIST and achieves a good classification rate. Your paper is similar to ours in that our hard thresholding operator can be viewed as an L0 projection, and we also use unsupervised learning to pre-train our discriminative neural net. It is true that by using a more complicated thresholding operator in the sparse recovery stage (as in your paper), we could obtain better performance on datasets such as MNIST. For example, in Donoho and Maleki's 'approximate message passing' paper, it has been shown that a variant of the iterative hard thresholding algorithm that uses a more complex thresholding operator derived from message passing can beat even convex relaxation methods in sparse recovery, in both performance and complexity. However, as we discuss in the paper, the main motivation of this work is to propose a fast sparse coding algorithm on GPUs that can be applied to larger datasets. We have shown that only the first few iterations of IHT, combined with a dictionary update stage, are enough to get close to state-of-the-art results. IHT only requires a matrix multiplication in the sparse coding stage, which makes it very fast on GPUs. We have discussed how the iterations of the IHT algorithm can be viewed in the context of our proposed k-sparse autoencoder and how we can use it to pre-train deep neural nets. Regarding incoherence, Theorem 3.1 establishes a connection between the incoherence of the dictionary and the chances of finding the true support set with the encoder part of the k-sparse autoencoder. We experimentally observe that k-sparse autoencoders converge to a local minimum. At this local minimum, the autoencoder has learnt a sparse code z that satisfies x = Wz, and the support set of this sparse code can be estimated using supp(W.T x). Since supp(W.T x) succeeds in finding the support set, Theorem 3.1 implies that the atoms of the dictionary are well separated from each other and that the learnt dictionary is sufficiently incoherent. If we were considering just one single training example, then, as you mentioned, there could be other reasons the support set is recovered while the dictionary is coherent. But the only way the k-sparse autoencoder can recover the true support set for all training examples is for the dictionary to be sufficiently incoherent. To measure the incoherence of the dictionary, we obtained the following result (see 'Deterministic Compressed Sensing' by Jafarpour & Calderbank). We have experimentally shown that the k-sparse autoencoder learns a dictionary with which we can actually solve sparse recovery problems for other sparse signals, not just the training points. We first learned a dictionary W of size 784*1000 on MNIST using the k-sparse autoencoder with k = 20. We then picked k elements of z uniformly at random, set them to one, and set the rest of the elements to zero. Then we computed x = Wz and tried to reconstruct the support set of z from x using the encoder of the k-sparse autoencoder.
We found that the trained dictionary was able to recover the support set of these arbitrary sparse signals perfectly. This is only possible when the dictionary is sufficiently incoherent and the atoms (features) are well separated from each other. Regarding the scheduling: when we have 100 epochs, k follows a linear function for epochs 1 to 50 and then remains at its minimum level for epochs 51 to 100. In our experiments, increasing k always helped to avoid dead hidden units. Another option is to add a KL penalty to the cost function of the autoencoder, as in Ng's sparse autoencoder paper. This KL penalty encourages each hidden unit to be active only a small number of times across the training set and avoids the problem of dead hidden units. Thanks, Markus and David, for pointing out the typos. We will correct them in the final manuscript.
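A minimal sketch of the support-recovery check described in this reply (NumPy; the number of trials and the use of magnitudes when ranking the entries of W.T x are assumptions):

```python
import numpy as np

def support_recovery_rate(W, k=20, n_trials=1000, rng=None):
    """Empirical check of the experiment described above.

    W : (d, m) dictionary learned by the k-sparse autoencoder (e.g. 784 x 1000).
    For each trial, draw a synthetic k-sparse code z with k entries set to 1,
    form x = W z, and test whether the k largest-magnitude entries of W.T x
    recover exactly the support of z.
    """
    rng = np.random.default_rng() if rng is None else rng
    d, m = W.shape
    successes = 0
    for _ in range(n_trials):
        support = rng.choice(m, size=k, replace=False)
        z = np.zeros(m)
        z[support] = 1.0
        x = W @ z
        estimated = np.argsort(np.abs(W.T @ x))[-k:]
        successes += int(set(estimated) == set(support))
    return successes / n_trials
```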
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
22GAEiwLF02Cg
review
1,391,149,020,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "Phil Bachman" ]
ICLR.cc/2014/conference
2014
review: This paper should cite 'Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach. This general class of techniques, i.e., marginal regression, is reasonably well known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see 'A Comparison of the Lasso and Marginal Regression', by Genovese et al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting. It might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by its probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted-list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product. Paper 1: http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf Paper 2: http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
Np1RNxW_L5pw2
comment
1,389,051,900,000
Gg_aFHZkvdGNC
[ "everyone" ]
[ "Alireza Makhzani" ]
ICLR.cc/2014/conference
2014
reply: Thank you, David, for raising this concern. OMP combined with a dictionary update stage (as in Coates and Ng) is the conventional way of doing sparse coding and is discussed in the introduction of our paper. The problem with OMP is that it is very slow in practical situations. It uses k iterations, and at each iteration it needs to project a residual error onto the orthogonal complement of the subspace spanned by the dictionary elements selected at that iteration. This is a very costly matrix operation, as it requires k matrix inversions for every training example we visit. It works on small datasets such as MNIST, but we were not able to get it working properly on larger datasets such as NORB, as it was too slow. Indeed, the main motivation for proposing the k-sparse autoencoder is to address sparse coding for larger problem sizes, e.g., NORB, where conventional sparse coding approaches such as OMP are not practical. We have shown in the paper that the k-sparse autoencoder is an approximation of iterative hard thresholding (IHT). IHT is much faster than OMP, as it only needs a matrix multiplication in the sparse coding stage, which can be efficiently implemented on a GPU. We have also approximated the dictionary update stage with a single gradient descent step, which makes the algorithm very fast. We have discussed in the paper that even with a very naive approximation of IHT, we achieve a fast sparse coding algorithm on GPUs that performs as well as the state of the art. By tuning the number of iterations of the IHT algorithm, we can learn better dictionaries and trade off classification performance against computation time. Therefore, although both the k-sparse autoencoder and OMP-k enforce k-sparsity in the hidden representation, they are inherently different in both the dictionary learning and encoding stages. Our thresholding operator is also different from that of Coates and Ng. The main difference is that we directly use the operator in both the training and test stages to gain speed-ups, while in Coates and Ng's method the thresholding operator is only used at test time and training is performed using other algorithms, such as OMP-k, which are very slow. Another difference is that they use a fixed, pre-defined soft thresholding operator and do not have control over the sparsity level, whereas we use a hard thresholding operator whose threshold is adaptive and equal to the k-th largest element of the input.
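For concreteness, a sketch of the IHT-style encoding and the single-gradient-step dictionary update described above (NumPy; the step size, iteration count, and batch-averaged gradient are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def iht_sparse_code(x, W, k, n_iters=3, step=1.0):
    """A few iterations of iterative hard thresholding (IHT) for one input.

    Each iteration is a matrix multiplication followed by keeping the k
    largest-magnitude entries, which is what makes it cheap on a GPU.
    """
    z = np.zeros(W.shape[1])
    for _ in range(n_iters):
        z = z + step * (W.T @ (x - W @ z))   # gradient step on 0.5 * ||x - W z||^2
        drop = np.argsort(np.abs(z))[:-k]
        z[drop] = 0.0                        # hard threshold to a k-sparse code
    return z

def dictionary_update(X, Z, W, lr=0.01):
    """Single gradient-descent step on the batch reconstruction error,
    standing in for a full dictionary update (X: (d, n), Z: (m, n))."""
    grad_W = (W @ Z - X) @ Z.T / X.shape[1]
    return W - lr * grad_W
```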
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
-1_6TtCJ3iKOz
review
1,391,149,020,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "Phil Bachman" ]
ICLR.cc/2014/conference
2014
review: This paper should cite 'Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach. This general class of techniques, i.e., marginal regression, is reasonably well known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see 'A Comparison of the Lasso and Marginal Regression', by Genovese et al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting. It might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by its probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted-list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product. Paper 1: http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf Paper 2: http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf
__Jk_HAdtfK5W
Learning to encode motion using spatio-temporal synchrony
[ "Kishore Reddy Konda", "Roland Memisevic", "Vincent Michalski" ]
We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative 'gating' interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
[ "synchrony", "motion", "features", "detection", "possible", "task", "videos", "end", "spatial transformations", "image sequence" ]
https://openreview.net/pdf?id=__Jk_HAdtfK5W
https://openreview.net/forum?id=__Jk_HAdtfK5W
6m_xmeudQa47M
comment
1,392,129,720,000
nOAHIb1E0y2d-
[ "everyone" ]
[ "kishore reddy" ]
ICLR.cc/2014/conference
2014
reply: - The formula (20) looks remarkably similar to the one in the paper you reference (and that beats your performance) - sec 3.1 of 'Learning hierarchical spatio-temporal features for action recognition with independent subspace analysis'. The only difference is mainly that you use a sigmoid nonlinearity and they use a square root. There is an additional difference: inside the sigmoid there is a square, whereas it is a sum over squares in Le et al. In other words, for them, learning of features is done in the presence of a pooling layer, whereas here, learning is done greedily, layer by layer. Running the code published by Le et al., incidentally, yields the same performance as our approach. Our method is significantly faster because it is a type of K-means. - 4.2 - which autoencoder are you comparing to? There is a large number of them. In particular, what if you just used k-means on concatenated frames? That would also detect 'coincidence' between frames. It would also train extremely quickly (and then use a smoother version during inference, as in Coates et al.). We now report the K-means result on concatenated frames in the newest version. It shows significantly lower performance. As for the autoencoder, it was a contractive autoencoder, with detailed results now included in the newest version.
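To make the contrast in this exchange concrete, a schematic sketch of the two nonlinearities (this is not the paper's actual formula (20); the filter responses are assumed to be precomputed):

```python
import numpy as np

def sigmoid_of_square(z):
    """Schematic per-unit response: a sigmoid applied to a single squared
    filter response, as described for the synchrony model above."""
    return 1.0 / (1.0 + np.exp(-z ** 2))

def sqrt_of_sum_of_squares(zs):
    """Schematic ISA-style response (Le et al.): the square root of a sum of
    squared filter responses pooled over a subspace."""
    zs = np.asarray(zs)
    return np.sqrt(np.sum(zs ** 2))
```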
__Jk_HAdtfK5W
Learning to encode motion using spatio-temporal synchrony
[ "Kishore Reddy Konda", "Roland Memisevic", "Vincent Michalski" ]
We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative 'gating' interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
[ "synchrony", "motion", "features", "detection", "possible", "task", "videos", "end", "spatial transformations", "image sequence" ]
https://openreview.net/pdf?id=__Jk_HAdtfK5W
https://openreview.net/forum?id=__Jk_HAdtfK5W
55DTZ7VOi9sAR
review
1,392,130,020,000
__Jk_HAdtfK5W
[ "everyone" ]
[ "kishore reddy" ]
ICLR.cc/2014/conference
2014
review: We thank all the reviewers for their valuable comments. We uploaded a new version of the paper with some of the suggested modifications.
__Jk_HAdtfK5W
Learning to encode motion using spatio-temporal synchrony
[ "Kishore Reddy Konda", "Roland Memisevic", "Vincent Michalski" ]
We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative 'gating' interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
[ "synchrony", "motion", "features", "detection", "possible", "task", "videos", "end", "spatial transformations", "image sequence" ]
https://openreview.net/pdf?id=__Jk_HAdtfK5W
https://openreview.net/forum?id=__Jk_HAdtfK5W
G3BpoZVjqn3jE
review
1,390,089,480,000
__Jk_HAdtfK5W
[ "everyone" ]
[ "anonymous reviewer 21fc" ]
ICLR.cc/2014/conference
2014
title: review of Learning to encode motion using spatio-temporal synchrony review: One might think that everything has been said about motion estimation and motion energies, though refreshing ideas are always welcome, even in subjects with thousands (tens of thousands?) of papers. The paper's overall presentation and discussion are very clear and friendly. When discussing multiplicative interactions (2.3), isn't this saying we should be working with additive interactions but in log space? Log space is very frequently used in image analysis. The locality property is very interesting and, as the authors claim, very powerful for computation. I believe it is worth investigating for other applications and models. Not all the mentioned results are the state of the art for the datasets used, but this is not critical. Does it make sense to report results for standard optical flow sets? While I don't believe the paper is revolutionary, it is nice to read and has some interesting insights.
__Jk_HAdtfK5W
Learning to encode motion using spatio-temporal synchrony
[ "Kishore Reddy Konda", "Roland Memisevic", "Vincent Michalski" ]
We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative 'gating' interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
[ "synchrony", "motion", "features", "detection", "possible", "task", "videos", "end", "spatial transformations", "image sequence" ]
https://openreview.net/pdf?id=__Jk_HAdtfK5W
https://openreview.net/forum?id=__Jk_HAdtfK5W
8W5tCE6_t08Gz
comment
1,392,129,600,000
WUNsa6OJq07iF
[ "everyone" ]
[ "kishore reddy" ]
ICLR.cc/2014/conference
2014
reply: - I would explain the resulting model as simply 'squared spatio-temporal Gabor filters followed by k-means pooling.' We also show that, unlike for still images, one does not get Gabor features unless squaring activations are used in the hidden units during learning (which is equivalent to using linear coefficients for computing reconstructions during learning). This also explains why some variants of ICA and energy models (like ISA) can work on videos, but standard autoencoders or K-means do not. - Olshausen 2003, Learning Sparse, Overcomplete Representations of Time-Varying Natural Images We included this reference in the newest version. We also extended the introduction to discuss this and related papers (third and following paragraphs of the introduction). - confused about some details on the model architecture used for SAE and SK-means We rewrote the paper to make this clearer. SAE is now discussed in a separate appendix, and SK-means in the main text. - You argue for even-symmetric non-linearities, but couldn't a rectified linear unit also produce the desired effect? However, it would not associate the contrast reversal, which may be the better inductive bias. We agree. - In the current form I remain uncertain what the benefit is of the proposed approach over the motion energy approach or ISA. K-means does pool over more than 2 features, but this is to be expected given the architecture. Can you show that 2-d subspaces are sub-optimal in terms of performance? You do show that the covAE underperforms, why? Is the Le et al. 2011 pipeline giving the major benefit? We did find that the covAE also underperforms in a simple motion direction classification task. We believe this is due to the learning criterion, which forces the model to learn invariances that may be good for reconstruction but not necessarily good for the subsequent classification task. The convolutional pipeline provides around 3% higher accuracy than a simple bag-of-words pipeline. In the case of covAE it does not provide such a benefit. - In my opinion, the most novel contribution of the paper is the 'synchrony k-means' algorithm. As far as I am aware this is a nice extension of the fast convergence results shown by Coates et al. This general framework appears promising for the related problems of 'relating images.' Therefore it could be the focus of the paper, with the SAE algorithm used merely as a control or motivation. We agree, see the previous comment. - I am also skeptical about performance measurements of activity recognition as a proxy for motion encoding. These datasets likely suffer from various confounds not related to motion encoding. Maybe a simpler test would be beneficial, scientifically? We agree that activity recognition, with a fixed pipeline to plug learned features into, may not be a perfect proxy for the quality of a motion encoding, but recognition performance is probably correlated with it. And it does nicely demonstrate the possible practical benefits of this work, because our features can be learned much faster than with existing models, and speed is increasingly important due to the sheer size of video datasets.
__Jk_HAdtfK5W
Learning to encode motion using spatio-temporal synchrony
[ "Kishore Reddy Konda", "Roland Memisevic", "Vincent Michalski" ]
We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative 'gating' interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
[ "synchrony", "motion", "features", "detection", "possible", "task", "videos", "end", "spatial transformations", "image sequence" ]
https://openreview.net/pdf?id=__Jk_HAdtfK5W
https://openreview.net/forum?id=__Jk_HAdtfK5W
WUNsa6OJq07iF
review
1,391,111,880,000
__Jk_HAdtfK5W
[ "everyone" ]
[ "anonymous reviewer 951b" ]
ICLR.cc/2014/conference
2014
title: review of Learning to encode motion using spatio-temporal synchrony review: The paper introduces a variation of common mathematical forms for encoding motion. The basic approach is to encode first-order motion information through a multiplication of two filter outputs. This approach is closely related to the motion-energy model and the cross-correlation model. There is a lot of math leading up to a rather simple transform (after learning). I would explain the resulting model as simply 'squared spatio-temporal Gabor filters followed by k-means pooling.' The squared outputs are the components of the energy-based model (quadrature-pair squared spatio-temporal Gabor filters are added in the energy-based model), and k-means is used to pool first-order motion-selective responses. The more general problem of motion encoding needs to address the goals of motion selectivity and form or pattern invariance/tolerance. As the authors point out, their squared outputs do not solve the motion encoding problem, and the following pooling layer is intended to provide the solution. The simplest next step would be to combine (add) two outputs to increase form invariance (this is the motion energy model). Slightly more complex would be to group multiple squared outputs (these are the ISA models applied to spatio-temporal input). The authors propose to use k-means for the pooling operation. The results are validated on common action recognition computer vision databases. I do not find the results surprising. Given the very close similarities of the proposed algorithm to the Le et al. ISA algorithm, it is not at all surprising that the results are nearly identical. The training time improvements are also not surprising given the results from Coates et al. 2011. This paper should probably be cited in regards to the learned filters: Olshausen 2003, Learning Sparse, Overcomplete Representations of Time-Varying Natural Images. There are other results in the ICA community that should be cited, as they give similar results. I am confused about some details of the model architecture used for SAE and SK-means. The beginning of section 4 appears to describe only the SAE architecture. What about the SK-means architecture? And how does the k-means in section 3.4 relate to the models evaluated in section 4? My apologies if this is discussed somewhere in the text; I just can't find it in the places I expect. Here are some suggestions/comments: The exposition is useful for bringing together many of the motion models in the literature. You argue for even-symmetric non-linearities, but couldn't a rectified linear unit also produce the desired effect? However, it would not associate the contrast reversal, which may be the better inductive bias. In the current form I remain uncertain what the benefit is of the proposed approach over the motion energy approach or ISA. K-means does pool over more than 2 features, but this is to be expected given the architecture. Can you show that 2-d subspaces are sub-optimal in terms of performance? You do show that the covAE underperforms, why? Is the Le et al. 2011 pipeline giving the major benefit? In my opinion, the most novel contribution of the paper is the 'synchrony k-means' algorithm. As far as I am aware this is a nice extension of the fast convergence results shown by Coates et al. This general framework appears promising for the related problems of 'relating images.' Therefore it could be the focus of the paper, with the SAE algorithm used merely as a control or motivation.
I am also skeptical about performance measurements of activity recognition as a proxy for motion encoding. These datasets likely suffer from various confounds not related to motion encoding. Maybe a simpler test would be beneficial, scientifically?
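The motion-energy and ISA-style pooling schemes the reviewer contrasts can be sketched as follows (NumPy; the filter arrays and patch shapes are illustrative assumptions):

```python
import numpy as np

def motion_energy(video_patch, gabor_even, gabor_odd):
    """Classic motion-energy response: the squared outputs of a quadrature pair
    of spatio-temporal filters, added together.

    video_patch, gabor_even, gabor_odd : arrays of identical (t, h, w) shape;
    the two filters are assumed to form a 90-degree phase-shifted Gabor pair.
    """
    even = np.sum(video_patch * gabor_even)   # even-phase filter response
    odd = np.sum(video_patch * gabor_odd)     # odd-phase filter response
    return even ** 2 + odd ** 2               # phase-invariant motion energy

def isa_pooling(video_patch, filter_bank):
    """ISA-style generalisation: pool squared responses over a larger group of filters."""
    responses = np.array([np.sum(video_patch * f) for f in filter_bank])
    return np.sqrt(np.sum(responses ** 2))
```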
__Jk_HAdtfK5W
Learning to encode motion using spatio-temporal synchrony
[ "Kishore Reddy Konda", "Roland Memisevic", "Vincent Michalski" ]
We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative 'gating' interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
[ "synchrony", "motion", "features", "detection", "possible", "task", "videos", "end", "spatial transformations", "image sequence" ]
https://openreview.net/pdf?id=__Jk_HAdtfK5W
https://openreview.net/forum?id=__Jk_HAdtfK5W
nOAHIb1E0y2d-
review
1,391,867,580,000
__Jk_HAdtfK5W
[ "everyone" ]
[ "anonymous reviewer 4272" ]
ICLR.cc/2014/conference
2014
title: review of Learning to encode motion using spatio-temporal synchrony review: The paper introduces an algorithm to learn to detect motion from video data using the coincidence of features between consecutive frames. While there are novel elements, the basic ideas are similar to earlier papers by the same authors and to the algorithm by other authors referenced in this paper. Details: - The formula (20) looks remarkably similar to the one in the paper you reference (and that beats your performance) - sec 3.1 of 'Learning hierarchical spatio-temporal features for action recognition with independent subspace analysis'. The only difference is mainly that you use a sigmoid nonlinearity and they use a square root. - 4.2 - which autoencoder are you comparing to? There is a large number of them. In particular, what if you just used k-means on concatenated frames? That would also detect 'coincidence' between frames. It would also train extremely quickly (and then use a smoother version during inference, as in Coates et al.). - In Table 1, is this on sequences of frames or on pairs? (3.1, 3.2 vs 3.3?)
__Jk_HAdtfK5W
Learning to encode motion using spatio-temporal synchrony
[ "Kishore Reddy Konda", "Roland Memisevic", "Vincent Michalski" ]
We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative 'gating' interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
[ "synchrony", "motion", "features", "detection", "possible", "task", "videos", "end", "spatial transformations", "image sequence" ]
https://openreview.net/pdf?id=__Jk_HAdtfK5W
https://openreview.net/forum?id=__Jk_HAdtfK5W
EEWiEwC6o3TY4
comment
1,392,129,120,000
G3BpoZVjqn3jE
[ "everyone" ]
[ "kishore reddy" ]
ICLR.cc/2014/conference
2014
reply: -When discussing multiplicative interactions (2.3), isn't this saying we should be working additively but in log space? Log space is very frequently used in image analysis. Yes, that's true. Unfortunately, for learning, one would need to undo any log-transforms to compute reconstructions, so this relationship does not immediately translate into a practical algorithm. -Does it make sense to report results on standard optical flow sets? Yes, it may make sense, though it would require a clean-up stage (like an MRF) to be competitive in benchmarks. The model is more general than optical flow, in that it allows pixels to have multiple target positions (as in expansions or transparency, for example). This generality, though, is likely to hurt rather than help in an optical flow benchmark.
UYmwU4C1wZi16
Feature Graph Architectures
[ "Richard Davis", "Sanjay Chawla", "Philip Leong" ]
In this article we propose feature graph architectures (FGA), which are deep learning systems employing a structured initialisation and training method based on a feature graph which facilitates improved generalisation performance compared with a standard shallow architecture. The goal is to explore alternative perspectives on the problem of deep network training. We evaluate FGA performance for deep SVMs on some experimental datasets, and show how generalisation and stability results may be derived for these models. We describe the effect of permutations on the model accuracy, and give a criterion for the optimal permutation in terms of feature correlations. The experimental results show that the algorithm produces robust and significant test set improvements over a standard shallow SVM training method for a range of datasets. These gains are achieved with a moderate increase in time complexity.
[ "feature graph", "feature graph architectures", "article", "fga", "deep", "systems", "structured initialisation", "training", "improved generalisation performance", "standard shallow architecture" ]
https://openreview.net/pdf?id=UYmwU4C1wZi16
https://openreview.net/forum?id=UYmwU4C1wZi16
aK7XSqgON9aOi
review
1,391,403,660,000
UYmwU4C1wZi16
[ "everyone" ]
[ "anonymous reviewer bbb2" ]
ICLR.cc/2014/conference
2014
title: review of Feature Graph Architectures review: * A brief summary of the paper's contributions, in the context of prior work. The paper suggests stacking multiple machine learning modules on top of each other, and applies this to SVMs. * An assessment of novelty and quality. Not novel; quality is low. * A list of pros and cons (reasons to accept/reject). pros: Exploration of non-standard deep architectures. Good direction of research. cons: The paper suggests randomly establishing groups of features to find correlated groups. This task might be extremely expensive and infeasible. Very poor experiments: one experiment on synthetic data, which is trivial to learn ((sum x_i)^2), and others on unknown datasets. It is unclear where the non-linearity is coming from. If these are just SVMs stacked on top of each other with no non-linearity in between, then the entire procedure is just a linear classifier with regularization. The paper doesn't state the optimization objective of the entire system; it just presents an algorithm. The dimensionality of the data is extremely small (~25 dims).
UYmwU4C1wZi16
Feature Graph Architectures
[ "Richard Davis", "Sanjay Chawla", "Philip Leong" ]
In this article we propose feature graph architectures (FGA), which are deep learning systems employing a structured initialisation and training method based on a feature graph which facilitates improved generalisation performance compared with a standard shallow architecture. The goal is to explore alternative perspectives on the problem of deep network training. We evaluate FGA performance for deep SVMs on some experimental datasets, and show how generalisation and stability results may be derived for these models. We describe the effect of permutations on the model accuracy, and give a criterion for the optimal permutation in terms of feature correlations. The experimental results show that the algorithm produces robust and significant test set improvements over a standard shallow SVM training method for a range of datasets. These gains are achieved with a moderate increase in time complexity.
[ "feature graph", "feature graph architectures", "article", "fga", "deep", "systems", "structured initialisation", "training", "improved generalisation performance", "standard shallow architecture" ]
https://openreview.net/pdf?id=UYmwU4C1wZi16
https://openreview.net/forum?id=UYmwU4C1wZi16
gWPx76RhvA7ue
review
1,391,999,400,000
UYmwU4C1wZi16
[ "everyone" ]
[ "anonymous reviewer 1e36" ]
ICLR.cc/2014/conference
2014
title: review of Feature Graph Architectures review: The authors propose a hierarchical SVM approach for learning. The method uses the predictions of lower-layer SVMs as input to the higher-layer SVMs. Adaptive methods are proposed where lower nodes are updated incrementally in order to improve the overall accuracy. I could not find what the proposed algorithm exactly does or what exact function is implemented by the feature graph architecture. I assume that the authors use linear SVMs for regression. Then, should not a combination of these linear SVMs also be linear? Or is a nonlinear kernel being used? The overall lack of specification of the proposed approach makes it difficult to compare with existing alternatives (hierarchical kernels, boosting, neural networks). The paper is rich content-wise, including generalization bounds and stability analysis. The authors derive a generalization bound for the feature graph architecture, showing that the surplus error grows linearly with the number of modified nodes in the graph. The generalization bound does not seem to be very tight, as the authors show empirically that generalization is on par with basic SVMs. The authors propose maximizing feature correlation in the first layer as a heuristic to construct the feature hierarchy. This is generally a good idea, but I am wondering whether having only one output per group of correlated features is sufficient to learn complex models.
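A sketch of the reviewer's reading of the architecture — leaf SVMs trained on feature subsets, with an internal node trained on their predictions — using scikit-learn's LinearSVR. The two-layer structure and all names are illustrative assumptions; this is not the authors' training procedure (which also includes incremental node updates and an initialisation from a shallow SVM).

```python
import numpy as np
from sklearn.svm import LinearSVR

def fit_two_layer_svr(X, y, groups):
    # Leaf SVRs on disjoint feature groups; a root SVR on their stacked predictions.
    leaves = [LinearSVR().fit(X[:, g], y) for g in groups]
    Z = np.column_stack([m.predict(X[:, g]) for m, g in zip(leaves, groups)])
    root = LinearSVR().fit(Z, y)
    return leaves, root

def predict_two_layer_svr(X, groups, leaves, root):
    Z = np.column_stack([m.predict(X[:, g]) for m, g in zip(leaves, groups)])
    return root.predict(Z)
```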
UYmwU4C1wZi16
Feature Graph Architectures
[ "Richard Davis", "Sanjay Chawla", "Philip Leong" ]
In this article we propose feature graph architectures (FGA), which are deep learning systems employing a structured initialisation and training method based on a feature graph which facilitates improved generalisation performance compared with a standard shallow architecture. The goal is to explore alternative perspectives on the problem of deep network training. We evaluate FGA performance for deep SVMs on some experimental datasets, and show how generalisation and stability results may be derived for these models. We describe the effect of permutations on the model accuracy, and give a criterion for the optimal permutation in terms of feature correlations. The experimental results show that the algorithm produces robust and significant test set improvements over a standard shallow SVM training method for a range of datasets. These gains are achieved with a moderate increase in time complexity.
[ "feature graph", "feature graph architectures", "article", "fga", "deep", "systems", "structured initialisation", "training", "improved generalisation performance", "standard shallow architecture" ]
https://openreview.net/pdf?id=UYmwU4C1wZi16
https://openreview.net/forum?id=UYmwU4C1wZi16
2bIiNpLOMP2fT
review
1,391,832,240,000
UYmwU4C1wZi16
[ "everyone" ]
[ "Richard Davis" ]
ICLR.cc/2014/conference
2014
review: A few comments in response to the above points: 1. We think the novel aspects are a) the use of a deep objective function, in which each node is separately optimized b) the additional objective that the global training error must improve to retain node modifications, c) the initialization of the deep SVM to the coefficients of a shallow SVM which guarantees the starting point is a good one, and d) for deep learning in regression, feature learning at a node level can be effectively done by training the node to the target. We are not claiming that the novelty lies in stacking machine learning modules, which is a commonplace technique. Rather, the main idea is that the deep objective function can produce significantly better results even with a deep SVM using linear kernels (which as you correctly point out is just a linear function), which we feel is an interesting result. 2. Learning a non-linear function such as (sum x_i)^2 using linear building blocks is non-trivial, and the method significantly outperformed the other competing methods, some of which were non-linear. We tested other functions with more terms and found similar results. 3. We plan to test the method on a much wider range of datasets but the ones we tested are regression datasets from a well-known repository, http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. 4. We used datasets of dimension ~25 to show that the technique can improve over a standard SVM even for datasets of relatively small dimension. 5. The method did show some benefits on a few classification tests but we decided to focus on regression first. 6. Finding correlated groups of features can be done quite easily using pairwise correlation tests. It's not necessary to find the optimal permutation; significant gains over the shallow SVM can be achieved even with a fairly good permutation. 7. overline denotes a mean over the node outputs. Each SVM is trained using the output of the previous layer, scaling the target so that it has the correct range using the output mean from the current node. 8. Adding normalized errors would help with the presentation and can be done easily. 9. We achieved significant gains in out-of-sample error performance using the technique. If you have time we would be very grateful to hear your thoughts on these areas.
UYmwU4C1wZi16
Feature Graph Architectures
[ "Richard Davis", "Sanjay Chawla", "Philip Leong" ]
In this article we propose feature graph architectures (FGA), which are deep learning systems employing a structured initialisation and training method based on a feature graph which facilitates improved generalisation performance compared with a standard shallow architecture. The goal is to explore alternative perspectives on the problem of deep network training. We evaluate FGA performance for deep SVMs on some experimental datasets, and show how generalisation and stability results may be derived for these models. We describe the effect of permutations on the model accuracy, and give a criterion for the optimal permutation in terms of feature correlations. The experimental results show that the algorithm produces robust and significant test set improvements over a standard shallow SVM training method for a range of datasets. These gains are achieved with a moderate increase in time complexity.
[ "feature graph", "feature graph architectures", "article", "fga", "deep", "systems", "structured initialisation", "training", "improved generalisation performance", "standard shallow architecture" ]
https://openreview.net/pdf?id=UYmwU4C1wZi16
https://openreview.net/forum?id=UYmwU4C1wZi16
v23y6jYd8a2B5
review
1,392,121,260,000
UYmwU4C1wZi16
[ "everyone" ]
[ "Richard Davis" ]
ICLR.cc/2014/conference
2014
review: In response to the above comments: 1. The main idea is a mechanism which forms a hierarchy of new features which are decorrelated successively from one layer to the next. This mechanism is the reason why the deep architecture is able to decorrelate features better than using PCA in combination with a simple SVM. We thought the novel aspects were the following: a) the use of a deep objective function, in which each node is separately optimized b) the additional objective that the global training error must improve to retain node modifications, c) the initialization of the deep SVM to the coefficients of a shallow SVM which guarantees the starting point is a good one, and d) for deep learning in regression, feature learning at a node level can be effectively done by training the node to the target. We are not claiming that the novelty lies in stacking machine learning modules, which is a commonplace technique. Rather, the multi-layer objective function can produce significantly better results even using layers of linear SVMs (which overall is just a linear function), which we thought was an interesting result. 2. We tested regression datasets from a well-known repository, http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. 3. Learning a non-linear function such as (sum x_i)^2 using linear building blocks is non-trivial, and the method significantly outperformed the other competing methods, some of which were non-linear. We tested other functions with more terms and found similar results. 4. We used datasets of dimension ~25 to show that the technique can improve over a standard SVM even for datasets of relatively small dimension. 5. The method did show some benefits on a few classification tests but we decided to focus on regression first. 6. Finding correlated groups of features can be done quite easily using pairwise correlation tests. It's not necessary to find the optimal permutation; significant gains over the shallow SVM can be achieved even with a fairly good permutation. 7. overline denotes a mean over the node outputs. Each SVM is trained using the output of the previous layer, scaling the target so that it has the correct range using the output mean from the current node. 8. Adding normalized errors would help with the presentation and can be done easily. 9. We achieved significant gains in out-of-sample error performance using the technique. 10. The gains from the method over a standard shallow SVM are dataset-dependent. If the data have completely uncorrelated features, the method will not improve over the shallow SVM. However, if some features are correlated, as is usually the case, the method is able to improve over the shallow SVM using the architecture. 11. The method gives a way to move from a shallow to a deep model incrementally so that at each stage the model improves. It has multiple outputs per group of correlated features, since there are multiple layers in the model. This multilayer structure allows for more complex models to be built.
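For point 6 above, a small sketch of how features could be paired by pairwise correlation — greedily matching each remaining feature with its most correlated partner. This is only an illustration of the idea; the function name and the greedy strategy are assumptions, not the authors' code.

```python
import numpy as np

def pair_features_by_correlation(X):
    # Greedily pair each remaining feature with its most correlated partner.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(corr, -1.0)
    unused = set(range(X.shape[1]))
    pairs = []
    while len(unused) > 1:
        i = unused.pop()
        j = max(unused, key=lambda k: corr[i, k])
        unused.remove(j)
        pairs.append((i, j))
    return pairs
```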
UYmwU4C1wZi16
Feature Graph Architectures
[ "Richard Davis", "Sanjay Chawla", "Philip Leong" ]
In this article we propose feature graph architectures (FGA), which are deep learning systems employing a structured initialisation and training method based on a feature graph which facilitates improved generalisation performance compared with a standard shallow architecture. The goal is to explore alternative perspectives on the problem of deep network training. We evaluate FGA performance for deep SVMs on some experimental datasets, and show how generalisation and stability results may be derived for these models. We describe the effect of permutations on the model accuracy, and give a criterion for the optimal permutation in terms of feature correlations. The experimental results show that the algorithm produces robust and significant test set improvements over a standard shallow SVM training method for a range of datasets. These gains are achieved with a moderate increase in time complexity.
[ "feature graph", "feature graph architectures", "article", "fga", "deep", "systems", "structured initialisation", "training", "improved generalisation performance", "standard shallow architecture" ]
https://openreview.net/pdf?id=UYmwU4C1wZi16
https://openreview.net/forum?id=UYmwU4C1wZi16
DeInCVqfH-CS9
review
1,391,703,720,000
UYmwU4C1wZi16
[ "everyone" ]
[ "anonymous reviewer 07bc" ]
ICLR.cc/2014/conference
2014
title: review of Feature Graph Architectures review: This paper presents a tree-structured architecture whose leaves are SVMs on subsets of attributes and whose internal nodes are SVMs taking as input the predictions computed by the children. The generality of the method is unclear: you seem to implicitly assume regression problems but this is never explicitly mentioned. Could the method work for classification? The significance seems modest to me. First, it is unclear in what sense the algorithm performs feature learning, as the intermediate layers in fact contain predictions. Additionally, all the experiments use a linear kernel and thus, from what I understand from Section 3, the tree in Figure 1 computes a linear function of its input. Clearly I must be missing something, otherwise this would not be a deep learning system at all. But the presentation should be improved to clarify this. The notation is also confusing; for example, does overline{y} denote a mean? Over what quantities exactly (targets or predictions)? Pseudo-code 2 seems to contradict Figure 1 as each SVM is trained on inputs x (again probably a notational issue). The experiments are only preliminary and based on small data sets. The reported R and hat{R} seem to be unnormalized; why?
UYmwU4C1wZi16
Feature Graph Architectures
[ "Richard Davis", "Sanjay Chawla", "Philip Leong" ]
In this article we propose feature graph architectures (FGA), which are deep learning systems employing a structured initialisation and training method based on a feature graph which facilitates improved generalisation performance compared with a standard shallow architecture. The goal is to explore alternative perspectives on the problem of deep network training. We evaluate FGA performance for deep SVMs on some experimental datasets, and show how generalisation and stability results may be derived for these models. We describe the effect of permutations on the model accuracy, and give a criterion for the optimal permutation in terms of feature correlations. The experimental results show that the algorithm produces robust and significant test set improvements over a standard shallow SVM training method for a range of datasets. These gains are achieved with a moderate increase in time complexity.
[ "feature graph", "feature graph architectures", "article", "fga", "deep", "systems", "structured initialisation", "training", "improved generalisation performance", "standard shallow architecture" ]
https://openreview.net/pdf?id=UYmwU4C1wZi16
https://openreview.net/forum?id=UYmwU4C1wZi16
stQhRcrjlysjD
review
1,391,835,720,000
UYmwU4C1wZi16
[ "everyone" ]
[ "Richard Davis" ]
ICLR.cc/2014/conference
2014
review: A few comments in response to the above points: 1. We think the novel aspects are a) the use of a deep objective function, in which each node is separately optimized b) the additional objective that the global training error must improve to retain node modifications, c) the initialization of the deep SVM to the coefficients of a shallow SVM which guarantees the starting point is a good one, and d) for deep learning in regression, feature learning at a node level can be effectively done by training the node to the target. We are not claiming that the novelty lies in stacking machine learning modules, which is a commonplace technique. Rather, the main idea is that the deep objective function can produce significantly better results even with a deep SVM using linear kernels (which as you correctly point out is just a linear function), which we feel is an interesting result. 2. Learning a non-linear function such as (sum x_i)^2 using linear building blocks is non-trivial, and the method significantly outperformed the other competing methods, some of which were non-linear. We tested other functions with more terms and found similar results. 3. We plan to test the method on a much wider range of datasets but the ones we tested are regression datasets from a well-known repository, http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. 4. We used datasets of dimension ~25 to show that the technique can improve over a standard SVM even for datasets of relatively small dimension. 5. The method did show some benefits on a few classification tests but we decided to focus on regression first. 6. Finding correlated groups of features can be done quite easily using pairwise correlation tests. It's not necessary to find the optimal permutation; significant gains over the shallow SVM can be achieved even with a fairly good permutation. 7. overline denotes a mean over the node outputs. Each SVM is trained using the output of the previous layer, scaling the target so that it has the correct range using the output mean from the current node. 8. Adding normalized errors would help with the presentation and can be done easily. 9. We achieved significant gains in out-of-sample error performance using the technique. If you have time we would be very grateful to hear your thoughts on these areas.
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
uR-xqIMBxdyAb
comment
1,392,668,040,000
0Xn3MvOjNPuwE
[ "everyone" ]
[ "Min Lin" ]
ICLR.cc/2014/conference
2014
reply: We fully agree that NIN should be tested on larger images such as those in ImageNet. We've got reasonable preliminary results on ImageNet, but since the performance of maxout on ImageNet is unknown, we didn't include it in the paper. As is mentioned in our reply to Anonymous 5205, we think how dropout should be applied to a model is not yet fully understood; it is another story in itself.
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
mma8mUbFZTmHO
review
1,392,859,320,000
ylE6yojDR5yqX
[ "everyone" ]
[ "anonymous reviewer bae9" ]
ICLR.cc/2014/conference
2014
title: review of Network In Network review: Authors propose the following modification to standard architecture: Replace: convolution -> relu with convolution -> relu -> convolution (1x1 filter) -> relu Additionally, instead of using fully connected layers, the depth of the last conv layer is the same as the number of classes, which they average over x,y positions. This generates a vector of per-class scores. There are a number of internal inconsistencies and omitted details which make reproducing the results impossible. Those issues must be fixed for the paper to be considered for acceptance. They do bring up one intriguing idea which gives a novel approach to localization. - The authors would be better off using standard terminology, like I did above; that makes reading the paper easier. - Some space is unnecessarily taken up by issues that are irrelevant/speculative, like discussing that this architecture allows for feature filters that are 'universal function approximators.' Do we have any evidence this is actually needed for performance? - In section 3.2 they say that the last averaged layer is fed into a softmax, but this contradicts Figure 4, where it seems that the last-layer features actually correspond to classes and no softmax is needed. I assumed the latter was the intention. - The following sentence is unclear, consider expanding or removing: 'A vectorized view of the global average pooling is that the output of the last mlpconv layer is forced into orthogonal subspaces for different categories of inputs' - The most serious shortcoming of this paper is the lack of a detailed explanation of the architecture. All I have to go on is the picture in Figure 2, which looks like 3 spatial pooling layers and 6 convolutional layers. The authors need to provide the following information for each layer -- filter size/pooling size/stride/number of features. Ideally it would be in a succinct format like Table 2 of the 'OverFeat' paper (1312.6229). We have implemented the NIN idea on our own network that we used for SVHN and got worse results. Since the detailed architecture spec is missing, I can't tell if the problem is with the idea or with the particulars of the network we used. One intriguing/promising idea they bring up is the use of averaging instead of fully connected layers. I expected this to allow one to localize the object by looking at the outputs of the last conv layer before averaging, which indeed seems to be the case from Figure 4.
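A small numpy sketch of the classification head described in this review: per-class feature maps are averaged over spatial positions and the resulting score vector is softmax-normalised. The shapes (10 classes, 8x8 maps) are illustrative, not the paper's configuration.

```python
import numpy as np

def global_average_pooling(feature_maps):
    # feature_maps: (n_classes, H, W) -> one averaged score per class.
    return feature_maps.mean(axis=(1, 2))

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

scores = global_average_pooling(np.random.rand(10, 8, 8))
probs = softmax(scores)  # sums to 1 over the 10 classes
```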
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
qq53tjQ7E2l0e
comment
1,397,728,440,000
jsD0sli7uUsvW
[ "everyone" ]
[ "Min Lin" ]
ICLR.cc/2014/conference
2014
reply: I'm sorry that I missed this comment, but fortunately I updated on March 4th. Tomorrow I'll leave Banff; I really had a good time here enjoying the talks.
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
C2LaC6U71vc7A
comment
1,392,668,160,000
-9Bh-f15XcPtR
[ "everyone" ]
[ "Min Lin" ]
ICLR.cc/2014/conference
2014
reply: 1. Thanks for the information, we'll cite those papers in the coming version. 2. The hyper-parameters of the models are in our supplementary material and will soon be online. 3. NIN has a smaller number of parameters than CNN. However, it has lots of nodes and thus lots of computation. But since the nodes in NIN are fully parallel, this is not a problem if we have lots of computing nodes, just as the human brain does.
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
of5C4EXSDnUSQ
review
1,391,225,760,000
ylE6yojDR5yqX
[ "everyone" ]
[ "anonymous reviewer 5205" ]
ICLR.cc/2014/conference
2014
title: review of Network In Network review: Summary of contributions: Proposes a new activation function for backprop nets. Advocates using global mean pooling instead of densely connected layers at the output of convolutional nets. Novelty: moderate Quality: moderate Pros: -Very impressive results on CIFAR-10 and CIFAR-100 -Acceptable results on SVHN and MNIST -Experiments distinguish between performance improvements due to NIN structure and performance improvements due to global average pooling Cons: -Explanation of why NIN works well doesn’t make a lot of sense. I suspect that NIN’s performance has more to do with the way you apply dropout to the model, rather than the explanations you give in the paper. I elaborate more below in the detailed comments. Did you ever try NIN without dropout? Maxout without dropout generally does not work all that well, except in cases where each maxout unit has few filters, or the dataset size is very large. I suspect your NIN units don’t work well without dropout either, unless the micro-net is very small or you have a lot of data. I find it very weird that you don’t explore how well NIN works without dropout, and that your explanation of NIN’s performance doesn’t involve dropout at all. This paper has strong results but I think a lot of the presentation is misleading. It should be published after being edited to take out some of the less certain stuff from the explanations. It could be a really great paper if you had a better story for why NIN works well, including experiments to back up this story. I suspect the story you have now is wrong though, and I suspect the correct story involves the interaction between NIN and dropout. I’ve heard that Geoff Hinton proposed using some kind of unit similar to this during a talk at the CIFAR summer school this year. I’ll ask one of the summer school students to comment on this paper. I don’t think this subtracts from your originality but it might be worth acknowledging his talk, depending on what the summer school student says. Detailed comments: Abstract: I don’t understand what it means “to enhance the model discriminability for local receptive fields.” Introduction Paragraph 1: I don’t think we can confidently say that convolutional net features are generally related to binary discrimination of whether a specific feature is present, or that they are related to probabilities. For example, some of them might be related to measurements (“how red is this patch?” rather than “what is the probability that this patch is red?”). In general, our knowledge of what features are doing is fairly primitive, informal, and ad hoc. Note that the other ICLR submission “Intriguing Properties of Neural Networks” has some strong arguments against the idea of looking at the meaning of individual features in isolation, or interpreting them as probabilistic detectors. Basically I think you could describe conv nets in the intro without committing to these less well-established ideas about how conv nets work. Paragraph 2: I understand that many interesting features can’t be detected by a GLM. But why does the first layer of the global architecture need to be a nonlinear feature detector? Your NIN architecture still is built out of GLM primitives. It seems like it’s a bit arbitrary which things you say can be linear versus non-linear, i.e., why does it matter that you group all of the functionality of the micro-networks and say that together those are non-linear?
Couldn’t we just group the first two layers of a standard deep network and say they form a non-linear layer? Can’t we derive a NIN layer just by restricting the connectivity of multiple layers of a regular network in the right way? Paragraph 3: Why call it an mlpconv layer? Why not call it a NIN layer for consistency with the title of the paper? Last paragraph: why average pooling? Doesn’t it get hard for this to have a high confidence output if the spatial extent of the layer gets large? Section 2: Convolutional Neural Networks eqn 1: use \text{max} so that the word “max” doesn’t appear in italics. Italics are for variable names. Rest of the section: I don’t really buy your argument that people use overcompleteness to avoid the limitations of linear feature detectors. I’d say instead they use multiple layers of features. When you use two layers of any kind of MLP, the second layer can include / exclude any kind of set, regardless of whether the MLP is using sigmoid or maxout units, so I’m not sure why it matters that the first layer can only include / exclude linear half-spaces for sigmoid units and can only exclude convex sets for maxout units. Regarding maxout: I think the argument here could use a little bit more detail / precision. I think what you’re saying is that if you divide input space into an included set and an excluded set by comparing the value of a single unit against some threshold t, then traditional GLM feature detectors can only divide the input into two half-spaces with a linear boundary, while maxout can divide the input space into a convex set and its complement. Your presentation is a little weird though because it makes it sound like maxout units are active (have value > threshold) within a convex region, when in fact the opposite is true. Maxout units are active *outside* a convex region. It also doesn’t make a lot of sense to refer to “separating hyperplanes” anymore when you’re talking about this kind of convex region discrimination. Section 3.1 Par 1: I’d argue that an RBF network is just an MLP with a specific kind of unit. Equation 2: again, “max” should not be in italics. Section 4.1 Let me be sure I understand how you’re applying dropout. You drop the output of each micro-MLP, but you don’t drop the hidden units within the micro-MLP, right? I bet this is what leads to your performance improvement: you’ve made the unit of dropping have higher capacity. The way you group things to be dropped for the dropout algorithm actually has a functional consequence. The way you group things when looking for linear versus non-linear feature detectors is relatively arbitrary. So I don’t really buy your story in sections 1-3 about why NIN performs better, but I bet the way you use dropout could explain why it works so well. Section 4.2 These results are very impressive! While reading this section I wanted to know how much of the improvements were due to global average pooling versus NIN. I see you’ve done those experiments later in section 4.6. I’d suggest bringing Table 5 into this section so all the CIFAR-10 experiments are together and readers won’t think of this objection without knowing you’ve addressed it. Section 4.3 Convolutional maxout is actually not the previous state of the art for this dataset.
The previous state of the art is 36.85% error, in this paper: http://www.cs.toronto.edu/~nitish/treebasedpriors.pdf Speaking of which, you probably want to ask to have your results added to this page, to make sure you get cited: http://rodrigob.github.io/are_we_there_yet/build/ Section 4.4 http://arxiv.org/pdf/1312.6082.pdf gets an error rate of only 2.16% with convolutional maxout + convolutional rectifiers + dropout. Also, when averaging the output of many nets, the DropConnect paper gets down to 1.94% (even when not using dropout / DropConnect). Your results are still impressive but I think it's worth including these results in the table for the most accurate context. Section 4.5 I think the table entries should be sorted by accuracy, even if that means your method won't be at the bottom. Section 4.6 It's good that you've shown that the majority of the performance improvement comes from NIN rather than global average pooling. It's also interesting that you've shown that moving from a densely connected layer to GAP regularizes the net more than adding dropout to the densely connected layer does. Section 4.7 What is the difference between the left panel and the right panel? Are these just examples of different images, or is there a difference in the experimental setup?
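A tiny sketch of the geometric point made in the 'Regarding maxout' paragraph above: a maxout unit is the max of several linear responses, so the set where it stays at or below a threshold is an intersection of half-spaces (convex), and the unit is active outside that convex region. The code is illustrative only; names and shapes are assumptions.

```python
import numpy as np

def maxout_unit(x, W, b):
    # W: (k, d) linear filters, b: (k,) biases; the unit returns the max response.
    return np.max(W @ x + b)

def is_active(x, W, b, t=0.0):
    # {x : maxout(x) <= t} is an intersection of k half-spaces, hence convex;
    # the 'active' set (> t) is the complement of that convex region.
    return maxout_unit(x, W, b) > t
```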
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
0Xn3MvOjNPuwE
review
1,391,844,540,000
ylE6yojDR5yqX
[ "everyone" ]
[ "anonymous reviewer 5dc9" ]
ICLR.cc/2014/conference
2014
title: review of Network In Network review: > - A brief summary of the paper's contributions, in the context of prior work. Convolutional neural networks have been an essential part of the recent breakthroughs deep learning has made on pattern recognition problems such as object detection and speech recognition. Typically, such networks consist of convolutional layers (where copies of the same neuron look at different patches of the same image), pooling layers, normal fully-connected layers, and finally a softmax layer. This paper modifies the architecture in two ways. Firstly, the authors explore an extremely natural generalization of convolutional layers by changing the unit of convolution: instead of running a neuron in lots of locations, they run a 'micro network.' Secondly, instead of having fully-connected layers, they have features generated by the final convolutional layer correspond to categories, and perform global average pooling before feeding the features into a softmax layer. Dropout is used between mlpconv layers. The paper reports new state-of-the-art results with this modified architecture on a variety of benchmark datasets: CIFAR-10, CIFAR-100, and SVHN. They also achieve near state-of-the-art performance on MNIST. > - An assessment of novelty and quality. The reviewer is not an expert but believes this to be the first use of the 'Network In Network' architecture in the literature. The most similar thing the reviewer is aware of is work designing more flexible neurons and using them in convolutional layers (e.g. maxout by Goodfellow et al., cited in this paper). The difference between a very flexible neuron and a small network with only one output may become a matter of interpretation at some point. The paper very clearly outlines the new architecture, the experiments performed, and the results. > - A list of pros and cons (reasons to accept/reject). Pros: * The work is performed in an important and active area. * The paper explores a very natural generalization of convolutional layers. It's really nice to have this so thoroughly explored. * The authors perform experiments to understand how global average pooling affects networks independently of mlpconv layers. * The paper reports new state-of-the-art results on several standard datasets. * The paper is clearly written. Cons: * All the datasets the model is tested on are classification of rather small images (32x32 and 28x28). One could imagine a few stories where the mlpconv layers would have a comparative advantage on small images (e.g. the small size makes having lots of convolutional layers tricky, so it's really helpful to have each layer be more powerful individually). If this were the case, mlpconv would still be useful and worth publishing, but it would be a bit less exciting. That said, it clearly wouldn't be reasonable to demand the architecture be tested on ImageNet -- the reviewer is just very curious. * It would also be nice to know what happens if you apply dropout to the entire model instead of just between mlpconv layers. (Again, the reviewer is just curious.)
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
uV5Nz0Zn3dzni
review
1,392,026,460,000
ylE6yojDR5yqX
[ "everyone" ]
[ "Dong-Hyun Lee" ]
ICLR.cc/2014/conference
2014
review: I suspect that the mlpconv layers can be easily implemented by successive 1x1 conv layers. In a 1x1 conv layer, the lower feature maps and the upper feature maps at each location are fully-connected. For example, 5x5 conv - 1x1 conv - 1x1 conv is equivalent to a 5x5 mlpconv layer with 3 local layers. Of course, this work is still interesting and valuable even if my thinking is correct. But in that case, it can be easily implemented with ordinary CNN packages.
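A numpy sketch of the equivalence this comment points out: a 1x1 convolution applies the same fully connected layer at every spatial location, so a KxK convolution followed by 1x1 convolutions reproduces an mlpconv block. The shapes and the ReLU nonlinearity are assumptions for illustration, not details from the paper.

```python
import numpy as np

def conv_1x1(x, W, b):
    # x: (C_in, H, W) feature maps; W: (C_out, C_in); b: (C_out,)
    # Every spatial location is passed through the same fully connected layer.
    C_in, H, Wd = x.shape
    out = W @ x.reshape(C_in, H * Wd) + b[:, None]
    return np.maximum(out, 0.0).reshape(-1, H, Wd)  # ReLU assumed
```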
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
-9Bh-f15XcPtR
review
1,392,601,560,000
ylE6yojDR5yqX
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2014/conference
2014
review: This is an interesting paper. I have a few suggestions and comments about it: (i) We used the same architecture in a paper published at ICLR 2013 [1] for a specific problem and we called it SMLP. Two differences of their approach from our paper are that the NIN authors stack several layers of locally connected MLPs (mlpconv) with tied weights, whereas we used only one mlpconv layer, and that we didn't use global average pooling. However, sliding a neural network over an image to do detection/classification is quite old [2]. I think the authors should cite those papers. (ii) Moreover, I think the authors should provide more details about their experiments and the hyperparameters that they have used (such as the size of the local receptive fields and the size of the strides). (iii) A speed comparison between regular convolutional neural networks and NIN would also be interesting. [1] Gülçehre, Çağlar, and Yoshua Bengio. 'Knowledge matters: Importance of prior information for optimization.' arXiv preprint arXiv:1301.4083 (2013). [2] Rowley, Henry A., Shumeet Baluja, and Takeo Kanade. Human face detection in visual scenes. Pittsburgh, PA: School of Computer Science, Carnegie Mellon University, 1995.
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
ttdA6p-ZAy2v8
comment
1,392,667,980,000
of5C4EXSDnUSQ
[ "everyone" ]
[ "Min Lin" ]
ICLR.cc/2014/conference
2014
reply: Thanks for your detailed comments. The typos will be corrected in the coming version and we address the other comments below: Question: Did you ever try NIN without dropout? Yes, but only on CIFAR-10, as CIFAR-10 is the first dataset I used to test the ideas. The reason we do not discuss dropout is that we think dropout is a generally used regularization method. Any model with sufficient parameters (such as the maxout you mentioned) may overfit to the training data, which results in poor testing performance. The same is true with NIN. For CIFAR-10 without dropout, the error rate is 14.51%, which is almost 4% worse than NIN with dropout. It already surpasses many previous state-of-the-art results obtained with regularizers (except for maxout). We will add this result in the coming version. For a fair comparison, we should compare this performance with maxout without dropout, but unfortunately, maxout performance without dropout is not available in the maxout paper, which is also the main reason we did not report NIN without dropout. It is suggested in the comment that the performance may involve the interaction between NIN and dropout, and that the grouping of dropout might be the reason. We found that applying dropout only within the micro-net does not do as well as putting dropout in between mlpconv layers (moving the two dropout layers into the micro-net results in a performance of 14.10%). Our interpretation is that the regularization effect of dropout is on the weights, as Wager et al. showed in 'Dropout Training as Adaptive Regularization'. In NIN, which has no fully connected layers, most parameters reside in the convolution layers, which is why dropout is applied to the inputs of those layers. In comparison, the number of parameters within the micro-net is negligible. Therefore, rather than saying the way we group dropout with NIN is the reason for the good performance, we would say that dropout acts as a general regularizer. How dropout should be applied to each layer differs among models and is not yet well understood; how dropout should be applied to a model is another important story in itself. From the above, we argue that NIN itself is a good model even without dropout; how to apply dropout to a network is a general question, not one specific to NIN. Abstract: The CNN filter acts as a GLM for local image patches, and it is a discriminative binary classification model. Adding nonlinearity enhances the potential discriminative power of the model for local image patches within the receptive field of the convolution neuron. We'll refine the language in the coming version. Introduction: Paragraph 1: We'll revise the paper and use less certain language when describing the output value as the probability of a specific feature. What we mean is that in the ideal case, it can be the probabilities of latent concepts. I fully agree that the values are measurements rather than probabilities, but again, if ideally the value is highly correlated with the probability, it would be a very good model. I think it is a goal more than a fact. NIN can achieve the goal better than a GLM. Paragraph 2: Please see replies to Section 2. Paragraph 3: NIN is a more general structure, as mentioned in the paper. Other nonlinear networks can also be employed to incorporate different priors on the data distribution. For example, an RBF assumes a Gaussian mixture on the data. Mlpconv is one instantiation of NIN. Last Paragraph: There is a softmax normalization anyway. The high or low confidence is just relative.
Section 2: Note that unlike maxout units, which can be applied to either convolution or non-convolution structures, NIN has only a convolution version; the non-convolution version of NIN degrades to an MLP. In the non-convolution case, it is equivalent to taking any two layers from the MLP and saying they form a non-linear layer. Thus for an MLP, your argument is correct: it does not matter whether the first layer of the network is a linear model or not; multiple layers of features overcome the limitation of the linear detector. However, this is not true for convolution structures. Stacking convolution layers is different from stacking fully connected layers. Higher convolution layers cover larger spatial regions of the input than lower layers. Stacked convolution layers do not form multiple layers of features over the same local patch as in an MLP. To avoid the limitation of the GLM, NIN forms multiple layers of features on a local patch. In my opinion, CNN has two functionalities: 1. Partition 2. Abstraction Partition: In the object recognition case, lower layers are smaller parts and higher layers learn the deformation relationship between the parts. Abstraction: In a traditional CNN, the abstraction of a local patch is done using a GLM. Better abstraction of a local patch in the current convolution layer can reduce the combinatorial explosion in the next layer. Regarding maxout: I think it does not matter whether the positive side or the negative side is defined as the active side; they are just symmetric. For any maxout network, we can construct a minout network by reversing the sign of the weights every other layer. As minout is equivalent to maxout, you can consider a minout network and then the positive side is the active side. Section 3.1: We'll refine the statements. The information we want to convey is that we can incorporate different data priors by choosing the micro-net. For example, an RBF models the data in a Gaussian mixture style, while the GLM in an MLP assumes a linear subspace structure. Section 4.1: Please see our response to 'Did you ever try NIN without dropout?' Section 4.2 to 4.5: We will revise these in the coming version. Section 4.7: They are just examples of different images.
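A sketch of the dropout placement described in the reply above, under the interpretation that inverted dropout is applied to the inputs of each mlpconv layer (i.e. between layers) rather than to the hidden units inside the micro-network. Shapes and the dropout rate are illustrative; this is not the released code.

```python
import numpy as np

def dropout(x, p_drop, rng):
    # Inverted dropout: zero a fraction p_drop of the inputs and rescale the rest,
    # so no change is needed at test time.
    mask = (rng.random(x.shape) >= p_drop).astype(x.dtype)
    return x * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.random.rand(192, 16, 16).astype(np.float32)  # output of one mlpconv layer (assumed shape)
h_dropped = dropout(h, p_drop=0.5, rng=rng)          # fed to the next mlpconv layer
```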
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
BawVzlgW2yDbX
comment
1,393,585,440,000
mma8mUbFZTmHO
[ "everyone" ]
[ "Min Lin" ]
ICLR.cc/2014/conference
2014
reply: - Authors would be better off using standard terminology, like I did above, that makes reading the paper easier. It is true that 1x1 convolution is an easier-to-understand explanation of the architecture of NIN. However, regarding the motivation of the architecture and the mechanism of why it works, it is better to explain the architecture as a micro MLP convolving over the underlying data. Another explanation of the structure is cross-channel parametric pooling, as each output feature map is a weighted summation of the channels of the input feature maps. We will add the 1x1 convolution explanation in the coming version for easier understanding of the architecture. - Some space is unnecessarily taken up by issues that are irrelevant/speculative, like discussing that this architecture allows for feature filters that are 'universal function approximators.' Do we have any evidence this is actually needed for performance? In the introduction of the coming version, it is better explained why a universal function approximator is preferred to a GLM. The discussion is necessary because it is the motivation for proposing this architecture, and it is our explanation of why NIN achieves good performance. We also refer to maxout as a convex function approximator in our paper, and we think maxout and NIN are both evidence that a potent function approximator is better than a GLM. - In section 3.2 they say that the last averaged layer is fed into softmax, but this contradicts Figure 4 where it seems that the last-layer features actually correspond to classes, and no softmax is needed. I assumed the latter was the intention. 1. Each node in the last layer corresponds to one of the classes. 2. The values of these nodes are softmax normalized so that they sum to one. I think there is no incompatibility between the above two. - Following sentence is unclear, consider expanding or removing: 'A vectorized view of the global average pooling is that the output of the last mlpconv layer is forced into orthogonal subspaces for different categories of inputs' Global average pooling is equivalent to vectorizing the feature maps and multiplying by a predefined matrix whose rows lie within orthogonal linear subspaces. We'll remove this sentence in the coming version. - Most serious shortcoming of this paper is lack of detailed explanation of architecture. All I have to go on is the picture in Figure 2, which looks like 3 spatial pooling layers and 6 convolutional layers. Authors need to provide the following information for each layer -- filter size/pooling size/stride/number of features. Ideally it would be in a succinct format like Table 2 of the 'OverFeat' paper (1312.6229). We have implemented the NIN idea on our own network used for SVHN and got worse results. Since a detailed architecture spec is missing, I can't tell if it's a problem of the idea or of the particulars of the network we used. The details of NIN used for the benchmark datasets will be in the supplementary material added in the coming version. The code (derived from cuda-convnet), the definition files and the parameter settings are published and will be completed on my GitHub (https://github.com/mavenlin/cuda-convnet)
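As a side illustration of the global average pooling point above, here is a small NumPy sketch (our own illustration, not the released cuda-convnet code) showing that averaging each class feature map and applying softmax gives the same result as vectorizing the maps and multiplying by a fixed matrix whose rows have disjoint supports (hence lie in orthogonal subspaces). The shapes are illustrative assumptions.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def global_average_pooling(class_maps):
    """class_maps: (num_classes, H, W), the output of the last mlpconv layer."""
    return class_maps.mean(axis=(1, 2))          # one confidence per class

def gap_as_matrix(num_classes, h, w):
    # Rows select (and average) disjoint blocks of the vectorized maps.
    P = np.zeros((num_classes, num_classes * h * w))
    for c in range(num_classes):
        P[c, c * h * w:(c + 1) * h * w] = 1.0 / (h * w)
    return P

maps = np.random.rand(10, 8, 8)                  # 10 classes, 8x8 maps (illustrative)
direct = softmax(global_average_pooling(maps))
via_matrix = softmax(gap_as_matrix(10, 8, 8) @ maps.reshape(-1))
assert np.allclose(direct, via_matrix)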
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
eU5B1c8wfT1-H
comment
1,393,766,880,000
eeB4SMv0rsy1X
[ "everyone" ]
[ "Min Lin" ]
ICLR.cc/2014/conference
2014
reply: Hi Jost, I initialized the hyperparameters according to the parameters released with the maxout paper. For CIFAR-10, there are two things I tuned: one is the weight decay, and the other is the kernel size of the last layer (3x3 instead of 5x5). Tuning the weight decay gives me most of the performance. The other settings, such as the 5x5 kernel size instead of 8x8, I just set once and did not tune for performance. I think the tunable range for kernel size is quite small; it depends on the size of the object within the image, and I've no idea whether it would affect the performance that much. I'm very interested in this and look forward to seeing the effect of the hyperparameters.
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
d_-SeKVCSQ5i0
comment
1,393,888,440,000
ttdA6p-ZAy2v8
[ "everyone" ]
[ "anonymous reviewer 5205" ]
ICLR.cc/2014/conference
2014
reply: I think it would be perfectly valid to report your result on CIFAR-10 without dropout. It would definitely be nice to have a fair comparison between maxout and NIN without dropout but the NIN number alone is still interesting.
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
eeB4SMv0rsy1X
review
1,393,555,560,000
ylE6yojDR5yqX
[ "everyone" ]
[ "Jost Tobias Springenberg" ]
ICLR.cc/2014/conference
2014
review: Interesting paper. I think there are a lot of possibilities hidden in the ideas brought up here, as well as in the general idea brought up by the maxout work. I have a short comment to make regarding the performance that you achieve with the 'Network in Network' model: Although I do not think that this should influence the decision on this paper (nor do I think it takes anything away from the Network in Network idea), I want to make you aware that I believe a large part of your performance increase over maxout stems from your choice of hyperparameters. I am currently running a hyperparameter search for maxout for a paper submitted to ICLR (the 'Improving Deep Neural Networks with Probabilistic Maxout Units' paper). The preliminary best result that I obtained for a maxout network on CIFAR-10 without data augmentation (using the same number of units per layer as in the original maxout paper) is 10.92% error. If I understand it correctly, this is approximately the same as the NiN model with a fully connected layer. The hyperparameter settings for this model are very similar to the settings I assume were used in your paper (based on the parameter file you posted here https://github.com/mavenlin/cuda-convnet/blob/master/NIN/cifar-10_def). The most crucial ingredient seems to be the pooling and filter/kernel size. I will post more details on the hyperparameter settings in the discussion on the 'Probabilistic Maxout' paper.
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
RGuEVrAvQgNLZ
comment
1,392,668,100,000
uV5Nz0Zn3dzni
[ "everyone" ]
[ "Min Lin" ]
ICLR.cc/2014/conference
2014
reply: Yes, in the node-sharing case (which is used in the experiments of this paper), it is equivalent to convolution with kernel size 1. By the way, the OverFeat paper submitted to ICLR 2014 uses a 1x1 convolution kernel in the last layer. It is true that you can use the convolution function in ordinary CNN packages, but the most efficient way is to use the matrix multiplication functions in cuBLAS.
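A small NumPy sketch of the equivalence stated above (illustrative shapes; the actual implementation would call cuBLAS on the GPU): a 1x1 convolution over a (C, H, W) feature map is just a matrix multiplication against the maps reshaped to (C, H*W).

import numpy as np

def conv1x1_loop(x, W):
    """Naive 1x1 convolution: x is (C_in, H, W), W is (C_out, C_in)."""
    c_in, h, w = x.shape
    out = np.zeros((W.shape[0], h, w))
    for i in range(h):
        for j in range(w):
            out[:, i, j] = W @ x[:, i, j]        # per-pixel linear map across channels
    return out

def conv1x1_matmul(x, W):
    """Same operation as a single dense matrix multiplication."""
    c_in, h, w = x.shape
    return (W @ x.reshape(c_in, h * w)).reshape(W.shape[0], h, w)

x = np.random.randn(16, 6, 6)                    # 16 input channels (illustrative)
W = np.random.randn(32, 16)                      # 32 output channels
assert np.allclose(conv1x1_loop(x, W), conv1x1_matmul(x, W))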
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance the model discriminability for local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variance of the local receptive fields. We instantiate the micro neural network with a nonlinear multiple layer structure which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner of CNN and then fed into the next layer. The deep NIN is thus implemented as stacking of multiple sliding micro neural networks. With the enhanced local modeling via micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performances with NIN on CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
jsD0sli7uUsvW
comment
1,393,890,420,000
BawVzlgW2yDbX
[ "everyone" ]
[ "anonymous reviewer 5205" ]
ICLR.cc/2014/conference
2014
reply: Do you think you could post the revised version of your paper soon? We have until Mar 7 to discuss it. If you could post the revised version before then I'm likely to upgrade my rating of the paper. I don't feel comfortable upgrading the rating just based on discussions on the forum though. Feel free to post the updated version on a separate website so we don't need to wait for it to be approved on ArXiv.
nF5CFb0ZQBFDr
Sequentially Generated Instance-Dependent Image Representations for Classification
[ "Matthieu Cord", "patrick gallinari", "Nicolas Thome", "Ludovic Denoyer", "Gabriel Dulac-Arnold" ]
In this paper, we investigate a new framework for image classification that adaptively generates spatial representations. Our strategy is based on a sequential process that learns to explore the different regions of any image in order to infer its category. In particular, the choice of regions is specific to each image, directed by the actual content of previously selected regions.The capacity of the system to handle incomplete image information as well as its adaptive region selection allow the system to perform well in budgeted classification tasks by exploiting a dynamicly generated representation of each image. We demonstrate the system's abilities in a series of image-based exploration and classification tasks that highlight its learned exploration and inference abilities.
[ "image", "system", "image representations", "classification", "classification tasks", "new framework", "image classification", "spatial representations", "strategy", "sequential process" ]
https://openreview.net/pdf?id=nF5CFb0ZQBFDr
https://openreview.net/forum?id=nF5CFb0ZQBFDr
ZEmlDdfwa3ELS
comment
1,392,155,820,000
-nuS-uE1S_-5F
[ "everyone" ]
[ "Gabriel Dulac-Arnold" ]
ICLR.cc/2014/conference
2014
reply: First of all, thank you for your time spent reviewing this article and for your extensive comments. I've added a couple of clarifications regarding the experiments on the PPMI dataset; you were indeed correct in your assumptions, and I hope the experiments are now easier to understand. Let me respond to your 4 points: 1) In the case of PPMI, classifying an image as one where humans are playing an instrument vs. not playing can be done using only regions concentrating on the instrument's interface with the human (mouth, hand, etc.). Any other regions in the image simply induce noise. For example, if we choose the static policy of looking only at the 4 central regions with the SVM classifier, we actually increase classification accuracy compared to looking at all regions. Additionally, the difference between the two datasets is that in the 15 scenes dataset the task of detecting the general environment in which the image was taken means that classification information is much less concentrated in specific regions of the image and is instead spread out over the entire image. This may also explain why there is not such a strong advantage in using our method vs. a random region selection policy on the 15 scenes dataset. I've added a brief mention of these aspects to the article. 2) Indeed, this behaviour would be the most intuitive; however, one possible interpretation of these results is that the regions preferred by B=8 are very informative, but only if /all/ of them can be acquired, and if only a subset can be acquired then it is better to select region (1,3). Of course this is pure conjecture; it is unfortunately very difficult to understand /why/ the algorithm performs the trade-offs that it does, but this is indeed an interesting remark and warrants further investigation. 3) If I understand correctly, once we had trained the chain a first time, we could run the training examples down the chain, train the final classifier with the new distribution of provided regions, and then re-train the chain using this new final classifier. This is something we've attempted without a significant increase in performance, and it significantly increases learning complexity. 4) Ideally, if there is some way to estimate the 'best' of the 'bad' actions during training, then it might be a good idea to use it as a positive training example, ideally with an associated weight to indicate that it is not as good as an actually optimal action. In the case of a binary classification task there is no obvious way to impose an order among all the 'bad' actions, since they all ultimately result in incorrect classification; however, if there is some structure on the label set with a similarity measure, then this would be important information to leverage, allowing us to penalize the classifier with a negative reward related to how 'far off' the classifier's prediction was in label space.
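A rough Python sketch of the sequential acquisition loop described in this discussion, as we read it (the function names, state encoding and placeholder policies are hypothetical illustrations, not the authors' code): a budget of B regions is acquired one at a time, each choice conditioned on the features of the regions already seen, and the final classifier runs on the aggregated state.

import numpy as np

def classify_with_budget(image_regions, policies, classifier, budget):
    """Sequentially select `budget` regions, then classify.

    image_regions: dict mapping region index -> feature vector
    policies: list of length `budget`; policies[t](state, available) returns
              the index of the next region to acquire
    classifier: function mapping the aggregated state to a label
    """
    num_regions = len(image_regions)
    d = next(iter(image_regions.values())).shape[0]
    state = np.zeros(d * num_regions)            # unseen regions stay zeroed out
    acquired = []
    for t in range(budget):
        available = [r for r in range(num_regions) if r not in acquired]
        r = policies[t](state, available)        # choice depends on what was seen so far
        acquired.append(r)
        state[r * d:(r + 1) * d] = image_regions[r]   # reveal this region's features
    return classifier(state)

# Dummy usage: placeholder policies and a trivial classifier, just to show the flow.
rng = np.random.default_rng(0)
regions = {i: rng.standard_normal(5) for i in range(16)}     # 4x4 grid, 5-dim features
policies = [lambda s, avail: avail[0] for _ in range(4)]
label = classify_with_budget(regions, policies, lambda s: int(s.sum() > 0), budget=4)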
nF5CFb0ZQBFDr
Sequentially Generated Instance-Dependent Image Representations for Classification
[ "Matthieu Cord", "patrick gallinari", "Nicolas Thome", "Ludovic Denoyer", "Gabriel Dulac-Arnold" ]
In this paper, we investigate a new framework for image classification that adaptively generates spatial representations. Our strategy is based on a sequential process that learns to explore the different regions of any image in order to infer its category. In particular, the choice of regions is specific to each image, directed by the actual content of previously selected regions.The capacity of the system to handle incomplete image information as well as its adaptive region selection allow the system to perform well in budgeted classification tasks by exploiting a dynamicly generated representation of each image. We demonstrate the system's abilities in a series of image-based exploration and classification tasks that highlight its learned exploration and inference abilities.
[ "image", "system", "image representations", "classification", "classification tasks", "new framework", "image classification", "spatial representations", "strategy", "sequential process" ]
https://openreview.net/pdf?id=nF5CFb0ZQBFDr
https://openreview.net/forum?id=nF5CFb0ZQBFDr
-nuS-uE1S_-5F
review
1,391,488,800,000
nF5CFb0ZQBFDr
[ "everyone" ]
[ "anonymous reviewer 4a3d" ]
ICLR.cc/2014/conference
2014
title: review of Sequentially Generated Instance-Dependent Image Representations for Classification review: This paper describes a method to select the most relevant grid regions from an input image to be used for classification. This is accomplished by training a chain of region selection predictors, each one of which outputs the k'th region to take, given the image features from k-1 already-selected regions. Each selector is trained to choose a region that leads to an eventual correct classification, given already-trained downstream selectors. Since downstream predictions are required for training, the chain is trained last-to-first, and random region selection is used to generate training inputs at each stage. Both the conditional region selection chain and its training method are interesting contributions. The method is evaluated on two tasks, playing vs. holding a musical instrument (PPMI) and outdoor scene classification (15-Scenes). Here, I wish the authors were more detailed in their descriptions of these tasks. In particular, I'm a bit unsure whether the PPMI task is a 12-way classification on musical instruments, or an average of 12 binary classification tasks (playing vs holding), one for each instrument. I think it's the latter -- if so, I'm also unclear on whether a different selection/classification chain was trained and tested for each of the 12 subsets, or if a single classifier was trained over the entire dataset. Still, the proposed method beats a random selection baseline for both tasks (though not by much for 15-Scenes), and for PPMI it also beats a baseline of including all regions. The latter is a particularly nice result, since intuitively region selection should stand to help performance, yet such gains can be hard to find. Evaluating the 12- or 24-way instrument classification task for PPMI would have been good here as well, though, as there is clearly a compatibility between region selection and this data and/or task, and this may help provide insight into why that is. Pros: - Interesting and new method of region selection trained for classification - Shows a nice gain in one task and reasonable results in another - Interesting discussion sheds light on how the method operates (figs 7, 8) Cons: - Tasks and datasets could be better explained Questions: - Why does selecting 8 regions beat using all 16 for PPMI but is only about the same for 15-Scenes? Some discussion on the difference between the two datasets and their fit with region selection would be nice here. - Fig. 8: I might have expected the B=8 (right) histogram to have its highest values mostly where the B=4 (left) histogram does, since one would think the best regions would be required in both cases. However, region 3 (x=1,y=3) seems to be used with more frequency for B=4 than B=8, for example. Why does this occur? - Is it possible to continue retraining the classifier and selectors? Currently the chain is trained once, starting with the classifier and proceeding upstream. Yet by doing this, each stage must be trained on a random sample of input regions, which can include many more configurations than would be seen at test time. Could each stage (particularly the final classifier f) be iteratively retrained given *all* selectors? Would this help by adjusting the training distribution closer to the test distribution and allowing better use of resources, or might it lead to worse generalization by narrowing the training set too much? - Alg. 
4 adds a sample to the training set only if it leads to a correct prediction. But what if no region has this property -- is it better to ignore these cases, or should some be included (perhaps by trying to predict the choice closest to correct)? Surely such cases will arise at test time, and the consequences of ignoring them aren't entirely clear. I suppose there's an argument to be made that the classifier will eventually fail anyway, so it's better to bail on these cases and concentrate the predictor's resources only on those where it stands a chance. But in cases where the classifier is wrong, might this also lead to more arbitrary region selection and more drastic types of mistakes (e.g. mistaking a clarinet for a harp vs a recorder)?
nF5CFb0ZQBFDr
Sequentially Generated Instance-Dependent Image Representations for Classification
[ "Matthieu Cord", "patrick gallinari", "Nicolas Thome", "Ludovic Denoyer", "Gabriel Dulac-Arnold" ]
In this paper, we investigate a new framework for image classification that adaptively generates spatial representations. Our strategy is based on a sequential process that learns to explore the different regions of any image in order to infer its category. In particular, the choice of regions is specific to each image, directed by the actual content of previously selected regions.The capacity of the system to handle incomplete image information as well as its adaptive region selection allow the system to perform well in budgeted classification tasks by exploiting a dynamicly generated representation of each image. We demonstrate the system's abilities in a series of image-based exploration and classification tasks that highlight its learned exploration and inference abilities.
[ "image", "system", "image representations", "classification", "classification tasks", "new framework", "image classification", "spatial representations", "strategy", "sequential process" ]
https://openreview.net/pdf?id=nF5CFb0ZQBFDr
https://openreview.net/forum?id=nF5CFb0ZQBFDr
QPb2hbXfGmPiJ
review
1,391,486,100,000
nF5CFb0ZQBFDr
[ "everyone" ]
[ "Gabriel Dulac-Arnold" ]
ICLR.cc/2014/conference
2014
review: Thank you for your time and effort in reviewing our article. Let me respond to your comments in a couple of points: 1. Classifier complexity: In effect, the training algorithm is more complex than a standard SVM, but its complexity only increases linearly with the fixed budget B and the number of windows. Inference complexity is lower than that of an SVM computed on the entire picture, and is B times more complex than a fixed SVM on a B-sized window. 2. With regard to the work of Larochelle & Hinton on foveal glimpse learning, there is a fundamental difference in what is being learned by the region selection algorithm. L&H learn a gaze direction model that greedily chooses regions that are most likely to increase the final classifier's output given the current state. In our model, intermediate sub-policies learn to select regions that help their subsequent policy the most. The final sub-policy indeed learns to select a Bth region that helps classification, but each of the previous policies learns to select a region that will best disambiguate the image information given the current state, only ultimately helping classification. This detail is important, as a greedy system will not spend time on image regions that are irrelevant to classification but necessary for region selection. Typically, in Experiment 2, L&H's system must be given a starting point, and would (as far as we can tell) not be able to find it on its own.
nF5CFb0ZQBFDr
Sequentially Generated Instance-Dependent Image Representations for Classification
[ "Matthieu Cord", "patrick gallinari", "Nicolas Thome", "Ludovic Denoyer", "Gabriel Dulac-Arnold" ]
In this paper, we investigate a new framework for image classification that adaptively generates spatial representations. Our strategy is based on a sequential process that learns to explore the different regions of any image in order to infer its category. In particular, the choice of regions is specific to each image, directed by the actual content of previously selected regions.The capacity of the system to handle incomplete image information as well as its adaptive region selection allow the system to perform well in budgeted classification tasks by exploiting a dynamicly generated representation of each image. We demonstrate the system's abilities in a series of image-based exploration and classification tasks that highlight its learned exploration and inference abilities.
[ "image", "system", "image representations", "classification", "classification tasks", "new framework", "image classification", "spatial representations", "strategy", "sequential process" ]
https://openreview.net/pdf?id=nF5CFb0ZQBFDr
https://openreview.net/forum?id=nF5CFb0ZQBFDr
QKENHiYtpkQo3
review
1,390,861,200,000
nF5CFb0ZQBFDr
[ "everyone" ]
[ "anonymous reviewer 251e" ]
ICLR.cc/2014/conference
2014
title: review of Sequentially Generated Instance-Dependent Image Representations for Classification review: This paper presents an approach that considers a sequence of local representations of an image, in order to classify it into one of many labels. The approach decomposes an image into multiple non-overlapping sub-windows, and tries to find a sequence of subsets of these sub-windows that can efficiently lead to classify the image. The idea is interesting as it could potentially classify faster by concentrating only on the relevant part of the image; on the other hand, the training complexity is significantly increased (and I suspect for the approach to work we should include many more sub-windows at various scales and potentially with overlaps). The proposed approach is not compared to any other approaches, for instance the work of Larochelle and Hinton, 2010. The results are encouraging but not groundbreaking: it seems one needs to see a significant portion of the image in order to get similar or better performance than the baseline, so it's not clear the proposal works that well. I wonder if the policy used to guide the search space among sub-windows could be analyzed better.
nF5CFb0ZQBFDr
Sequentially Generated Instance-Dependent Image Representations for Classification
[ "Matthieu Cord", "patrick gallinari", "Nicolas Thome", "Ludovic Denoyer", "Gabriel Dulac-Arnold" ]
In this paper, we investigate a new framework for image classification that adaptively generates spatial representations. Our strategy is based on a sequential process that learns to explore the different regions of any image in order to infer its category. In particular, the choice of regions is specific to each image, directed by the actual content of previously selected regions.The capacity of the system to handle incomplete image information as well as its adaptive region selection allow the system to perform well in budgeted classification tasks by exploiting a dynamicly generated representation of each image. We demonstrate the system's abilities in a series of image-based exploration and classification tasks that highlight its learned exploration and inference abilities.
[ "image", "system", "image representations", "classification", "classification tasks", "new framework", "image classification", "spatial representations", "strategy", "sequential process" ]
https://openreview.net/pdf?id=nF5CFb0ZQBFDr
https://openreview.net/forum?id=nF5CFb0ZQBFDr
bbUH2ilEBoLmK
review
1,391,825,400,000
nF5CFb0ZQBFDr
[ "everyone" ]
[ "anonymous reviewer 56b5" ]
ICLR.cc/2014/conference
2014
title: review of Sequentially Generated Instance-Dependent Image Representations for Classification review: This paper tackles the problem of deciding where to look on an image. The proposed solution is to start from a center region and use the information extracted to decide what region to examine next. This is trained through reinforcement learning. The paper is well organized and clear. Experiments on 2 benchmarks (15 scenes and people playing musical instruments) show that selecting a smaller number of subregions of the image does not result in a big loss of accuracy, and can even improve accuracy by eliminating noise from information-poor regions. Figures 5 and 6 have unreadable annotations; this should be fixed and/or the caption under the figure fleshed out to better describe the results. The datasets are limited and this would need to be extended to more realistic datasets, but the problem tackled is important and the proposed solution is a welcome step in this direction.
nF5CFb0ZQBFDr
Sequentially Generated Instance-Dependent Image Representations for Classification
[ "Matthieu Cord", "patrick gallinari", "Nicolas Thome", "Ludovic Denoyer", "Gabriel Dulac-Arnold" ]
In this paper, we investigate a new framework for image classification that adaptively generates spatial representations. Our strategy is based on a sequential process that learns to explore the different regions of any image in order to infer its category. In particular, the choice of regions is specific to each image, directed by the actual content of previously selected regions.The capacity of the system to handle incomplete image information as well as its adaptive region selection allow the system to perform well in budgeted classification tasks by exploiting a dynamicly generated representation of each image. We demonstrate the system's abilities in a series of image-based exploration and classification tasks that highlight its learned exploration and inference abilities.
[ "image", "system", "image representations", "classification", "classification tasks", "new framework", "image classification", "spatial representations", "strategy", "sequential process" ]
https://openreview.net/pdf?id=nF5CFb0ZQBFDr
https://openreview.net/forum?id=nF5CFb0ZQBFDr
-Kr0sEe20qsTx
comment
1,392,155,640,000
bbUH2ilEBoLmK
[ "everyone" ]
[ "Gabriel Dulac-Arnold" ]
ICLR.cc/2014/conference
2014
reply: Thank you for your time spent reviewing this article. I've expanded the descriptions of both Figures 5 & 6, but I'm not sure which part was difficult to read: the legend or the axes? I hope the modifications were helpful.
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the plights of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring in very high false negatives rates or prohibiting computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy from the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
XwIVamM38ga3N
review
1,391,787,780,000
u-IAYCzRsK-vN
[ "everyone" ]
[ "Jonathan Masci" ]
ICLR.cc/2014/conference
2014
review: We updated the paper and the new version will be available on Mon, 10 Feb, 01:00 GMT.
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the plights of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring in very high false negatives rates or prohibiting computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy from the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
ZZ7TU_nWTrjcG
comment
1,392,729,600,000
mYLe3dIVG_lnh
[ "everyone" ]
[ "Jonathan Masci" ]
ICLR.cc/2014/conference
2014
reply: The revised paper is now available.
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the plights of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring in very high false negatives rates or prohibiting computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy from the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
XMixXVP-xZMUW
review
1,391,901,780,000
u-IAYCzRsK-vN
[ "everyone" ]
[ "anonymous reviewer a029" ]
ICLR.cc/2014/conference
2014
title: review of Sparse similarity-preserving hashing review: This paper builds upon Siamese neural networks [Hadsell et al, CVPR06] and (f)ISTA networks [Gregor et al, ICML10] to learn to embed inputs into sparse codes such that code distances reflect semantic similarity. The paper is clear and refers to related work appropriately. It is relevant to the conference. Extensive empirical comparisons over CIFAR-10 and NUS/Flickr are reported. I am mainly concerned by the computational efficiency motivation and the experimental methodology. I am also surprised that no attention to sparsity is given in the experiments given the paper title. The introduction states that your motivation for sparse codes mainly comes from computational efficiency. It seems of marginal importance. The r-radius search in an m-dimensional space is nchoosek(m,r). With k-sparse vectors, the same search now amounts to flipping r/2 of the 1 bits and r/2 of the 0 bits in the query, i.e. nchoosek(k,r/2)+nchoosek(m-k,r/2). Both are comparable combinatorial problems. I feel that your motivation for sparse coding could come from the type of work you allude to at the end of column 1 on page 4 (by the way, could you add some references there?). I have concerns about the evaluation methodology. In particular, it is not clear to me why you compare different methods with a fixed code size and radius. It seems that, in an application setting, one might get some requirement in terms of mean average precision, recall at a given precision, precision at a fixed recall, expected/worst search time, etc. and would validate m and r to fit these performance requirements. Fixing m and r a priori and looking at the specific precision/recall point resulting from this arbitrary choice seems far from optimal for any method. Moreover, I would also like to stress that m and r are also poor predictors of a method's running time, given that different tree balance (number of points in each node and in particular the number of empty nodes) might yield very different search times at the same (m,r). In summary, I see little motivation for picking (m,r) a priori. I am also surprised that no attention to sparsity is given in the experiments given the paper title. How was alpha validated? What is the impact of alpha on validation performance? In particular, how does the validation error surface look wrt alpha, m and r? It might also be interesting to look at the same type of results replacing L1 with L2 regularization of the representations to further justify your work. Also reporting the impact of sparsity on search time would be a must.
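The two counts written down in the review above can be checked numerically; a quick Python sketch with illustrative values of m, k and r (not taken from the paper), computing exactly the two expressions as stated:

from math import comb

# Illustrative numbers only: code length m, number of non-zeros k, search radius r.
m, k, r = 64, 8, 4

dense_ball = comb(m, r)                           # nchoosek(m, r)
sparse_ball = comb(k, r // 2) + comb(m - k, r // 2)   # nchoosek(k, r/2) + nchoosek(m-k, r/2)

print(dense_ball, sparse_ball)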
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the plights of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring in very high false negatives rates or prohibiting computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy from the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
VVtOnEaB7_WSx
comment
1,391,604,240,000
BcgDqCiUDYcXQ
[ "everyone" ]
[ "Jonathan Masci" ]
ICLR.cc/2014/conference
2014
reply: We are grateful to reviewer Anonymous a636 for the constructive comments. We reply below to the points raised by the reviewer; we will post the additional results requested here and eventually update the arXiv paper. 1. We agree (and in fact note it in the paper) that when comparing dense and sparse hashes one has to compare the same number of **degrees of freedom** rather than the same **length**. Thus, comparing sparse and dense hashes of the same length is actually less favorable to the sparse hash (and despite that, we show that it performs better even in this unfavorable comparison). Referring specifically to our results, closely comparable hashes would be a sparse hash of length 128 and a dense hash of length 48 (the exact number of degrees of freedom depends on the sparsity, which varies slightly). Because of space limitations we removed the sparsity levels from our tables. They are as follows (L0 denotes the number of non-zeros, as % of the hash length m): CIFAR10: m 48, M 16, alpha 0.01, lambda 0.1, L0 20.6%; m 48, M 7, alpha 0.001, lambda 0.1, L0 39.1%; m 48, M 7, alpha 0.001, lambda 0.1, L0 43.9%; m 128, M 16, alpha 0.0, lambda 0.1, L0 6.0%. NUS: m 64, M 7, alpha 0.05, lambda 0.3, L0 17.4%; m 64, M 7, alpha 0.05, lambda 1.0, L0 20.1%; m 64, M 16, alpha 0.005, lambda 0.3, L0 21.7%; m 256, M 4, alpha 0.05, lambda 1.0, L0 6.6%; m 256, M 4, alpha 0.05, lambda 1.0, L0 9.4%; m 256, M 6, alpha 0.005, lambda 0.3, L0 9.9%. 2. Retrieval is done as follows: for the full radius (r=m) the search is done exhaustively, with complexity O(N), where N is the database size. For r=0 (exact collisions), the search is done with a LUT: the query code is fed into the lookup table, which contains all entries in the database having the same code; the complexity is O(m), independent of N. For small values of r (partial collisions), the search is done as for r=0 using perturbations of the query: at most r bits of the query are changed, and each perturbed code is fed into the LUT; the final result is the union of all the retrieval results, and the complexity is O(m choose r). 3. Sign in ISTA-net: our initial formulation converted the output to a ternary representation, doubling the number of bits and explicitly coding for -1, 0 and +1. However, we found that the difference between this more proper encoding and the plain output of the net was negligible, and we therefore opted for the simpler solution. 4. Eq. 4: it is a typo; there is a max(0, .). Thanks for pointing it out. 5. The PR curves were generated using the ranking induced by the Hamming distance between the query and the database samples. In the case of r<m we considered only the results falling into the Hamming ball of radius r. 6. Table 4, last line: the implementations of the NN and NN-sparse hashes differ. For sparse hash, we use shrinkage and tanh, whereas NN-hash uses only tanh. Additionally, the two losses differ (for sparse hash, we measure the L1 distance, which we found already induces some sparsity). These are, in our opinion, the two main differences that explain the different performance and why sparse hash does not produce exactly the same results as NN-hash for alpha=0. 7. Additional experiments: thanks for the suggestion. We will perform the requested experiments and will post them here at a later stage. 8. We used a single iteration of ISTA-net in all experiments. 9. Fig. 6 left: the dotted curve (m=128) for NN is there, right above KSH (dotted purple). AGH2 will be added in the updated version of the paper.
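To make the retrieval procedure in point 2 concrete, here is a small Python sketch (our own illustration, not the paper's code) of the lookup-table search with query perturbation for a small radius r; the toy codes are placeholders.

from collections import defaultdict
from itertools import combinations

def build_lut(codes):
    """codes: list of binary tuples (the database hashes)."""
    lut = defaultdict(list)
    for idx, c in enumerate(codes):
        lut[c].append(idx)
    return lut

def search(lut, query, r):
    """Return all database indices whose code is within Hamming radius r of `query`.

    r = 0 is a single lookup; for small r we perturb at most r bits of the query
    and take the union of the lookups (cost grows as m-choose-r, not with N).
    """
    m = len(query)
    results = set(lut.get(query, []))
    for k in range(1, r + 1):
        for bits in combinations(range(m), k):
            perturbed = list(query)
            for b in bits:
                perturbed[b] ^= 1                 # flip the selected bits
            results.update(lut.get(tuple(perturbed), []))
    return results

# Toy usage with 6-bit codes.
db = [(0, 1, 0, 0, 1, 0), (1, 1, 0, 0, 1, 0), (0, 0, 0, 0, 0, 0)]
lut = build_lut(db)
print(search(lut, (0, 1, 0, 0, 1, 0), r=1))       # -> {0, 1}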
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the plights of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring in very high false negatives rates or prohibiting computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy from the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
GG8MxUz35hkXl
comment
1,392,299,400,000
XMixXVP-xZMUW
[ "everyone" ]
[ "Jonathan Masci" ]
ICLR.cc/2014/conference
2014
reply: We are grateful to reviewer Anonymous a029 for his/her comments. We clarify all the critical issues below, and these points are addressed in the revision. We have produced additional evaluations to better convey our point; we invite all the reviewers to look at these results. Below are responses to the issues raised: 1. Please note that we *do not* use sparsity in the retrieval. After the code is constructed, we use the standard retrieval procedure (LUT for small r, brute force for large r; see our answers to the previous reviews) for both dense and sparse codes. Using a small r guarantees fast retrieval; however, in the dense case, it comes at the expense of recall. We show that by introducing sparsity we get high recall for small r, thus guaranteeing both fast search and high recall, which is usually impossible with standard methods. We do believe, however, that it is also possible to take advantage of sparse codes to make the retrieval more efficient, and we intend to explore this direction in future work. In particular, in our experiments we observed that the number of unique codes is significantly smaller for sparse hash than for dense hash, which potentially allows an improvement of the data structure used for retrieval (an overwhelming number of LUT entries are empty). Here are the results obtained on the CIFAR10 dataset for m=10, as the average number of database elements mapped to the same code (collisions): SparseHash 798.47; KSH 3.95; NNHash 4.83; SSH 1.01; DH 1.00; AGH 1.42. 2. As requested by the reviewer, we have computed the timing for the experiments presented in the paper, and present a 3D plot of precision/recall/retrieval time for the different methods at varying r: https://www.dropbox.com/s/td7pyc2hwwch2s1/sparsehash_timing_results.png Annotation: o (r=0), triangle (r=1), + (r=2, implemented as brute-force search). We can conclude that: - search time is controlled mainly by the radius r, which also impacts recall/precision. With dense methods it is impossible to achieve both fast search and high precision/recall; the use of sparsity makes this possible. - With sparse hash we are able to achieve orders of magnitude higher recall as well as an increase in precision at retrieval times comparable with the dense methods. - Our retrieval data structure does not currently take advantage of the code sparsity, suggesting a potentially significant further reduction in search time. 3. Evaluation methodology: we find the reviewer's worry that (m,r) is a poor predictor of search time very reasonable. However, we believe that the comparable search times reported for the same (m,r) settings across all methods suggest that this is not an issue in our specific experiments.
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the plights of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring in very high false negatives rates or prohibiting computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy from the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
BcgDqCiUDYcXQ
review
1,391,511,480,000
u-IAYCzRsK-vN
[ "everyone" ]
[ "anonymous reviewer a636" ]
ICLR.cc/2014/conference
2014
title: review of Sparse similarity-preserving hashing review: This work presents a similarity-preserving hashing scheme to produce binary codes that work well for small-radius Hamming search. The approach restricts hash codes to be sparse, thereby limiting the number of possible codes for a given length of m bits. A small-radius search within the set of valid codes will therefore have more hits, yet the total bit length can be lengthened to allow better representation of similarities. According to the authors, this is the first application of sparsity constraints in the context of binary similarity hashes. Overall I think this is a nice, well-motivated idea and corresponding implementation, with experiments on two datasets that demonstrate its effectiveness and mostly support the claims made in the motivation. As a bonus, the authors describe a further adaptation to cross-modal data. I still feel there are some links missing between the analysis, implementation and evaluation that could be made more explicit, and have quite a few questions (see below). Pros: - Well-argued idea for improving hashing by restricting the code set to enable targeting small search radii and large bit lengths - Experiments show good system performance Cons: - Links between motivational analysis, implementation and evaluation could be made more explicit - Related to that, some claims alluded to in the motivation don't appear fully supported, e.g. more efficient search for k-sparse codes doesn't seem realized beyond keeping r small Questions: - You mention comparing between k-sparse codes of length m and dense codes with the same degrees of freedom, i.e. of length log(m choose k). This seems very appropriate, but the evaluation seems to compare same-length codes between methods. Or, do the values of m reflect this comparison? m=128 v. m=48 may work if k is around 10. But I also don't see anything showing the distribution of nonzeros in the codes. - pg. 4: 'retrieving partial collisions of sparse binary vectors is ... less complex ... compared to their dense counterparts': Could this be explained in more detail? It seems a regular exhaustive Hamming search is used in the implemented system (especially since there appears to be no hard limit on k, so any small change can be valid for most codes). - The ISTA-net uses a sign-preserving shrink, and the outputs are binarized with tanh (also preserving sign) -- thus nonzeros of the ISTA-net can saturate to either +/- 1 depending on sign, while zeros are mapped to 0. These are 3 values, not 2, so how are they converted to a binary vector, and how does this align with the xi(x) - xi(x') in the loss function (which seems to count a penalty for comparing +1 with 0 and a double-penalty for comparing +1 with -1)? - Eqn. 4: Distance loss between codes is L1 instead of L2, and there is no max(0, .) on the negatives (but still a margin M). Are these errors or intended? - I'm not sure how the PR curves were generated: What ranking was used? - Table 4 last line: Says alpha=0; it seems the sparsity term would be disabled if alpha=0, so not sure why results here are better than NN-hash instead of about the same? Minor comments: - Would have liked to see more comparisons of the level of sparsity vs. precision/recall for different small r. There is a bit of this in the tables, but it would be interesting to see more comprehensive measurements here. It would be great if there were a 2d array with e.g. r on the x axis and avg number of nonzeros on the y axis, for one or more fixed m. 
- How many layers/iterations were used in the ISTA-net? - Fig 6 left, curves for m=128 appear missing for nnhash and agh2
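A sketch of the kind of pairwise loss being discussed in the Eqn. 4 question above, as far as it can be reconstructed from this thread: an L1 distance between codes, a hinge with margin M on negative pairs, and an alpha-weighted L1 sparsity penalty. This is an assumption-laden illustration, not the paper's exact formula; the default values of M and alpha follow the typical values mentioned elsewhere in the thread, and the class-balancing weight lambda is omitted.

import numpy as np

def sparse_hash_loss(xi_x, xi_y, similar, M=7.0, alpha=0.01):
    """Illustrative siamese loss on two code vectors xi_x, xi_y in [-1, 1]^m.

    similar: True for a positive (similar) pair, False for a negative pair.
    Positives are pulled together in L1 distance; negatives are pushed apart
    with a hinge up to margin M; alpha weighs an L1 sparsity penalty.
    """
    d = np.abs(xi_x - xi_y).sum()                 # L1 distance between the codes
    pair_term = d if similar else max(0.0, M - d)
    sparsity = alpha * (np.abs(xi_x).sum() + np.abs(xi_y).sum())
    return pair_term + sparsity

# Toy check: a negative pair farther apart than the margin contributes no pair loss.
a = np.array([1.0, 1.0, -1.0, -1.0])
b = np.array([-1.0, -1.0, 1.0, 1.0])
print(sparse_hash_loss(a, b, similar=False))      # hinge is zero; only the sparsity term remains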
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the shortcomings of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring very high false negative rates or prohibitive computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with an equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
LRcNRh6ZNB7T9
comment
1,391,965,500,000
TR-OT62E8KhmZ
[ "everyone" ]
[ "Jonathan Masci" ]
ICLR.cc/2014/conference
2014
reply: We are grateful to Anonymous d060 for the interesting comments and for appreciating our work. Some of the same comments are already addressed in the updated v2 of the arXiv report that should appear on Mon Feb 10. Since d060 has spotted the same issue as a636, we kindly ask the reviewer to also look at our previous response to a636. We will incorporate the new comments and upload a new version to arXiv. 1. Retrieval: For the large radius (r=m) the search is done exhaustively with complexity O(N), where N is the database size. For r=0 (exact collisions), the search is done as a LUT: the query code is fed into the lookup table, which contains all entries in the database having the same code. The complexity is O(m), independent of N. For small values of r (partial collisions), the search is done as for r=0 using perturbations of the query: at most r bits of the query are changed, and each perturbed query is fed into the LUT. The final result is the union of all the retrieval results, with complexity on the order of (m choose r) lookups (a sketch of this procedure is given at the end of this reply). For this reason, one seeks to use a small radius to obtain efficient search. With a dense hash, this comes at the expense of very low recall, as we explain theoretically and show experimentally. With a sparse hash, we are able to control the exponential growth of the Hamming ball volume, thus resulting in much higher recall. It is correctly noted by the reviewer that our method does not explicitly guarantee a fixed sparsity (it is possible to use a different NN architecture, derived from coordinate-descent pursuit (CoD), to guarantee at most k non-zeros in the codes). However, we see in practice that the number of non-zeros in our codes is approximately fixed (e.g. for the CIFAR10 sparse hash of length m=128 we get codes containing on average 7.6 +/- 2.3 non-zero bits), and the behavior of the codes is similar to the theoretical case with 'guaranteed' sparsity. To emphasize, sparsity is used to obtain codes that exhibit higher recall at low search radius r (in particular, r=0). The search is done in the standard way described above, without making a distinction between the sparse and dense cases. Note that the lack of exact control of sparsity is common in l1 optimization problems, though as mentioned above the approximate control was found to be sufficient for this application as well. 2. Formula 4: We fixed formula (4), which was missing the max(0,.) term. 3. Calculating neighbors with large radii: In our experiments, we used three radii: r=0 (collisions), r=2 and r=m (full radius). In the latter setting, we used 'brute force' search, going exhaustively through the whole database. 4. Parameter settings: The parameters were set empirically. We should stress that we have not optimized these parameters, as we observed that setting them more or less arbitrarily provided performance significantly better than the competing dense hashing methods. Lambda is initially set to 1 and afterwards reduced to balance the positive and negative classes if needed. A value of 0.1 on CIFAR10, for example, equally weights the positives and negatives. On NUS we used 0.3 instead because each sample can belong to multiple classes, and therefore the distinction between positives and negatives is not as clear as for CIFAR10. Alpha is set to a small value such as 0.01 and decreased by a factor of 10 according to the desired sparsity level. The margin M is usually 7 and we would suggest using this value, as we did extensive evaluation in previous work. In the experiments we increased it along with a higher alpha to check if this would allow better binarization.
Ideally, a large margin tends to saturate units, and a large sparsity should further favor this phenomenon. In the multimodal case mu_1 and mu_2 are set to 0.5 to use the respective modalities as regularization for the cross-modal embedding. We would suggest using this configuration and changing it only if needed.
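To make the retrieval procedure described in point 1 concrete, here is a minimal sketch of the radius-r lookup-table search via query perturbation; the data structures and function names are illustrative assumptions, not code from the paper:

```python
# Hedged sketch of LUT-based Hamming-radius retrieval: r=0 is a single
# lookup; small r enumerates every code within Hamming distance r of the
# query and takes the union of the corresponding buckets.
from itertools import combinations
from collections import defaultdict

def build_lut(codes):
    """codes: dict item_id -> tuple of m bits (0/1). Returns code -> set of ids."""
    lut = defaultdict(set)
    for item_id, code in codes.items():
        lut[tuple(code)].add(item_id)
    return lut

def retrieve(lut, query, r):
    """Union of buckets for all codes within Hamming distance r of the query."""
    m = len(query)
    results = set(lut.get(tuple(query), set()))
    for num_flips in range(1, r + 1):
        for positions in combinations(range(m), num_flips):  # C(m, num_flips) perturbations
            perturbed = list(query)
            for p in positions:
                perturbed[p] ^= 1
            results |= lut.get(tuple(perturbed), set())
    return results
```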
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the shortcomings of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring very high false negative rates or prohibitive computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with an equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
mYLe3dIVG_lnh
review
1,392,652,500,000
u-IAYCzRsK-vN
[ "everyone" ]
[ "Jonathan Masci" ]
ICLR.cc/2014/conference
2014
review: A new version (v3) of the paper will be available on Tue, 18 Feb 2014 at 01:00:00 GMT.
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the shortcomings of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring very high false negative rates or prohibitive computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with an equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
TR-OT62E8KhmZ
review
1,391,867,160,000
u-IAYCzRsK-vN
[ "everyone" ]
[ "anonymous reviewer d060" ]
ICLR.cc/2014/conference
2014
title: review of Sparse similarity-preserving hashing review: The authors propose to use a sparse locality-sensitive hash along with an appropriate hashing function. The retrieval of sparse codes has different behaviour than that of dense codes, which results in better performance than other methods, especially at low retrieval radius where computation is cheaper. Novelty: Good (as far as I know). Quality: Good, clearly explained except for a few details (see below), with experimental evaluation. Details: - How do you define retrieval at a given radius for sparse codes? With two bit flips, say, there are the same number of neighbours whether the code is dense or sparse. With the encoding proposed you don't have a guarantee that only k values will be nonzero - it is not a strict bound. How do you define the neighbours - as only those that have the same or lower sparsity? - Is formula (4) correct, specifically line 2? I assume it should be more like line 2 of eq. (3). - In the experiments, how did you calculate neighbours at such large radii - the number of neighbours grows as Choose(m, r). - There are a lot of hyperparameters: eq. 4: lambda, alpha, M + eq. 5: mu_1, mu_2. How did you choose these? If your answer is 'cross validation' - how? These are a lot of parameters to cross-validate. Do you have any good ways to set these? - Even though this is basic, it would be good to explain in section 2 (Efficient Retrieval) how the retrieval is defined - what is the situation?
YDXrDdbom9YCi
Large-scale Multi-label Text Classification - Revisiting Neural Networks
[ "Jinseok Nam", "Jungi Kim", "Iryna Gurevych", "Johannes Fürnkranz" ]
Large-scale datasets with multiple labels are becoming readily available, and the demand for large-scale multi-label classification algorithms is also increasing. In this work, we propose to utilize a single-layer Neural Network approach in large-scale multi-label text classification tasks with recently proposed learning techniques. We carried out experiments on six textual datasets with varying characteristics and size, and show that a simple Neural Network model equipped with recent advanced techniques for Neural Network components, such as the activation layer, optimization, and generalization, performs as well as or even outperforms the previous state-of-the-art approaches on large-scale datasets with diverse characteristics.
[ "neural networks", "text classification", "datasets", "available", "demand", "classification algorithm", "work", "text classification tasks", "learning techniques", "experiments" ]
https://openreview.net/pdf?id=YDXrDdbom9YCi
https://openreview.net/forum?id=YDXrDdbom9YCi
1QRFgLal6wk-f
review
1,391,466,660,000
YDXrDdbom9YCi
[ "everyone" ]
[ "anonymous reviewer 1ddd" ]
ICLR.cc/2014/conference
2014
title: review of Large-scale Multi-label Text Classification - Revisiting Neural Networks review: This paper tackles the problem of multi-label classification using a single-layer neural network. Several options are considered, such as thresholding, various transfer functions, and dropout. The paper is full of imprecisions and writing errors that make it hard to read (the paragraph 'Computational Expenses' of Section 2.2 is perhaps the worst from this point of view), even if it is possible to get most of the content. The whole paper is based on a comparison with BP-MLL, which is quite problematic for two reasons. First, it is not clear that BP-MLL is the most suitable baseline. What motivated this choice? Their pairwise exponential loss function is not standard (most ranking methods instead choose the hinge loss). What is their architecture? We need more arguments to assess this as a strong benchmark. Besides, the comparison made in the paper between the proposed network and BP-MLL is not valid: influences of architecture, choice of transfer functions and loss are all mixed. At the end of the 'Plateaus' paragraph, it is suggested that ReLUs allow the model to outperform BP-MLL, but no experiment with BP-MLL with ReLU has been conducted. Perhaps the loss of BP-MLL (PWE) could be as efficient as cross entropy with ReLU. Is BP-MLL used as a benchmark as a network or simply as a loss (exponential)? What are the results with the hinge loss? Most results about neural networks are not particularly new. It is already quite well known that (1) Dropout prevents overfitting and (2) networks with ReLUs are easier to optimize than those with Tanh. In that sense, Figure 3, which is quite nice and sound, and Section 5 do not bring many new results to practitioners already used to Deep Learning (such as the ICLR audience, I guess). The curves from Figure 2 are more original, but I don't agree with how they were created. What justifies that what is observed with such a weird network (a single hidden unit) can transfer to more generic feed-forward NNs? Besides, the conclusions regarding the 'plateaus' are not really obvious: it seems that there are plateaus for all settings. Is it expected that the curves with tanh appear to be symmetric w.r.t. the origin? Other comments: - I don't see what 'Revisiting Neural Networks' means in the title of the paper. - In Equation (2), what is the definition of $y_l$? - Legend of Fig 1(b): CE w/ ReLU -> CE w/ tanh - ReLU is used from the beginning of Section 2 but defined in Section 2.3. - How is Delta defined for ADAGRAD? (A generic formulation is sketched below.) - Couldn't it be interesting to try to learn thresholds using the class probabilities outputted by the network? - The metrics used in the experiments could be described. - Discussion on the training efficiency of linear SVMs in Section 6.2 seems irrelevant here.
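On the ADAGRAD question above, a commonly used formulation is sketched here (a generic sketch; whether Delta in the paper denotes exactly this stability constant is an assumption):

```python
# Generic AdaGrad update. `delta` is the small constant added for numerical
# stability, which is presumably what the paper's Delta refers to (an
# assumption -- the paper may define it differently).
import numpy as np

def adagrad_step(theta, grad, cache, lr=0.01, delta=1e-8):
    cache = cache + grad ** 2                          # running sum of squared gradients
    theta = theta - lr * grad / (np.sqrt(cache) + delta)
    return theta, cache
```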
YDXrDdbom9YCi
Large-scale Multi-label Text Classification - Revisiting Neural Networks
[ "Jinseok Nam", "Jungi Kim", "Iryna Gurevych", "Johannes Fürnkranz" ]
Large-scale datasets with multiple labels are becoming readily available, and the demand for large-scale multi-label classification algorithms is also increasing. In this work, we propose to utilize a single-layer Neural Network approach in large-scale multi-label text classification tasks with recently proposed learning techniques. We carried out experiments on six textual datasets with varying characteristics and size, and show that a simple Neural Network model equipped with recent advanced techniques for Neural Network components, such as the activation layer, optimization, and generalization, performs as well as or even outperforms the previous state-of-the-art approaches on large-scale datasets with diverse characteristics.
[ "neural networks", "text classification", "datasets", "available", "demand", "classification algorithm", "work", "text classification tasks", "learning techniques", "experiments" ]
https://openreview.net/pdf?id=YDXrDdbom9YCi
https://openreview.net/forum?id=YDXrDdbom9YCi
FMBUveVoQjvA1
review
1,391,405,160,000
YDXrDdbom9YCi
[ "everyone" ]
[ "anonymous reviewer 3ccf" ]
ICLR.cc/2014/conference
2014
title: review of Large-scale Multi-label Text Classification - Revisiting Neural Networks review: The authors claim that a simple two-layer fully connected neural net can outperform or meet state of the art accuracies on large, multi-label classification tasks, using rectified linear units and dropout. They make novel use of L2 regression to compute a data-dependent threshold to map model outputs to class labels. While the paper is interesting, the baselines seem weak, in particular, the comparison against a neural net ranker with exponential versus cross-entropy loss. The authors make a point several times in the paper with which I disagree - that ranking approaches do not scale to large datasets. The existence of effective web search engines is strong evidence to the contrary. Specific comments ***************** There are many typos and grammatical errors but few that change the meaning, so below I only mention the latter. Eq. (2): For clarity's sake it might be worth mentioning that f_0 has range [0,1] and that the labels y are in {0,1}. Fig. (2): Yes, but the high slope of ReLU can also cause problems if there is noise in the data (which appears as outliers). I think that a much better comparison than the one with BP-MLL would be to compare with RankNet, a neural net ranker that uses a cross entropy error function and thus is much closer to the cost function used in the paper. It does seem to make sense to rank all outputs corresponding to positive labels above all outputs corresponding to negative; the question left open by your comparison is whether the problem lies with the use of the exponential cost in BP-MLL. Unfortunately there are at least two confounding factors that you have not separated in comparing BP-MLL versus your method: the use of ranking at all, and the choice of ranking cost function. So I think it's important to compare against a cross entropy ranking function. 'The ReLU disables negative activation so that the number of parameters to be learned decreases during the training.' - what do you mean? Section 2.3: define the Delta used in Adagrad Section 3: this is a nice idea, for choosing data dependent thresholds - is it novel (for multi label classification)? Sec. 4.1: it would be useful to give brief definitions of the ranking measures used ('Rank loss, one-error, etc.'), since, except for MAP they (or at least, these names for them) are not well known. Sec 4.2: my impression is that your choice of SVM baseline is weak (also, few details are given). Binary relevance seems like a poor choice for the multilabel task. Why not compare to SVM rankers? And since you're comparing to a nonlinear system, reporting results using nonlinear kernels would be good to be more complete (although linear SVMs usually do work well on text classification, this is a different task). Table 2: 'B and R followed by BR are to represent' should read 'B and R following BR represent' and would be even better written as simply 'B and R subscripts represent' The results in Table 2 seem to be mixed. Sometimes dropout helps, sometimes it hurts. Sometimes the SVMs win, often not. The only take-away that seems safe to me, is that if you want the best performing system, then try all these methods (and other, stronger baselines - see above) on your data, and pick the best. I think the most interesting result is that NNs (lumping with and without dropout together) beat linear SVMs, which is usually the strongest performing method for text classification. 
But this task is different, and so it would have made more sense to compare against SVM rankers. Given the use of ReLU transfer functions, what difference did L1 regularization on the hidden activations make? Fig. 3a is great. Fig. 3b is a little misleading, though - you chose the one dataset where adding dropout helped. What does that curve look like on a dataset where adding dropout hurt? Section 6.1: 'the presence of a specific label may suppress or exhibit the probability of presence of other labels' - I think you just mean that some sets of labels tend to occur together, and this is ignored in binary relevance methods, but please explain this more clearly.
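For concreteness regarding the SVM baseline discussion above, the standard binary-relevance setup with linear SVMs typically looks like the following (a generic sketch with toy data and scikit-learn, not the authors' exact configuration):

```python
# Binary relevance: one independent linear SVM per label, ignoring label
# dependencies. Toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

docs = ["sparse coding of sounds", "neural networks for text", "convnets for images"]
labels = [["audio"], ["nlp", "ml"], ["vision", "ml"]]

Y = MultiLabelBinarizer().fit_transform(labels)
X = TfidfVectorizer().fit_transform(docs)
clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
print(clf.predict(X[:1]))   # binary indicator vector over the label set
```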
YDXrDdbom9YCi
Large-scale Multi-label Text Classification - Revisiting Neural Networks
[ "Jinseok Nam", "Jungi Kim", "Iryna Gurevych", "Johannes Fürnkranz" ]
Large-scale datasets with multiple labels are becoming readily available, and the demand for large-scale multi-label classification algorithms is also increasing. In this work, we propose to utilize a single-layer Neural Network approach in large-scale multi-label text classification tasks with recently proposed learning techniques. We carried out experiments on six textual datasets with varying characteristics and size, and show that a simple Neural Network model equipped with recent advanced techniques for Neural Network components, such as the activation layer, optimization, and generalization, performs as well as or even outperforms the previous state-of-the-art approaches on large-scale datasets with diverse characteristics.
[ "neural networks", "text classification", "datasets", "available", "demand", "classification algorithm", "work", "text classification tasks", "learning techniques", "experiments" ]
https://openreview.net/pdf?id=YDXrDdbom9YCi
https://openreview.net/forum?id=YDXrDdbom9YCi
jArLXVnW-4AIQ
review
1,392,664,560,000
YDXrDdbom9YCi
[ "everyone" ]
[ "Jinseok Nam" ]
ICLR.cc/2014/conference
2014
review: Thanks for the helpful reviews. We are currently working on improving our paper along the lines of your suggestions. We noticed that we have not been clear enough about a few important points, which we would like to clarify in this first reply (more details will follow later): 1. learning-to-rank vs. multi-label classification Although we frequently talk about ranking in our paper, our objective is not learning-to-rank. Learning-to-rank aims at ranking a set of objects (such as documents), whereas our goal is to assign a set of labels to a given document (as opposed to conventional multi-class classification, where only a single label is assigned to the document). Multi-label classification is often framed as a ranking problem, but in this case the labels need to be ranked for a given document (as opposed to ranking the documents themselves). Many commonly used loss functions for multi-label classification focus on a good ranking of the labels (a good ranking is one where all relevant labels tend to be ranked before all irrelevant labels). 2. The pairwise hinge loss and the pairwise exponential loss to minimize ranking loss It is said that the pairwise hinge loss in RankSVM [2] and the pairwise exponential loss in BP-MLL are natural choices as the surrogate loss for the ranking loss [4]. However, these surrogate losses are not consistent with the ranking loss that we want to minimize. 3. Are BP-MLL and BR (linear SVMs) reasonable choices for the baselines? Neural Network-based algorithms are particularly interesting for multilabel classification because they allow modeling dependencies between the occurrence of labels, whereas the standard binary relevance approach assumes that the occurrence of a label for an example is independent of the occurrence of other labels for this example, an assumption that is typically wrong in practice. BP-MLL is a well-known NN architecture which exploits a pairwise error function instead of the traditional cross entropy loss. The main claim that we want to make in this paper is that Neural-Network-based approaches to multilabel classification may benefit from several recent advancements that have been developed in Deep Learning, and that the minimization of the pairwise error function may be replaced with something simpler such as the cross entropy loss. Thus BP-MLL is a natural benchmark, because it is the prototypical Neural-Network-based multilabel classification algorithm, which is often used as a baseline [5]. Binary relevance, on the other hand, is in many domains not a strong baseline for the reasons discussed above, but in text domains in particular it is still commonly used and has shown very good results. Several recent works have shown that the BR approach may outperform counterparts which model dependencies between labels. Also note that recent analyses have shown that ranking losses may be minimized by loss functions on the individual labels [1, 4], which may be part of an explanation why binary relevance with SVMs as base classifiers often tends to perform well in practice even though it does not take label dependencies into account. Reviewer 1 - Section 3: this is a nice idea, for choosing data dependent thresholds - is it novel (for multi-label classification)?: The basic idea of this sort of threshold has been discussed in several papers [2, 8, 6]. Instead of minimizing bipartite misclassification errors, which is the sum of the number of false positives and false negatives, we use the F1 score as a reference measure (a sketch of this selection procedure is given at the end of this reply).
- Given the use of ReLU transfer functions, what difference did L1 regularization on the hidden activations make?: We follow the recent findings from [8], where deep neural networks are constructed by using ReLUs at the hidden layers, together with L1 regularization on the activations. Even though, as we expected, stronger L1 regularization makes the average value of the positive hidden activations decrease, we did not find meaningful differences. - Fig. 3a is great. Fig. 3b is a little misleading, though - you chose the one dataset where adding dropout helped. What does that curve look like on a dataset where adding dropout hurt?: The performance of NNs with or without dropout is highly correlated with the properties of the datasets. When we train NNs on the EUR-Lex and Delicious datasets, the networks tend to overfit severely, as shown by the red dashed lines in Figure 3(b). Both datasets have a relatively large number of labels compared to the number of training documents and unique terms. More precisely, on the Delicious dataset the number of distinct labels is larger than the number of unique words, and each document is associated with nearly 20 labels on average. Additionally, one third of the labels in the EUR-Lex dataset does not appear in the training split; that is, the label distribution of the training data and that of the test data are different. To prevent overfitting due to such characteristics of the datasets, we tried to regularize the models with L1 and L2 penalties as well as Dropout, but only Dropout works. On the rest of the datasets, even when we increase the number of units in the hidden layer up to 2000 and 4000, no such severe overfitting is observed. We conjecture that this is why dropout does not help training in those cases; instead, it introduces undesirable noise into the models. We will include figures for the cases where dropout does not help. Reviewer 2 - the comparison made in the paper between the proposed network and BP-MLL is not valid: influences of architecture, choice of transfer functions and loss are all mixed. At the end of the 'Plateaus' paragraph, it is suggested that ReLUs allow the model to outperform BP-MLL, but no experiment with BP-MLL with ReLU has been conducted. Perhaps the loss of BP-MLL (PWE) could be as efficient as cross entropy with ReLU. Is BP-MLL used as a benchmark as a network or simply as a loss (exponential)? What are the results with the hinge loss?: We used BP-MLL as it was proposed, where the hidden units and the output units are tanh and the error function is the pairwise error function. Comparing ReLU and tanh in the hidden layer of BP-MLL, tanh often performs better than ReLU. We will add additional experimental results with respect to the type of hidden units in BP-MLL. - The curves from Figure 2 are more original, but I don't agree with how they were created. What justifies that what is observed with such a weird network (a single hidden unit) can transfer to more generic feed-forward NNs?: It is hard to draw the error as a function of the parameters in general neural networks as in Figure 2, because the number of parameters in the NN equals '1 + the number of hidden layers', assuming that each hidden layer has a single unit. - the conclusions regarding the 'plateaus' are not really obvious: it seems that there are plateaus for all settings.: We can only say that, given all the same settings such as input data, output targets and weight initialization methods, one curve is steeper than the other.
What we want to show in Figure 2 is that the use of different cost functions yields different curves, where PWE has larger plateaus compared to CE. Obviously, using the ReLU unit in the hidden layer gives rise to an absolutely flat region as in Figure 2(b), which implies it is impossible to escape from such a region once the hidden unit's activation is driven to zero. That is a downside of the use of ReLUs, as Reviewer 1 pointed out. - Couldn't it be interesting to try to learn thresholds using the class probabilities outputted by the network?: To learn an instance-wise threshold predictor, the class probabilities are used to estimate the best performing threshold on the training instances, from which we learn the threshold predictor. If one wants to train a label-wise threshold predictor, a variant of ScutFBR might be considered [7, 3]. References [1] Krzysztof Dembczynski, Wojciech Kotlowski, and Eyke Hüllermeier. Consistent multilabel ranking through univariate losses. In ICML, 2012. [2] André Elisseeff and Jason Weston. A kernel method for multi-labelled classification. In NIPS, 2001. [3] Rong-En Fan and Chih-Jen Lin. A study on threshold selection for multi-label classification. Technical Report, National Taiwan University, 2007. [4] Wei Gao and Zhi-Hua Zhou. On the consistency of multi-label learning. In COLT, 2011. [5] Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. Random k-Labelsets for Multilabel Classification. IEEE Trans. Knowl. Data Eng., 23(7):1079–1089, 2011. [6] Yiming Yang and Siddharth Gopal. Multilabel classification with meta-level features in a learning-to-rank framework. Machine Learning, pages 1–22, 2011. [7] Yiming Yang. A study of thresholding strategies for text categorization. In SIGIR, 2001. [8] Min-Ling Zhang and Zhi-Hua Zhou. Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization. IEEE Trans. Knowl. Data Eng., 18(10):1338–1351, 2006.
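Returning to the threshold-selection answer to Reviewer 1 above, a hedged sketch of the procedure (the candidate-threshold scan and the choice of a ridge regressor as the predictor are assumptions, not details confirmed in this reply):

```python
# For each training instance, pick the cutoff on the predicted label scores
# that maximizes instance-wise F1, then fit a regressor that predicts this
# cutoff from the score vector. Details here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def best_threshold(scores, y_true):
    """Scan midpoints between sorted scores; return the F1-maximizing cutoff."""
    order = np.sort(scores)
    candidates = (order[:-1] + order[1:]) / 2.0
    best_t, best_f1 = order[0] - 1e-3, -1.0
    for t in candidates:
        pred = (scores > t).astype(int)
        tp = float(np.sum(pred * y_true))
        f1 = 2 * tp / (pred.sum() + y_true.sum() + 1e-12)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

def fit_threshold_predictor(scores, Y):
    """scores: (n, n_labels) network outputs; Y: binary label matrix."""
    targets = np.array([best_threshold(s, y) for s, y in zip(scores, Y)])
    return Ridge(alpha=1.0).fit(scores, targets)
```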
YDXrDdbom9YCi
Large-scale Multi-label Text Classification - Revisiting Neural Networks
[ "Jinseok Nam", "Jungi Kim", "Iryna Gurevych", "Johannes Fürnkranz" ]
Large-scale datasets with multiple labels are becoming readily available, and the demand for large-scale multi-label classification algorithms is also increasing. In this work, we propose to utilize a single-layer Neural Network approach in large-scale multi-label text classification tasks with recently proposed learning techniques. We carried out experiments on six textual datasets with varying characteristics and size, and show that a simple Neural Network model equipped with recent advanced techniques for Neural Network components, such as the activation layer, optimization, and generalization, performs as well as or even outperforms the previous state-of-the-art approaches on large-scale datasets with diverse characteristics.
[ "neural networks", "text classification", "datasets", "available", "demand", "classification algorithm", "work", "text classification tasks", "learning techniques", "experiments" ]
https://openreview.net/pdf?id=YDXrDdbom9YCi
https://openreview.net/forum?id=YDXrDdbom9YCi
alkAlHVICypit
review
1,391,695,620,000
YDXrDdbom9YCi
[ "everyone" ]
[ "anonymous reviewer 9528" ]
ICLR.cc/2014/conference
2014
title: review of Large-scale Multi-label Text Classification - Revisiting Neural Networks review: The paper describes a series of experiments on multi-labeled text classification using neural networks. The classification model is a simple one-hidden-layer NN integrating rectifier units and dropout. A comparison of this model with baselines (binary relevance and a ranking NN) is performed on 6 multi-labeled datasets with different characteristics. This is an experimental paper. The NN model is a classical MLP and the only new algorithmic contribution concerns the prediction of decision thresholds for binary decisions. On the other hand, the experimental comparison is extensive and allows the authors to examine the benefits of recent improvements for NNs on the task of text classification. The dataset characteristics are representative of different text classification problems (dataset and vocabulary size, label cardinality). Different loss functions are also used for the evaluation. The paper could thus be useful for popularizing NNs for text classification. I am not sure that BP-MLL is a reference algorithm for learning to rank, and there are probably better candidates. On the other hand, it is true that the simple binary framework (BR in the paper) is a strong baseline despite its simplicity, so the results could be considered significant. An extension of this work could be to consider large-scale problems (both in the vocabulary size and in the number of categories, since several benchmarks are now available with a very large number of classes - see for example the LSHTC challenges). A deeper discussion of the respective complexity of the different approaches and of their behavior when the number of training examples varies (learning curves) would strengthen the paper.
Hq5MgBFOP62-X
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
[ "Michael Mathieu", "Yann LeCun", "Rob Fergus", "David Eigen", "Pierre Sermanet", "Xiang Zhang" ]
We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learnt simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013), and produced near state-of-the-art results for the detection and classification tasks. Finally, we release a feature extractor from our best model called OverFeat.
[ "localization", "detection", "overfeat", "integrated recognition", "convolutional networks", "integrated framework", "convolutional networks overfeat", "classification", "multiscale", "window" ]
https://openreview.net/pdf?id=Hq5MgBFOP62-X
https://openreview.net/forum?id=Hq5MgBFOP62-X
AiU-7_Wwg37jx
review
1,391,813,220,000
Hq5MgBFOP62-X
[ "everyone" ]
[ "anonymous reviewer be85" ]
ICLR.cc/2014/conference
2014
title: review of OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks review: The paper presents a method for object recognition, localization and detection that uses a single convolutional neural network to locate and classify objects in images.  The basic architecture is similar to Krizhevsky’s ImageNet 2012 system, but with modifications to apply it efficiently in a “sliding window” fashion and to produce accurate bounding boxes by training a regressor to predict the precise position of the bounding box relative to the sliding window detector.  Numerous tweaks are documented for making this system work well including:  multi-scale evaluation over widely-spaced scales, custom shifting/pooling at the top layers to help compensate for spatial subsampling of the network, per-class regressors to localize objects relative to the input window, and a simple merging procedure to combine the regressed boxes into final detections.  This method is shown to achieve state-of-the-art results on ImageNet localization and detection tasks while also being relatively fast. Overall, this paper presents a very thorough accounting of a fully functioning detection pipeline based on convnets that is the top performer on one of the toughest vision tasks around.  One of the challenges with reporting results like this is to make them reproducible, and I think this paper includes all of the details that a researcher would need to do so, which is really excellent.   There is currently a lot of work on detection architectures (e.g., from Erhan et al.) but this one is fairly complete and high-performing.  So, while there aren’t huge new ideas here, considering the depth of experiments and the cornucopia of tricks for maximizing performance the work looks very worthwhile. Pros: End-to-end training of the entire detection and localization pipeline.  The decomposition into 3 clean stepping stones (classifier, localizer, detector) is a nice strategy. State-of-the-art detection performance on Image-Net. Cons: Somewhat “specialized” convnet architecture to deal with subsampling issues and multi-scale (e.g., it is mentioned that the detector of Fig 11 also uses multiple scales for context) Other: The text is very detailed in order to make the system reproducible.  This is great, but perhaps some of the tables and minor notes [parameter settings, etc.] could be moved to an appendix to tighten up the text.
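As a point of reference for the merging procedure mentioned above, a generic accumulation-style box merge might look like the following (this is an illustrative stand-in, not the exact match/merge criterion used in the paper):

```python
# Greedy box accumulation: repeatedly merge pairs of boxes with high overlap
# by confidence-weighted averaging of coordinates, summing their scores
# instead of suppressing one of them. Illustrative only.
def iou(a, b):
    ax1, ay1, ax2, ay2, _ = a
    bx1, by1, bx2, by2, _ = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / (union + 1e-9)

def merge_boxes(boxes, thresh=0.5):
    """boxes: list of (x1, y1, x2, y2, score); returns accumulated boxes."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > thresh:
                    a, b = boxes[i], boxes[j]
                    wa, wb = a[4], b[4]
                    fused = tuple((wa * a[k] + wb * b[k]) / (wa + wb) for k in range(4))
                    boxes[j] = fused + (wa + wb,)
                    del boxes[i]
                    merged = True
                    break
            if merged:
                break
    return boxes
```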
Hq5MgBFOP62-X
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
[ "Michael Mathieu", "Yann LeCun", "Rob Fergus", "David Eigen", "Pierre Sermanet", "Xiang Zhang" ]
We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learnt simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013), and produced near state-of-the-art results for the detection and classification tasks. Finally, we release a feature extractor from our best model called OverFeat.
[ "localization", "detection", "overfeat", "integrated recognition", "convolutional networks", "integrated framework", "convolutional networks overfeat", "classification", "multiscale", "window" ]
https://openreview.net/pdf?id=Hq5MgBFOP62-X
https://openreview.net/forum?id=Hq5MgBFOP62-X
QV0KQRSaXWk1w
review
1,392,094,080,000
Hq5MgBFOP62-X
[ "everyone" ]
[ "anonymous reviewer c233" ]
ICLR.cc/2014/conference
2014
title: review of OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks review: The authors present a system that is capable of classification, localization and detection, to the advantage of all three. The starting point of Krizhevsky's 2012 work is adapted to produce an architecture capable of simultaneously solving all tasks, with the feature extraction component being shared across tasks. They propose to scan the entire convolutional net across the entire image at several scales, reusing the relevant computations from overlapping regions already processed, making both classification and (per-class) bounding box predictions at each location, together with a clever scheme for aggregating evidence for the bounding box across predictions, and lots of other tricks. They evaluate on ILSVRC 2012 and 2013 tasks with excellent results. Novelty: low-medium Quality: high Pros - This is an excellent piece of applied vision research. The results on ILSVRC speak for themselves, really. - It brings to light some potentially non-obvious (to a convolutional networks neophyte) advantages of convolutional nets for tasks like this, namely that dense application across the whole image comes almost for free computationally. - Details are copiously documented: this is an excellent example of authors paying serious mind to the reproducibility of their work, a tendency that is sorely lacking in computer vision and machine learning in general. Please keep up the good work in future publications. Cons - There isn't a lot of methodological novelty, although there is some in the tricks employed to deal with subsampling, etc. (to my knowledge, these are novel). That said, it is a tour-de-force application paper, so I don't see this as a serious drawback. - The only serious barrier to publication I see is clarity of exposition in certain parts. - Some discussion of not just which hyperparameters were chosen but how and why, including some rationale for departures from Krizhevsky's architecture, would be nice (e.g. the learning rate schedule, your choice to drop contrast normalization, non-overlapping pooling regions, etc.). Providing the details as statements of fact is good, but insights into how you made some of these decisions would make for more compelling reading, especially to those familiar with the Krizhevsky work. - The organization of the paper could also use work: essential ideas should be distilled into an 8ish page manuscript and the details (which, as I said, are an extremely positive feature of this paper) relegated to an appendix. Detailed comments: - If you have some rough idea of the relative importance of horizontal flipping and other tricks described in 3.3, it would be useful to know. I don't expect an exhaustive ablative analysis, but even an informal statement as to which elements seem to be the most critical would be interesting. - The exposition on "dense application" and why this is a computational savings is less clear than it could be (section 3.5). Basically what you are trying to get across, I think, is that applying a convolutional net to every PxP window of an MxN image, where P << M and P << N, can be performed efficiently by convolving each layer of filters and doing the pooling and so on with the entire image at once, reusing computation for overlapping window regions; thus it is much more efficient than if you had some arbitrary black box that you had to apply at every window location, reusing no computation whatsoever.
However the text was very unclear on this point, and a reader with less background may not understand what you mean (which would be a shame, as this is a very important point, practically speaking). I’m sure I’ve heard this idea spoken of before -- is this the first time it’s appeared in print? If not, I’d make sure to include a citation.
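To illustrate the dense-application point above, here is a minimal sketch with a toy fully-convolutional net in PyTorch (this is not the OverFeat architecture; it only demonstrates the shared-computation equivalence):

```python
# Running a small fully-convolutional net over a large image gives, at each
# output location, the same result as cropping the corresponding PxP window
# and running the net on that crop -- but computation shared by overlapping
# windows is reused. Toy two-layer net, not the OverFeat architecture.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5), nn.ReLU(),
    nn.Conv2d(8, 4, kernel_size=5),            # effective window: 9x9 pixels
)

image = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    dense = net(image)                          # (1, 4, 56, 56): every window at once
    crop = net(image[:, :, 10:19, 17:26])       # one 9x9 window -> (1, 4, 1, 1)

assert torch.allclose(dense[:, :, 10, 17], crop[:, :, 0, 0], atol=1e-5)
```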
Hq5MgBFOP62-X
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
[ "Michael Mathieu", "Yann LeCun", "Rob Fergus", "David Eigen", "Pierre Sermanet", "Xiang Zhang" ]
We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learnt simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013), and produced near state-of-the-art results for the detection and classification tasks. Finally, we release a feature extractor from our best model called OverFeat.
[ "localization", "detection", "overfeat", "integrated recognition", "convolutional networks", "integrated framework", "convolutional networks overfeat", "classification", "multiscale", "window" ]
https://openreview.net/pdf?id=Hq5MgBFOP62-X
https://openreview.net/forum?id=Hq5MgBFOP62-X
11m21yPQdjv49
review
1,393,276,320,000
Hq5MgBFOP62-X
[ "everyone" ]
[ "David Eigen" ]
ICLR.cc/2014/conference
2014
review: We would like to thank all the reviewers for their comments and feedback. We have integrated many of the suggestions into a new version (v4) of the paper, and are continuing to make revisions. This version has been submitted to ArXiv and will appear later today, on Tue, 25 Feb 2014 01:00:00 GMT. In response to your comments: - 'some of the tables and minor notes ... could be moved to an appendix'; - 'essential ideas should be distilled ... and the details ... relegated to an appendix' Thanks for these suggestions. We are currently working to factor out the details and make the paper more succinct. Some progress on this has already been made in the newest version (v4), and we are now working on another revision with further editing. - 'exposition on “dense application” and why this is a computational savings is less clear than it could be' - 'clarity of exposition in certain parts' This section has been updated in the new version (v4), and should be clearer. Many other parts of the text have also been revised, and we are continuing to make edits for this. - 'Some discussion of not just which hyperparameters were chosen but how and why' - 'rough idea of the relative importance of horizontal flipping and other tricks described in 3.3, it would be useful to know' - 'more detail on computational efficiency/accuracy compromise' Thanks for the suggestions. We will try to discuss some of these questions more in the next revision (v5) of the paper (this has not yet been included in v4). We do not have systematic comparisons for many of these, though, but further studies of them could make good followup work. - 'results on PASCAL' Thanks for the suggestion. This is likely not something we will be able to get done for this paper, but we agree it would be interesting to see, and may look at this in the future.
Hq5MgBFOP62-X
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
[ "Michael Mathieu", "Yann LeCun", "Rob Fergus", "David Eigen", "Pierre Sermanet", "Xiang Zhang" ]
We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learnt simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013), and produced near state-of-the-art results for the detection and classification tasks. Finally, we release a feature extractor from our best model called OverFeat.
[ "localization", "detection", "overfeat", "integrated recognition", "convolutional networks", "integrated framework", "convolutional networks overfeat", "classification", "multiscale", "window" ]
https://openreview.net/pdf?id=Hq5MgBFOP62-X
https://openreview.net/forum?id=Hq5MgBFOP62-X
yGNSGHgls9Irb
review
1,391,644,680,000
Hq5MgBFOP62-X
[ "everyone" ]
[ "Liangliang Cao" ]
ICLR.cc/2014/conference
2014
review: It is very nice work and I enjoyed reading the paper. Now I believe the era of the 'deformable part model' (by Felzenszwalb, McAllester, Ramanan et al.) in CV detection will find its successor soon. We will witness another revolution in the field of object detection after talking about DPM for 5 years. One comment on the paper: Is it possible to know the results of applying OverFeat to the PASCAL detection dataset? I am also interested in the comparison with NEC's Regionlets (No. 2 place in ImageNet 2013 detection) on PASCAL. But even without results on PASCAL, this paper still deserves acceptance from any conference.
Hq5MgBFOP62-X
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
[ "Michael Mathieu", "Yann LeCun", "Rob Fergus", "David Eigen", "Pierre Sermanet", "Xiang Zhang" ]
We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learnt simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013), and produced near state-of-the-art results for the detection and classification tasks. Finally, we release a feature extractor from our best model called OverFeat.
[ "localization", "detection", "overfeat", "integrated recognition", "convolutional networks", "integrated framework", "convolutional networks overfeat", "classification", "multiscale", "window" ]
https://openreview.net/pdf?id=Hq5MgBFOP62-X
https://openreview.net/forum?id=Hq5MgBFOP62-X
yuF4yCcCBOna3
review
1,392,859,140,000
Hq5MgBFOP62-X
[ "everyone" ]
[ "anonymous reviewer 4a93" ]
ICLR.cc/2014/conference
2014
title: review of OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks review: This paper demonstrates a convolutional architecture for simultaneously detecting and localizing ImageNet objects. It is the winner of the localization task of the ILSVRC2013 competition, and that by itself makes it interesting. They implement a combined architecture where the first 5 layers compute features that are shared between the classification and localization tasks. A lot of detail goes into constructing the architecture in such a way as to make network windows aligned with the object. They use 6 scales increased in steps between 1.08 and 1.27 (why this arbitrary choice of steps?) along with 1-pixel offsets and skip feature connections at the top to prevent the stride from being too large. This is a solid paper which summarizes a significant body of work in neural network design, which makes its relevance to ICLR high. Some suggestions: -- I wish the authors gave more detail on the computational efficiency/accuracy compromise, since that needs to be considered when running in an industrial setting. For instance, coarse vs. fine stride seems to provide a 1% absolute improvement, while requiring 9x more computation at the lowest level. How much does that affect total computation? This could be done by adding an extra column 'FLOPS' to Table 5.
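On the suggested FLOPS column, per-layer costs are usually estimated with the standard convolution count (a generic sketch; the layer shape in the example call is a placeholder, not the actual OverFeat configuration):

```python
# Rough FLOP estimate for one conv layer: output positions times the cost of
# applying one filter, counting a multiply and an add as two FLOPs.
# The shape arguments used in the example call are placeholders.
def conv_flops(c_in, c_out, k, h_out, w_out):
    macs = c_in * k * k * c_out * h_out * w_out
    return 2 * macs

print(f"{conv_flops(96, 256, 5, 24, 24):.3e} FLOPs for a placeholder 5x5 conv layer")
```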
PtRd6ZOVAm7Lv
Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors
[ "Wiktor Mlynarski" ]
Complex-valued sparse coding is a data representation which employs a dictionary of two-dimensional subspaces, while imposing a sparse, factorial prior on complex amplitudes. When trained on a dataset of natural image patches, it learns phase invariant features which closely resemble receptive fields of complex cells in the visual cortex. Features trained on natural sounds however, rarely reveal phase invariance and capture other aspects of the data. This observation is a starting point of the present work. As its first contribution, it provides an analysis of natural sound statistics by means of learning sparse, complex representations of short speech intervals. Secondly, it proposes priors over the basis function set, which bias them towards phase-invariant solutions. In this way, a dictionary of complex basis functions can be learned from the data statistics, while preserving the phase invariance property. Finally, representations trained on speech sounds with and without priors are compared. Prior-based basis functions reveal performance comparable to unconstrained sparse coding, while explicitly representing phase as a temporal shift. Such representations can find applications in many perceptual and machine learning tasks.
[ "representations", "sparse", "natural sounds", "phase", "amplitude continuity priors", "dictionary", "priors", "sparse coding", "data representation", "subspaces" ]
https://openreview.net/pdf?id=PtRd6ZOVAm7Lv
https://openreview.net/forum?id=PtRd6ZOVAm7Lv
qq1NUprOkGxvM
comment
1,392,737,640,000
3aJi3mtYiya_u
[ "everyone" ]
[ "Wiktor Młynarski" ]
ICLR.cc/2014/conference
2014
reply: Thank you for your review and comments. If one thinks about the analytic form of complex-valued basis functions (such as a complex Gabor, for instance), then the real and imaginary vectors indeed form Hilbert pairs. However, the sparse complex-valued coding model does not make this assumption in any way. It attempts to represent the data as a linear combination of pairs of vectors which span 2-dimensional subspaces, while making the amplitudes sparse and independent. No prior assumptions on the form of the vectors are made. In the case of natural images, the phase invariance emerges 'for free', as a reflection of the data statistics. In other signal domains this is not necessarily the case, as the present work shows. The optimization algorithm seems to be working correctly. Polar and Cartesian coordinates are, after all, equivalent data descriptions. Additionally, these results are not the first ones which observe that complex-valued basis functions learned on natural sounds are not phase invariant (please see the response to the first review). As a control experiment I have learned a complex dictionary without priors from natural images (new figure 5 C). The results do not differ qualitatively from those obtained previously, which suggests that the algorithm works properly. The phase slowness prior is a non-dominating penalty term, scaled by the gamma parameter, which is smaller than 1. If higher frequencies are present in the data, they will be captured, and the prior will only bias basis functions towards ones with smoothly changing phase (of constant frequency). As the results show, this is what happens. One can interpret this prior as a penalty on the variance of the temporal derivative [26]. In this interpretation it becomes clearer that all frequencies are allowed. I have added an explanatory sentence in the text. Regarding the time-frequency tiling: I am not fully sure what would 'make sense' - the obtained result is a representation learned from the data. The arches reflect temporal frequency variation of the basis functions (see figure 1 B, second row - phases are monotonic, but rather piecewise linear functions of time). A possible explanation is that the real and imaginary vectors tend to diverge from each other (as in the unconstrained model), but the prior forces them to stay close on the time-frequency plane. I have commented on that in the new version. As mentioned in the text, subspace-based models (such as complex sparse coding) learn invariances which cannot be captured by the linear models you mention. Having a phase-invariant representation allows one to separate amplitude and phase information, which cannot be done using a linear sparse coding algorithm (at least not easily). In tasks such as sound localization, separation of these parameters is crucial. As the introduction and conclusions sections discuss, learned dictionaries are adapted to the data and at the same time make explicit aspects which are not captured by simpler models. From my point of view, this is the most important gain.
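A rough numerical sketch of the kind of penalty described above, assuming the phase-slowness term is the variance of the discrete temporal derivative of the unwrapped phase and the amplitude term penalizes squared temporal differences of the envelope; the exact functional forms used in the paper may differ:

```python
# Illustrative smoothness penalties on a complex basis function b[t]:
# phase slowness as the variance of the unwrapped-phase derivative, and
# amplitude continuity as the mean squared temporal difference of |b|.
# These are assumed forms for illustration, not copied from the paper.
import numpy as np

def smoothness_penalties(b):
    phase = np.unwrap(np.angle(b))
    amp = np.abs(b)
    phase_slowness = np.var(np.diff(phase))      # near zero for constant frequency
    amp_continuity = np.mean(np.diff(amp) ** 2)  # penalizes jumpy envelopes
    return phase_slowness, amp_continuity

t = np.linspace(0, 1, 256)
gabor = np.exp(-((t - 0.5) ** 2) / 0.02) * np.exp(1j * 2 * np.pi * 40 * t)
print(smoothness_penalties(gabor))
```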
PtRd6ZOVAm7Lv
Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors
[ "Wiktor Mlynarski" ]
Complex-valued sparse coding is a data representation which employs a dictionary of two-dimensional subspaces, while imposing a sparse, factorial prior on complex amplitudes. When trained on a dataset of natural image patches, it learns phase invariant features which closely resemble receptive fields of complex cells in the visual cortex. Features trained on natural sounds however, rarely reveal phase invariance and capture other aspects of the data. This observation is a starting point of the present work. As its first contribution, it provides an analysis of natural sound statistics by means of learning sparse, complex representations of short speech intervals. Secondly, it proposes priors over the basis function set, which bias them towards phase-invariant solutions. In this way, a dictionary of complex basis functions can be learned from the data statistics, while preserving the phase invariance property. Finally, representations trained on speech sounds with and without priors are compared. Prior-based basis functions reveal performance comparable to unconstrained sparse coding, while explicitly representing phase as a temporal shift. Such representations can find applications in many perceptual and machine learning tasks.
[ "representations", "sparse", "natural sounds", "phase", "amplitude continuity priors", "dictionary", "priors", "sparse coding", "data representation", "subspaces" ]
https://openreview.net/pdf?id=PtRd6ZOVAm7Lv
https://openreview.net/forum?id=PtRd6ZOVAm7Lv
VzhSziEbS7fbY
comment
1,392,737,580,000
GdpzZnQU7qG5f
[ "everyone" ]
[ "Wiktor Młynarski" ]
ICLR.cc/2014/conference
2014
reply: Thank you for your comments and suggestions. Firstly, I agree that the paper does not introduce any fundamentally novel method. What may be considered technical novelties are: (i) the fact that the priors are placed on the basis functions, (ii) smoothness priors placed on both phases and amplitudes (Cadieu et al. penalized only amplitude dynamics, not phase), and (iii) an additional term in the phase penalty, which enforces its monotonicity. The purpose of the paper was not, however, to introduce a novel method - it was to learn representations of a certain class of signals (natural sounds) and to study the properties of the obtained features. Such representations may find applications in tasks which operate on sound data. That is why I do not think that the present paper should be directly compared with the hierarchical model introduced by Cadieu et al., especially in the context of method novelty. Cadieu and Olshausen constructed a hierarchical representation of natural videos with the purpose of extracting motion invariances. The present paper learns single-layer representations of natural sounds - this is a fundamental difference. It is an interesting question in which (statistical) sense sounds are different from images (please see my response to the previous review). After all, physically they are very different stimuli. Prompted by the results presented in [21, 22], I have introduced a brief analysis of harmonic relationships between real and imaginary vectors. A full answer to that problem requires extensive research. I have performed two kinds of quantitative analysis: denoising and comparison of coefficient entropies. The analysis was performed as in the cited literature [14, 28]. Due to the space constraints I have not presented more details. Personally, I find the presented results conclusive. The work by Karklin et al. you cite also analyzed the learned representation by performing a denoising task. I do not fully understand how I should compare the present study to their results, as you suggest. I understand that by the compression experiment you mean the comparison of coefficient entropies. Of course, a trivial solution would be a basis set which yields 0 entropies while not being able to reconstruct the data at all (an 'infinite' reconstruction error). As the denoising experiment shows, this is not the case for any of the learned bases. Entropy estimates therefore give an idea of the relative coding cost, and according to Shannon's source coding theorem, the model yielding the lowest coding cost is closest to the true data distribution. For a detailed discussion please refer to [14, 15]. The entropy values should be considered together with the denoising performance. Bounding the phase derivative from below is also a possible way to enforce the phase monotonicity. However, it would require the introduction of another parameter - the bound itself. The proposed prior does not require any additional parametrization. I have modified the description of equation 9 to address your suggestion. For convenience, gamma and beta lie in the [0, 1] interval. The gradient moduli (before multiplying by gamma, beta and the step size) can be much larger than 1. In such a case, the prior strength parameters would affect the gradient step only weakly. That is why the gradient terms are first normalized to have the same length and then multiplied by gamma and beta.
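A small sketch of the normalize-then-scale step described at the end of this reply (illustrative; the reconstruction and prior gradients below are placeholders for the actual gradient computations):

```python
# Combine gradient terms: normalize each term to the same length, then scale
# the prior terms by gamma and beta before taking the gradient step.
# grad_recon, grad_phase and grad_amp are placeholders for the real gradients.
import numpy as np

def combined_step(B, grad_recon, grad_phase, grad_amp, lr, gamma, beta):
    unit = lambda g: g / (np.linalg.norm(g) + 1e-12)
    step = unit(grad_recon) + gamma * unit(grad_phase) + beta * unit(grad_amp)
    return B - lr * step
```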
PtRd6ZOVAm7Lv
Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors
[ "Wiktor Mlynarski" ]
Complex-valued sparse coding is a data representation which employs a dictionary of two-dimensional subspaces, while imposing a sparse, factorial prior on complex amplitudes. When trained on a dataset of natural image patches, it learns phase invariant features which closely resemble receptive fields of complex cells in the visual cortex. Features trained on natural sounds however, rarely reveal phase invariance and capture other aspects of the data. This observation is a starting point of the present work. As its first contribution, it provides an analysis of natural sound statistics by means of learning sparse, complex representations of short speech intervals. Secondly, it proposes priors over the basis function set, which bias them towards phase-invariant solutions. In this way, a dictionary of complex basis functions can be learned from the data statistics, while preserving the phase invariance property. Finally, representations trained on speech sounds with and without priors are compared. Prior-based basis functions reveal performance comparable to unconstrained sparse coding, while explicitly representing phase as a temporal shift. Such representations can find applications in many perceptual and machine learning tasks.
[ "representations", "sparse", "natural sounds", "phase", "amplitude continuity priors", "dictionary", "priors", "sparse coding", "data representation", "subspaces" ]
https://openreview.net/pdf?id=PtRd6ZOVAm7Lv
https://openreview.net/forum?id=PtRd6ZOVAm7Lv
PgiHEe9RnQgSt
review
1,391,478,120,000
PtRd6ZOVAm7Lv
[ "everyone" ]
[ "anonymous reviewer 69a6" ]
ICLR.cc/2014/conference
2014
title: review of Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors review: The paper describes a sparse coding model with complex-valued basis functions. For training, it proposes to minimize reconstruction error plus penalty terms that encourage the amplitudes and phases of the basis functions to be smooth. At first sight the model seems reminiscent of Cadieu and Olshausen (2012) [4]. But in that work, it is the coefficients that are penalized to have smooth amplitudes over time, whereas here it is the basis functions themselves that are penalized to be smooth. The model is applied to time-domain speech signals (one-dimensional data). The paper compares the results of complex-valued sparse coding with smoothness penalties versus complex-valued sparse coding without. The comparison shows that with penalties, basis functions seem to be more localized and filters within a pair tend to have quadrature relations; without penalties they do not seem to. I find this somewhat surprising because I would have thought that minimizing reconstruction error (plus orthonormalizing filters within each pair, as suggested) would already achieve this, as it does in the case of images. The paper does suggest that sound data is fundamentally different from image data. I am curious what it is about sound data that causes it to require this extra machinery for learning complex basis functions. It would be very good to have actual results on images as a control. This would also help disentangle two topics that are hard to separate in the paper: 1) fundamental differences between sound data and image data, and 2) learning complex bases with and without smoothness penalties. I am wondering in what way the smoothness penalties are related to weight decay, or whether they may simply help find better local optima. It seems like this would be easy to check by initializing model A (no penalties) with model B (with penalties). The title says 'natural sounds' but as far as I can tell, all experiments were done on a speech dataset. I'm not sure I completely agree with the statement that speech is a good enough proxy for natural sounds in general. There are a lot of typos (e.g., 'Gramm-Schmidt', 'strucutre', 'analyzis').
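For readers unfamiliar with the setup, a rough sketch of the kind of objective this review describes is given below. It is a hedged illustration (names, shapes and the exact penalty forms are assumptions), not the paper's actual formulation: squared reconstruction error, a sparsity term on the complex amplitudes, and smoothness penalties on the amplitude and phase of each complex basis function.

```python
import numpy as np

def objective(x, A, s, lam=0.1, beta=0.1, gamma=0.1):
    """x: signal (T,); A: complex bases (T, K); s: complex coefficients (K,)."""
    recon_err = np.sum((x - np.real(A @ s)) ** 2)            # squared reconstruction error
    sparsity = lam * np.sum(np.abs(s))                       # sparse prior on amplitudes
    amp = np.abs(A)
    phase = np.unwrap(np.angle(A), axis=0)
    amp_smooth = beta * np.sum(np.diff(amp, axis=0) ** 2)    # slow amplitude envelopes
    phase_smooth = gamma * np.sum(np.diff(phase, n=2, axis=0) ** 2)  # smooth phase precession
    return recon_err + sparsity + amp_smooth + phase_smooth
```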
PtRd6ZOVAm7Lv
Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors
[ "Wiktor Mlynarski" ]
Complex-valued sparse coding is a data representation which employs a dictionary of two-dimensional subspaces, while imposing a sparse, factorial prior on complex amplitudes. When trained on a dataset of natural image patches, it learns phase invariant features which closely resemble receptive fields of complex cells in the visual cortex. Features trained on natural sounds however, rarely reveal phase invariance and capture other aspects of the data. This observation is a starting point of the present work. As its first contribution, it provides an analysis of natural sound statistics by means of learning sparse, complex representations of short speech intervals. Secondly, it proposes priors over the basis function set, which bias them towards phase-invariant solutions. In this way, a dictionary of complex basis functions can be learned from the data statistics, while preserving the phase invariance property. Finally, representations trained on speech sounds with and without priors are compared. Prior-based basis functions reveal performance comparable to unconstrained sparse coding, while explicitly representing phase as a temporal shift. Such representations can find applications in many perceptual and machine learning tasks.
[ "representations", "sparse", "natural sounds", "phase", "amplitude continuity priors", "dictionary", "priors", "sparse coding", "data representation", "subspaces" ]
https://openreview.net/pdf?id=PtRd6ZOVAm7Lv
https://openreview.net/forum?id=PtRd6ZOVAm7Lv
GdpzZnQU7qG5f
review
1,391,729,460,000
PtRd6ZOVAm7Lv
[ "everyone" ]
[ "anonymous reviewer 92c8" ]
ICLR.cc/2014/conference
2014
title: review of Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors review: A sparse coding model of natural sounds (speech) is proposed. The signal is represented by a complex sparse coding problem with smoothness priors on both amplitude and phase. Learning and inference proceed as in standard sparse coding. The method is analyzed in terms of the statistics of the complex filter pairs as well as denoising. The method is not very novel. Complex sparse coding was already introduced in the past, and the smoothness priors on amplitude and phase are a straightforward extension (or simplification) of the work by Cadieu et al. Pros: - interesting application - fairly clearly written paper Cons: - insights may be good but I probably did not fully understand them. Why are sounds inherently different from images? Is it an artifact of the experimental setup? Without the sparsity/smoothness constraints, the problem is clearly underdetermined and therefore filters do not necessarily converge to quadrature pairs. - what is the contribution of this work compared to Cadieu et al.? They had an extra layer, but the basic idea of smoothness of phase and amplitude is also present in that work. - empirical validation is not sufficient because: - more quantitative results would be beneficial to assess the benefits of this model. For instance, the authors may want to compare with and cite: Y. Karklin, C. Ekanadham, and E. P. Simoncelli, Hierarchical spike coding of sound, Adv in Neural Information Processing Systems (NIPS), 2012 - some parts need clarification - eq. 9: why is this a good choice? Wouldn't it be better to have it bounded below? - sec. 2.2: why rescale the gradients when there are beta and gamma? - in the compression experiment, shouldn't the reconstruction error be taken into account? Overall, this is interesting work. However, several clarifications are required in order to better assess novelty and to understand the method. Also, the empirical validation should be strengthened.
PtRd6ZOVAm7Lv
Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors
[ "Wiktor Mlynarski" ]
Complex-valued sparse coding is a data representation which employs a dictionary of two-dimensional subspaces, while imposing a sparse, factorial prior on complex amplitudes. When trained on a dataset of natural image patches, it learns phase invariant features which closely resemble receptive fields of complex cells in the visual cortex. Features trained on natural sounds however, rarely reveal phase invariance and capture other aspects of the data. This observation is a starting point of the present work. As its first contribution, it provides an analysis of natural sound statistics by means of learning sparse, complex representations of short speech intervals. Secondly, it proposes priors over the basis function set, which bias them towards phase-invariant solutions. In this way, a dictionary of complex basis functions can be learned from the data statistics, while preserving the phase invariance property. Finally, representations trained on speech sounds with and without priors are compared. Prior-based basis functions reveal performance comparable to unconstrained sparse coding, while explicitly representing phase as a temporal shift. Such representations can find applications in many perceptual and machine learning tasks.
[ "representations", "sparse", "natural sounds", "phase", "amplitude continuity priors", "dictionary", "priors", "sparse coding", "data representation", "subspaces" ]
https://openreview.net/pdf?id=PtRd6ZOVAm7Lv
https://openreview.net/forum?id=PtRd6ZOVAm7Lv
__7eb-mkwrzUv
comment
1,392,737,520,000
PgiHEe9RnQgSt
[ "everyone" ]
[ "Wiktor Młynarski" ]
ICLR.cc/2014/conference
2014
reply: Thank you for your review and interesting comments. In the new version of the paper I have introduced the suggested modifications and addressed the points you make. As you mention, the results of training complex-valued sparse codes on natural sounds can be unexpected if one keeps the intuition from natural image statistics. One of the main messages of the paper is that this intuition does not necessarily translate between signal domains. This perhaps should not be very surprising; after all, those signals arise from fundamentally different physical processes. This is not the first paper to observe that the same statistical models trained on natural sounds and images yield different results [22, 25] (please note that I use literature indexing according to the newest version of the paper). It has been suggested that statistical models (such as topographic ICA, which is closely related to complex-valued sparse coding) capture non-local cross-frequency correlations of natural sounds [21, 22]. Correlations of natural image patches are local, and that is why dictionaries trained on those two signals reveal very different structure. In the new version of the paper, I have included an analysis of harmonic relationships between the peaks of the basis function spectra. This may be an initial explanation of why the learned basis functions are not phase invariant and have different frequency peaks in their real and imaginary parts. It would be hard to perform a comparison of results obtained on images and sounds. The priors introduced here are defined over the one-dimensional temporal domain and are placed on basis functions. When learning natural image representations, basis functions capture spatial, not temporal, relationships. Additionally, they are two-dimensional, so the 'temporal slowness' penalty would have to be transformed into a 'spatial smoothness' penalty, and the relationship between the two is non-obvious. Differences between images and sounds are, however, not the fundamental focus of the current paper - it is the learning of sound representations. As a control experiment, I ran the unconstrained algorithm also on natural images. Exemplary resulting basis functions are now depicted in Figure 5C. Non-penalized basis functions most probably form a more efficient representation of the data. This can be inferred from their performance in the denoising task and from the coefficient entropies. I have performed the experiment you suggested (using the penalized basis as initial conditions for non-penalized learning) and included the results in the new version. After 30000 iterations of learning without smoothness priors, phase-invariant basis functions deviate from the quadrature-pair form (see the new Figure 5A and B). This suggests that quadrature solutions do not constitute better local optima than unconstrained ones. As an additional control experiment I have learned a complex dictionary without priors from natural images (new Figure 5C). The results do not qualitatively differ from those obtained previously. I also understand your concern regarding the use of speech as a proxy for general natural sounds. While speech does not include all possible acoustic structures present in the auditory environment, it contains both harmonic and non-harmonic features. Speech has been used before as a representative of natural sounds [1,5,13,19], and for those reasons I decided to use it here. Results obtained using other classes of natural sounds are qualitatively similar; they were not included for simplicity and because of space constraints. I have also corrected a number of typos in the new version.
PtRd6ZOVAm7Lv
Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors
[ "Wiktor Mlynarski" ]
Complex-valued sparse coding is a data representation which employs a dictionary of two-dimensional subspaces, while imposing a sparse, factorial prior on complex amplitudes. When trained on a dataset of natural image patches, it learns phase invariant features which closely resemble receptive fields of complex cells in the visual cortex. Features trained on natural sounds however, rarely reveal phase invariance and capture other aspects of the data. This observation is a starting point of the present work. As its first contribution, it provides an analysis of natural sound statistics by means of learning sparse, complex representations of short speech intervals. Secondly, it proposes priors over the basis function set, which bias them towards phase-invariant solutions. In this way, a dictionary of complex basis functions can be learned from the data statistics, while preserving the phase invariance property. Finally, representations trained on speech sounds with and without priors are compared. Prior-based basis functions reveal performance comparable to unconstrained sparse coding, while explicitly representing phase as a temporal shift. Such representations can find applications in many perceptual and machine learning tasks.
[ "representations", "sparse", "natural sounds", "phase", "amplitude continuity priors", "dictionary", "priors", "sparse coding", "data representation", "subspaces" ]
https://openreview.net/pdf?id=PtRd6ZOVAm7Lv
https://openreview.net/forum?id=PtRd6ZOVAm7Lv
3aJi3mtYiya_u
review
1,392,193,380,000
PtRd6ZOVAm7Lv
[ "everyone" ]
[ "anonymous reviewer 01ce" ]
ICLR.cc/2014/conference
2014
title: review of Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors review: This paper shows that imposing a prior over the basis functions in a complex representation of sound results in bases that are closer to Hilbert pairs, with smooth amplitude envelopes and linear phase precession. It is not clear why imposing the prior directly on the basis functions is necessary. If you think of the complex pair as a phase-shiftable basis function, then it would make sense for the real and imaginary parts to be related by a Hilbert transform. This makes me wonder whether the optimization was done correctly when inferring the sparse amplitudes - i.e., the phase must be allowed to steer to the optimal position, yielding a sparse representation. It appears the gradients were computed with respect to the real and imaginary parts of the coefficients, rather than the amplitude and phase, which may be why the phase is not being properly inferred. The slowness prior on the phase doesn't make sense - wouldn't this bias the bases toward low frequencies? Some comment seems warranted. The learned tiling in time-frequency doesn't make much sense either. What is causing the arching pattern? It's not clear. Most of all, it's not clear what we gain from this representation beyond previous attempts to learn a sparse representation of sound (Smith & Lewicki). It would have been nice to compare coding efficiency and so forth against a purely real (vs. complex) representation.
7Y52YHDS2X7ae
Zero-Shot Learning by Convex Combination of Semantic Embeddings
[ "Tomas Mikolov", "Andrea Frome", "Samy Bengio", "Jonathon Shlens", "Yoram Singer", "Greg S. Corrado", "Jeffrey Dean", "Mohammad Norouzi" ]
Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the semantic embedding space is trained jointly with the image transformation, while in other cases the semantic embedding space is established independently by a separate natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional n-way classification framing of image understanding, particularly in terms of the promise of zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. Here we propose a simple method for constructing an image embedding system from any existing n-way image classifier and any semantic word embedding model, which contains the n class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional learning. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task.
[ "learning", "convex combination", "semantic embedding space", "images", "cases", "image transformation", "image", "advantages", "simple", "semantic embeddings" ]
https://openreview.net/pdf?id=7Y52YHDS2X7ae
https://openreview.net/forum?id=7Y52YHDS2X7ae
LyRby_-q2onfK
review
1,392,558,540,000
7Y52YHDS2X7ae
[ "everyone" ]
[ "anonymous reviewer 8598" ]
ICLR.cc/2014/conference
2014
title: review of Zero-Shot Learning by Convex Combination of Semantic Embeddings review: This paper presents a simple but really neat idea: combining semantic word vectors trained on text with a softmax classifier's output. Instead of taking the softmax output as is, it uses its probabilities to weight the semantic vectors of all the classes, which allows the model to assign labels that were not present in the training data. The results are not always better than the group's previous work, but in many settings they are. Simple but overall a very cool idea.
7Y52YHDS2X7ae
Zero-Shot Learning by Convex Combination of Semantic Embeddings
[ "Tomas Mikolov", "Andrea Frome", "Samy Bengio", "Jonathon Shlens", "Yoram Singer", "Greg S. Corrado", "Jeffrey Dean", "Mohammad Norouzi" ]
Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the semantic embedding space is trained jointly with the image transformation, while in other cases the semantic embedding space is established independently by a separate natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional n-way classification framing of image understanding, particularly in terms of the promise of zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. Here we propose a simple method for constructing an image embedding system from any existing n-way image classifier and any semantic word embedding model, which contains the n class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional learning. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task.
[ "learning", "convex combination", "semantic embedding space", "images", "cases", "image transformation", "image", "advantages", "simple", "semantic embeddings" ]
https://openreview.net/pdf?id=7Y52YHDS2X7ae
https://openreview.net/forum?id=7Y52YHDS2X7ae
44qhlhKQh31nZ
review
1,391,687,340,000
7Y52YHDS2X7ae
[ "everyone" ]
[ "anonymous reviewer 936e" ]
ICLR.cc/2014/conference
2014
title: review of Zero-Shot Learning by Convex Combination of Semantic Embeddings review: This paper addresses the problem of zero-shot learning for image classification. As in their recent NIPS 2013 work, the authors rely on an embedding representation of the classes inferred from a language model. Their prediction scheme first predicts an embedding vector that linearly combines the vectors representing the n-best predictions among the training classes, and then looks for the nearest neighbors of the predicted vector among the test class embeddings. The paper reads well and appropriate references to related work are given. The proposed approach is very simple and yet improves over the DeViSE classifier. I have, however, a concern regarding the results, which indicates that either (i) the implementation differs from the paper description, or (ii) I missed something. It seems to me that for hit@1, ConSE(1) predicts the embedding of the best prediction of the 'Softmax baseline' over the 1,000 training classes, i.e. f(x) = s(y_0(x,t)), and outputs its nearest neighbor in the search space. When the search space includes the training classes, it should output y_0(x,t). This implies that in Table 4, the first column should contain hit@1 for ConSE(1) = 55.6. Similarly, the hit@1 result for ConSE(1) in the (+1K) setting should be 0. This is not the case; could you explain/correct? Apart from this technicality, this is a good paper. The approach is simple and improves the state of the art. It might be further improved by a deeper analysis of the errors, possibly grouping them by type of class, looking at the accuracy of the convnet on the classes related to the labels, or at the quality of the corresponding text embeddings.
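To make the prediction scheme discussed above concrete, here is a minimal sketch of the ConSE rule as the review describes it (an assumed implementation with hypothetical names, not the authors' code): combine the embeddings of the top-T training classes weighted by their softmax probabilities, then rank candidate label embeddings by cosine similarity.

```python
import numpy as np

def conse_predict(p, train_emb, cand_emb, T=10, k=5):
    """p: softmax probs (n,); train_emb: (n, d); cand_emb: (m, d), rows unit-norm."""
    top = np.argsort(p)[::-1][:T]
    w = p[top] / p[top].sum()              # convex combination weights
    f = w @ train_emb[top]                 # predicted semantic embedding
    f = f / np.linalg.norm(f)
    scores = cand_emb @ f                  # cosine similarity against unit-norm rows
    return np.argsort(scores)[::-1][:k]    # indices of the k nearest candidate labels
```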
7Y52YHDS2X7ae
Zero-Shot Learning by Convex Combination of Semantic Embeddings
[ "Tomas Mikolov", "Andrea Frome", "Samy Bengio", "Jonathon Shlens", "Yoram Singer", "Greg S. Corrado", "Jeffrey Dean", "Mohammad Norouzi" ]
Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the semantic embedding space is trained jointly with the image transformation, while in other cases the semantic embedding space is established independently by a separate natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional n-way classification framing of image understanding, particularly in terms of the promise of zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. Here we propose a simple method for constructing an image embedding system from any existing n-way image classifier and any semantic word embedding model, which contains the n class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional learning. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task.
[ "learning", "convex combination", "semantic embedding space", "images", "cases", "image transformation", "image", "advantages", "simple", "semantic embeddings" ]
https://openreview.net/pdf?id=7Y52YHDS2X7ae
https://openreview.net/forum?id=7Y52YHDS2X7ae
R2JU265o6CRLV
review
1,392,853,980,000
7Y52YHDS2X7ae
[ "everyone" ]
[ "Mohammad Norouzi" ]
ICLR.cc/2014/conference
2014
review: We thank the reviewers for their valuable feedback. We will prepare a new version of the paper shortly to address your comments. R1: ... concern regarding the results which either indicates (i) the implementation differs from the paper description, (ii) I missed something ... R2: ... why the performance of ConSE(1) in hits@1 is not 0 in the +1K setting ... Sumit Chopra: ... ConSE(1) is exactly the same as Softmax baseline ... What's going on? This is a good point related to a technical detail of our implementation. In the ImageNet experiments, following the experimental setup of the DeViSE model, there is not a one-to-one correspondence between the class labels and the word embedding vectors. Rather, because of the way the ImageNet synsets are defined, each class label is associated with several synonym terms, and hence several word vectors. When we perform the mapping from the softmax scores to the continuous embedding space, we average the word vectors associated with each class label, and then linearly combine the average vectors according to the softmax scores. However, when we rank the word vectors to find the k most likely class labels, we search over individual word vectors, without any averaging of the synonym words. Thus, ConSE(1) might produce an average embedding which is not the closest vector to any of the word vectors corresponding to the original class label, and this results in a slight difference in the hit@1 scores for ConSE(1) and the softmax baseline in the +1K setting. While other alternatives exist for this part of the algorithm, we intentionally kept the ranking procedure exactly the same as in the DeViSE model to allow a direct comparison with DeViSE. We will include this description in the new version of the paper. Sumit Chopra: The evaluation based on excluding and including the 1K classes ... a bit artificial. ... one should have the classes from training set as part of your evaluation classes. And that is the true performance of ConSE(*). The separation of the training and unseen classes allows for a more detailed analysis of the results, to obtain better insights into the behavior of the algorithm. Based on these results we concluded that our model overfits to the training class labels, and we need a better regularization mechanism to overcome this behavior. We believe that one can exploit better methods for distinguishing between the 1K training labels and the zero-shot labels, one of which was mentioned in the conclusion. Alternatively, one can collect training labels for a category called 'none-of-training', which includes instances from all categories except the 1K training classes (not reflecting their specific labels). Using examples from this additional category, one can re-train an additional node of the softmax layer to detect examples of the none-of-training category. This helps decide whether we should perform a zero-shot prediction or stick to the original softmax predictions. That said, the experimental results that exclude the 1K training classes report the accuracy that one can hope for in the best-case scenario, i.e., when the zero-shot labels can be perfectly distinguished from the 1K training labels. Moreover, these results suggest that our naive way of combining the 1K training labels with the zero-shot labels, i.e., treating them as if they are the same, hurts the performance, and it is important to implement a better method for detecting the zero-shot labels. R2: ... it is claimed several times that ConSE can be used with any semantic embeddings. Is this really true? ... Sumit Chopra: ... simple weighted averages of the embeddings of a collection of words is a by-product of the linear model used to train the word embeddings ... We do not argue that any word embedding model used within our framework is equally effective. However, our observation is that we did not fine-tune our algorithm for any specific word embedding representation. This suggests that our algorithm is relatively robust to the specific choice of the word embedding representation, and in fact, we obtained some promising results on the use of a very different word embedding model within exactly the same framework. That said, we agree that the topology of the manifold of the word embedding vectors matters, and some word embedding representations might be more suitable for our framework than others. We will reword the paper to clarify that we claim some degree of robustness, not invariance against the choice of the language model. R2: ... how important is it to normalize the T top probabilities of the combination? ... It is not important to normalize the top T probabilities. In our model, all of the word embedding vectors are unit-norm, and we use cosine similarity between ConSE's prediction and the unit-norm word vectors for k-nearest-neighbor ranking. Thus, even if ConSE's prediction is not normalized, the ranking of the word vectors does not change. We agree that, using the norm of ConSE's prediction, one can come up with better ways to address the separation of training and unseen classes, but we leave this as future work. R2: ... how significant are the results. ... perhaps ConSE outperforms DeVISE, but performance are very low ... Can we consider that such poor performance is still actually meaningful and useful in some way? Although the performance does not look great (5% on hits@10), it is in fact better than it appears. Humans are tolerant of minor mistakes between very similar concepts (e.g., multiple types of sea lion, or types of hamster), but our flat metric is not good at rating minor mistakes higher than major errors. If the model predicts 'Australian sea lion' but the correct label is 'Steller sea lion', then we get a score of zero according to the flat metric. Figure 1 shows some actual predictions of the model, which demonstrates that even when we fail to return the right answer (most of the time), we often return very reasonable labels. So, if the question is whether this is useful for applications, the answer is that we are clearly moving in that direction.
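A toy check of the normalization point above (illustrative only; random data, not from the paper): because the word vectors are unit-norm and ranking uses cosine similarity, rescaling the combined prediction by any positive constant leaves the ranking unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 50))
W /= np.linalg.norm(W, axis=1, keepdims=True)    # unit-norm word vectors
f = rng.normal(size=50)                           # unnormalized combined prediction

top_a = np.argsort(W @ f)[::-1][:5]
top_b = np.argsort(W @ (3.7 * f))[::-1][:5]       # arbitrary positive rescaling
assert np.array_equal(top_a, top_b)               # same nearest-neighbor ranking
```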
7Y52YHDS2X7ae
Zero-Shot Learning by Convex Combination of Semantic Embeddings
[ "Tomas Mikolov", "Andrea Frome", "Samy Bengio", "Jonathon Shlens", "Yoram Singer", "Greg S. Corrado", "Jeffrey Dean", "Mohammad Norouzi" ]
Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the semantic embedding space is trained jointly with the image transformation, while in other cases the semantic embedding space is established independently by a separate natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional n-way classification framing of image understanding, particularly in terms of the promise of zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. Here we propose a simple method for constructing an image embedding system from any existing n-way image classifier and any semantic word embedding model, which contains the n class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional learning. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task.
[ "learning", "convex combination", "semantic embedding space", "images", "cases", "image transformation", "image", "advantages", "simple", "semantic embeddings" ]
https://openreview.net/pdf?id=7Y52YHDS2X7ae
https://openreview.net/forum?id=7Y52YHDS2X7ae
FWp1FD6f1Qa45
review
1,391,734,380,000
7Y52YHDS2X7ae
[ "everyone" ]
[ "Sumit Chopra" ]
ICLR.cc/2014/conference
2014
review: A very interesting paper that proposes an extremely simple technique for mapping words and images to the same latent space to facilitate zero-shot learning. The paper shows how such a simple technique beats the recently proposed and more complicated DeViSE model. A few minor points though: 1. As discussed by the above reader, ConSE(1) is exactly the same as the Softmax baseline, especially when we include the 1K classes from the training set. The results table seems to suggest something else. What's going on? 2. The evaluation based on excluding and including the 1K classes from the training set seems a bit artificial. In a real deployment scenario one does not have access to this information, and hence by default one should have the classes from the training set as part of the evaluation classes. And that is the true performance of ConSE(*). 3. Lastly, I don't really buy the authors' argument that the proposed model is general and independent of how the word embeddings or the image features are generated. While I agree with the image part (that the model is independent of how the image features are generated), the same is not true for the word embeddings. My sense is that the ability to meaningfully take simple weighted averages of the embeddings of a collection of words is a by-product of the linear model used to train the word embeddings. If one were to use a non-linear model (an NN, for instance) so that the embeddings lie on a non-linear manifold, things might not work as well. Any thoughts? Otherwise, a very nice paper. Well written too.
7Y52YHDS2X7ae
Zero-Shot Learning by Convex Combination of Semantic Embeddings
[ "Tomas Mikolov", "Andrea Frome", "Samy Bengio", "Jonathon Shlens", "Yoram Singer", "Greg S. Corrado", "Jeffrey Dean", "Mohammad Norouzi" ]
Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the semantic embedding space is trained jointly with the image transformation, while in other cases the semantic embedding space is established independently by a separate natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional n-way classification framing of image understanding, particularly in terms of the promise of zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. Here we propose a simple method for constructing an image embedding system from any existing n-way image classifier and any semantic word embedding model, which contains the n class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional learning. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task.
[ "learning", "convex combination", "semantic embedding space", "images", "cases", "image transformation", "image", "advantages", "simple", "semantic embeddings" ]
https://openreview.net/pdf?id=7Y52YHDS2X7ae
https://openreview.net/forum?id=7Y52YHDS2X7ae
KkSQkxO6x4jcH
review
1,392,310,800,000
7Y52YHDS2X7ae
[ "everyone" ]
[ "anonymous reviewer 06d4" ]
ICLR.cc/2014/conference
2014
title: review of Zero-Shot Learning by Convex Combination of Semantic Embeddings review: This paper proposes a method for performing zero-shot learning in an image labeling system. The proposed method is very simple and yet general and efficient: it consistently outperforms the DeViSE system presented recently on the ImageNet benchmark. The paper is nicely written, and ConSE is actually so simple that the paper is not too complicated to understand anyway. I have no problem accepting a paper even if the method is simple, provided it proves to be efficient. ConSE appears to be, but some questions/comments remain. The method is simple, so I would have expected more studies/experiments/discussions to explain its good performance. A simple intuition is given in Section 4.1 with the (funny) 'liger' example, but this could perhaps be detailed more. For instance: - it is claimed several times that ConSE can be used with any semantic embeddings. Is this really true? According to the 'liger' example, ConSE works because s(liger) ~ 0.5*s(tiger) + 0.5*s(lion). This is true for the skip-gram embeddings, since it has been shown that such linear relationships (and translations) exist among those embeddings. I'm not sure that one can claim that all word embeddings work the same way and have such linear relationships. Without such a property of the embedding space, would ConSE still perform well? - how important is it to normalize the T top probabilities of the combination? Doing so, they are implicitly calibrated on the train labels, whereas one would rather calibrate them on train + test labels. In the conclusion, there is an interesting comment regarding the norm of the convex embedding combination, especially when probabilities are not normalized, indicating that it gives a measure of the confidence of the prediction. I feel like the main point of the paper might be there, but the paper does not exploit it well. Basically, one of the most difficult problems in zero-shot learning is to detect whether to choose a label among training labels or test labels (before even trying to choose the right one). That's why, for me, the most interesting (and realistic) experiments of the paper are when train labels are also added to the candidate label sets (+1K setting). These show that the bias towards training labels is big, especially at top-1 (this is not surprising). The intuition about the norm of the convex combination and its connection to confidence seems promising for softening this bias, but unfortunately this is only sketched. I wonder why the performance of ConSE(1) in hits@1 is not 0 in the +1K setting. If I understand correctly, the output of ConSE(1) is simply the embedding of the top-predicted train label, and hence the closest label according to the cosine distance should be this very train label. Since no test example is labeled with a train label, it should always be a mistake. On a more general point, it could also be discussed how significant the results are. Perhaps ConSE outperforms DeViSE, but performance is very low (5% hits@10 in the most general setting). Can we consider that such poor performance is still actually meaningful and useful in some way? Minor: - Tables 1 & 4 are in %, whereas Tables 2 & 3 are not. This should be consistent.
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
QQAGQEf1wi5bW
comment
1,391,838,540,000
XjeCUHWZr1Um5
[ "everyone" ]
[ "Sam Bowman" ]
ICLR.cc/2014/conference
2014
reply: Thanks for your comment. I absolutely intend for this paper to describe a reproducible result, and I would hope that the citations and provided code would clarify any details that were omitted in the text. I would appreciate it if you could let me know what details you found unclear. If your concerns are centered on the random noise in the results, and the issues related to early stopping, I do see that as a real issue. I am working to find a way to either encourage the model to converge more reliably, or else to at least report statistics over its behavior across runs. The paper does contain some negative results as you suggest—the model was only successful at some parts of the task—and I would like to explore those results as fully as possible. Is there anything in particular about the reporting of these results that you think could be clearer or more thorough?
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
h2PHdAgaU72jV
review
1,392,518,100,000
4diyarNwq84_Q
[ "everyone" ]
[ "Sam Bowman" ]
ICLR.cc/2014/conference
2014
review: While the arXiv paper is being held in the queue before publication, you can view the revised paper using this temporary link: http://www.stanford.edu/~sbowman/arxiv_submission.pdf
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
O1lo-X7Di_-hu
review
1,391,722,020,000
4diyarNwq84_Q
[ "everyone" ]
[ "anonymous reviewer 7747" ]
ICLR.cc/2014/conference
2014
title: review of Can recursive neural tensor networks learn logical reasoning? review: The paper tries to determine whether representations constructed with recursive embeddings can be used to support simple reasoning operations. The essential idea is to train an additional comparison layer that takes the representations of two sentences and produces an output describing the relation between the two sentences (entailment, equivalence, etc.). This approach is in fact closely related to the 'restricted entailment operator' suggested near the end of Bottou's white paper http://arxiv.org/pdf/1312.6192v3.pdf. Experiments are carried out using a vastly simplified language and Socher's supervised training technique. According to the author, the results are a mixed bag. On the one hand, the system can learn to reason on sentences whose structure matches that of the training sentences. On the other hand, performance quickly degrades on sentences whose structure did not appear in the training set. My reading of these results is much more pessimistic. I find it completely unsurprising that the system can learn to 'reason' on sentences with known structure. On the other hand, the inability of the system to reason on sentences with new structure indicates that the recursive embedding network did not do what was expected. The key to the recursive structure is to share weights across all applications of the grouping layer. This weight sharing was evidently insufficient to induce a bias that helps the system generalize to other structures. Whether this is a simple optimization issue or a more fundamental problem remains to be determined. My understanding is that the author always trains the system using the correct parsing structure, in a manner similar to Socher's initial work (please confirm). It would be very interesting to investigate whether one obtains substantially different results if one trains the system using incorrect parsing structures (either random structures or a left-to-right structure). Worse results would indicate that the structure of the recursive embeddings matters. Similar results would confirm the findings reported in http://arxiv.org/abs/1301.2811 and strongly suggest that recursive embeddings do not live up to expectations. This would of course be a negative result, but negative results are sometimes more informative than mixed bags (and in my opinion very much worth publishing).
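For concreteness, a hedged sketch of the 'comparison layer' idea mentioned above (shapes and names are hypothetical; this is not code from the paper under review): two composed sentence vectors are concatenated and passed through a neural-tensor-style layer whose output would feed a softmax over entailment relations.

```python
import numpy as np

def compare(a, b, V, M, bias):
    """a, b: sentence vectors (d,); V: (h, 2d, 2d); M: (h, 2d); bias: (h,)."""
    ab = np.concatenate([a, b])
    bilinear = np.einsum('i,hij,j->h', ab, V, ab)   # tensor (bilinear) term
    return np.tanh(bilinear + M @ ab + bias)        # features for a relation classifier
```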
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
BBE-xcPXoDBlw
review
1,391,555,160,000
4diyarNwq84_Q
[ "everyone" ]
[ "Sam Bowman" ]
ICLR.cc/2014/conference
2014
review: Thanks for your comments. I am updating the paper now with some clarifications and typo repairs, and I'm in the process of setting up a few follow-up experiments. Section 2: Thanks for pointing out the unclear bits here, especially that 'some dogs bark' example, which I seem to have broken during some hasty final revisions. I'll post an updated version with this fixed shortly. To clarify some details here: - 'Some' is, in fact, upward monotone in both arguments. - D is the domain containing all possible objects of the type being compared. - The '^' symbol in column three was typeset incorrectly, and is meant to represent logical AND. Section 3: I did not try initializing the vectors with those used in any previous experiments (e.g. Socher's or Mikolov's). While that kind of initialization sounds promising in general, I think that the unambiguous fragment of English that I use is so different from ordinary English usage that it is unlikely that outside information from these sources would be helpful to the task. The pretraining settings that I experimented with involved first training the model on some or all of the pairs of individual words from Appendix B, annotated with the relations between them. I'm certainly sensitive to the concern that the model might be overparameterized, and I will see about getting a training curve together in the next week or two. Section 4: The numbers referenced in those subsections do refer to Table 2, but the '(as in 2)' reference is a mistake. Example 2 in Table 2 corresponds to 'Monotonicity with quantifier substitution.' Thanks for catching that; expect a fix soon. Section 5: I agree that the all-split result is unsurprising, though I think it is useful as a sanity check to ensure that the model structure is usable for the task, and that the model isn't dramatically *under*parameterized. The three target datasets were chosen by hand: the choice of a fairly small number was necessary due to resource constraints, but the choices were arbitrary. I chose to focus on quantifier substitution datasets so as to render the three settings (the last three columns of Table 4) most easily comparable across the three target datasets. The reference to 'potentially other similar datasets' could have been better put, but it refers to the fact that in each of the three experimental settings reported in Table 4, different criteria are used to decide which datasets are held out in training, and all of these criteria involve how similar a given dataset is to the target dataset. You raise an important point about reproducibility. I would appreciate any suggestions about better ways to report results given the fluctuations during training. I may try to report statistics over the model's performance over a range of iterations, or statistics over the model's performance at a given iteration over several random re-initializations, and I am experimenting further with different ways of encouraging the model to converge. I would like to suggest that even the current results show a broader reproducible pattern: high performance on SET-OUT and SUBCL.-OUT is possible but subject to instabilities in the training algorithm, whereas high performance on PAIR-OUT cannot be demonstrated with this model as configured.
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
-cDBcO5EQw7xT
review
1,392,446,640,000
4diyarNwq84_Q
[ "everyone" ]
[ "Sam Bowman" ]
ICLR.cc/2014/conference
2014
review: I have a fairly major unexpected update to report. In attempting to respond to 8e44's concerns about using early stopping to get around convergence issues, I discovered a mistake in my implementation of AdaGrad. In short, fixing that mistake led to much more consistent convergence and better results, including strong performance on the PAIR-OUT test settings, suggesting that the model is much more capable than I had previously suggested of generalizing to unseen reasoning patterns. I realize that this is somewhat late in the review process to make substantial changes, but a new version of the paper is pending on arXiv and should be live by Monday. The results table and the discussion section have been replaced. I will also be updating the source code linked to above and in the paper before Monday to reflect this bug fix, along with a couple of small improvements to the way that cost and test error are reported during training. If you are interested in what went wrong: I accidentally set up SGD with AdaGrad in such a way that it reset the sum of squared gradients after every full pass of the data, equivalent to every few hundred gradient updates. Since this sum is used to limit the size of the gradient updates, resetting it this often prevented the model from reliably converging, even though it did not hurt gradient accuracy and did not prevent the model from converging occasionally.
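To illustrate the bug being described (a generic sketch, not the author's actual code): AdaGrad divides each update by the square root of a running sum of squared gradients, so that sum has to persist across epochs; re-initializing it every pass through the data keeps restarting the effective learning-rate schedule.

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.05, eps=1e-8):
    accum += grad ** 2                           # must keep growing across ALL updates
    theta -= lr * grad / (np.sqrt(accum) + eps)
    return theta, accum

# Buggy variant: doing `accum = np.zeros_like(theta)` at the start of every epoch
# (every few hundred updates here) throws away the accumulated history, so the
# step sizes jump back up and the model never settles reliably.
```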
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
aGu4aBlHdVN-C
comment
1,391,838,420,000
O1lo-X7Di_-hu
[ "everyone" ]
[ "Sam Bowman" ]
ICLR.cc/2014/conference
2014
reply: I agree that the results so far are not as strongly positive or negative as would be ideal, and I hope to be able to report somewhat more conclusive results about the behavior of the optimization techniques that I use (see comments above), but I think that the results presented so far are informative about the ability of models like this to do RTE more generally. The SET-OUT results show that the model is able to learn to identify the difference between two unseen sentences and, if that difference has been seen before, return a consistent label that corresponds to that difference. Perhaps more important is the fact that the model shows 100% accuracy on unseen examples like 'some dog bark [entails] some animal bark' (seen in ALL-SPLIT, for example) where lexical items differ between sides. Here the model is both learning to do this reasoning about differences and learning to use information about entailment between lexical items (animal > dog) in novel environments. As you suggest, I do use correct hand-assigned parses in both training and testing. I agree that it would be interesting to see what effect using randomly assigned parses instead would have, and I may be able to get those numbers at least by the conference date. It does seem worth mentioning, though, that the sentences are mostly three or four words long, so I would expect the parse structure to be far less important in these experiments than in ones with longer sentences (and thus more deeply nested tree structures), since every word is already quite close to the top of the composition tree regardless of the structure here. Since you brought up the (important) Scheible and Schuetze paper, I should mention that the prior motivation for using high-quality parse structures for this task is considerably stronger than the motivation for using them in binary sentiment tasks like the one reported on in that paper. In binary sentiment labeling, the label is largely (but not entirely) dependent on the presence or absence of strongly sentiment-expressing words, and decent performance (~80%) can be achieved using simple regression models with bigram or even unigram features. I don't have exactly comparable numbers for the dataset that I present in this paper, but RTE/NLI does not lend itself to comparably strong baselines with simple features. My task is deliberately easier than the RTE challenge datasets, but the average tuned model submitted to the first RTE workshop in 2005 got less than 55% accuracy on *binary* entailment classification. There is some related discussion in the review thread for the Scheible and Schuetze paper: http://openreview.net/document/e2ffbffb-ba93-43d0-9102-f3e756e3f63c Thanks for the Bottou comparison, by the way. This does seem to me to be an implementation of a slightly generalized version of his proposed restricted entailment operator, and I had not previously noticed that parallel.
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
XjeCUHWZr1Um5
review
1,391,787,840,000
4diyarNwq84_Q
[ "everyone" ]
[ "anonymous reviewer e76d" ]
ICLR.cc/2014/conference
2014
title: review of Can recursive neural tensor networks learn logical reasoning? review: This paper investigates the use of a recursive model for logical reasoning in short sentences. An important part of the paper is dedicated to the description of the task and the way the author simplifies the task of MacCartney to keep only entailment relations that are non-ambiguous. For the model, a simple recursive tensor network (from Socher's work) is used. While the more general task defined by MacCartney is well described, the reduced task addressed in this paper is less clear. The motivation is sound: it is a great idea to reduce the task to non-ambiguous cases, for which we could better interpret the experimental results. However, in the end, it is difficult to draw relevant conclusions from the experiments, and many technical details needed to make the results reproducible are missing. Maybe the author tried to lessen the negative aspects of the results, but it would be really more interesting to clearly describe the negative results. My opinion is that this paper is not well suited for the conference track, and maybe it should be submitted to the workshop track.
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
30Yg49nHOtQfr
review
1,391,234,280,000
4diyarNwq84_Q
[ "everyone" ]
[ "anonymous reviewer 8e44" ]
ICLR.cc/2014/conference
2014
title: review of Can recursive neural tensor networks learn logical reasoning? review: In this work the author investigates how effective the vector representation of words is for the task of logical inference. A set of seven entailment relations from MaCartney are used, and a data set of 12,000 logical statements (of pairs of sentences) are generated from these relations and from 41 predicate tokens. The task is multiclass classification, where given two sentences, the system must output the correct relation between them. A simple recursive tensor network is used. The study is limited to considering quantifiers like 'some' and 'all', which have clear monotonicity properties. Results show that the model can learn but that generalization is limited. Unfortunately because the training process converges to an inferior model, results are given after very early stopping, in which subsequent iterations can give widely different results. This is an exciting direction for research and it's great to see it being tackled. Unfortunately, however, the paper is unclear in crucial places, and the training methodology is questionable. I would encourage the author to clarify the paper (especially for the likely non-linguist audience) and strengthen the training algorithm (in order to demonstrate usefully reproducible results). Even if the results remain negative, this would then still be of significant value to the community. Specific comments: Section 2 --------- Your example of 'some dogs bark' seems confused. For both arguments of 'some' (not just the first), the inference works if the argument is replaced by something more general. You write that 'some' is downward monotonic in its second argument, but your examples show upward monotonicity in both. (Specifically, you write 'The quantifier 'some' is upward monotone in its first argument because it permits substitution of more general terms, and downward monotone in its second argument because it permits the substitution of more specific terms.' - but in the same paragraph you also write that 'some' is upward monotonic in both arguments.) Readers who are asked to expend mental energy on disentangling unnecessary confusions like this can quickly lose motivation. Table 1: this table is central to your work, but it needs more explanation. What is calligraphic D? You seem to be using the 'hat' operator in two different senses (column 2 versus column 3). What does 'else' mean in column 3 - how exactly is independence defined? The whole paper rests on MacCartney's framework, so I think it's necessary to explain more about this scheme here. In particular, I do not understand your 'no animals bark | some dogs bark' example (and I fear most others won't, too). Typo: much hold --> must hold Section 3 --------- 'Several pretraining regimes meant to initialize the word vectors to reflect the relations between them were tried, but none offered a measurable improvement to the learned model' - please say which were tried. In particular, did you try fixed, off-the-shelf vectors, for example from Socher's work, or trained using a large unlabeled dataset using Mikolov's Word2Vec? I counted 4624 parameters in the composition parameters, 13,005 for the comparison layer (with dimension 45), and 800 for the 50 (16 dimensional) word vectors, giving a total of 18,429 parameters. Your training set size is quite a bit smaller than this, and regularization can only help so much. 
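As an aside on the parameter tally above: the arithmetic checks out under the assumption of a standard recursive neural tensor network with 16-dimensional word vectors, a tensor-plus-matrix-plus-bias composition layer, a 45-unit tensor comparison layer over two 16-dimensional sentence vectors, and 50 word vectors. The dimensions come from the review; the layer shapes are an assumption, so the sketch below only illustrates how such a count is obtained, not the author's exact implementation.

```python
# Hypothetical reconstruction of the reviewer's parameter count for an RNTN;
# only the totals come from the review, the layer shapes are assumed.
d = 16        # word/phrase vector dimension (from the review)
c = 45        # comparison layer dimension (from the review)
vocab = 50    # number of word vectors (from the review)

# Tensor composition layer: d slices of (d x d) bilinear forms over the two
# children, plus a (d x 2d) matrix over their concatenation, plus a bias.
composition = d * d * d + d * (2 * d) + d          # 4096 + 512 + 16 = 4624

# Tensor comparison layer over two d-dim sentence vectors with c output units.
comparison = c * d * d + c * (2 * d) + c           # 11520 + 1440 + 45 = 13005

embeddings = vocab * d                             # 800

total = composition + comparison + embeddings
print(composition, comparison, embeddings, total)  # 4624 13005 800 18429
```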
I wonder if the limited results and difficulty of training (converging, but to poor solutions) just indicates the need for more training data. You can test this by generating a training curve - that is, plot performance on a validation set for various training set sizes (when trained to convergence, and using whatever regularization you settle on). If the curve is still steep when using all the training data, then more data will help. If it's flat, then the task may not be learnable with the model used. Section 4 --------- Your description of the 'basic monotonicity' datasets was unclear to me. Does 1, 2 refer to Table 2? If so it's not clear how 'In some of the datasets (as in 1), this alternation is in the first argument, in some the second argument (as in 2), and in some both.' Section 5 --------- It is not very surprising that the model can learn the all-split data, since the training data is so tightly constrained. Set-out is also very close to the training data. I found that exactly how the data splits were done, and what was tested on, was unclear. typo: 'the it is' 'I choose one of three target datasets' - how did you choose the three? (From the 200?) 'potentially other similar datasets...' is imprecise. How did you choose? 'The model did not converge well for any of these experiments: convergence can take hundreds or thousands of passes through the data, and the performance of the model at convergence on test data was generally worse than its performance during the first hundred or so iterations. To sidestep this problem somewhat, I report results here for the models learned after 64 passes through the data.' I'm afraid that this greatly reduces the value of these results (they are close to being irreproducible). The training algorithm should at least converge, or be more reproducible than this. (If the test error is still fluctuating wildly on the stopping iteration, other small changes, e.g. in the data, may give completely different results). Section 6 --------- 'Pessimistically... Optimistically... ' this is speculation (neither is supported by the experiments) and so I don't think it adds much value.
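For the training-curve diagnostic suggested in this review, a minimal sketch of the procedure might look like the following; it uses a simple scikit-learn classifier on synthetic data as a stand-in for the actual recursive model, since the point is only the shape of the experiment (train on growing subsets, evaluate on a fixed validation set).

```python
# A sketch of the training-curve diagnostic: train on increasing fractions of
# the training data and track validation accuracy at each size.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train, X_val, y_val = X[:1500], y[:1500], X[1500:], y[1500:]

for frac in (0.1, 0.25, 0.5, 0.75, 1.0):
    n = int(frac * len(X_train))
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, model.score(X_val, y_val))
# If validation accuracy is still climbing at the largest n, more data should
# help; if the curve is flat, the model class itself is likely the bottleneck.
```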
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
al48JYvqPDJr_
review
1,387,867,380,000
4diyarNwq84_Q
[ "everyone" ]
[ "Sam Bowman" ]
ICLR.cc/2014/conference
2014
review: Source code and data are available here: http://goo.gl/PSyF5u. I'll be updating the paper shortly to add a link to the text.
pAi8PkmKuJPvU
Nonparametric Weight Initialization of Neural Networks via Integral Representation
[ "Sho Sonoda", "Noboru Murata" ]
A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with proposed initialization converges faster than uniformly random initialization. Also it is shown that the proposed method achieves enough accuracy by itself without backpropagation in some cases.
[ "hidden parameters", "neural networks", "integral representation", "neural network", "backpropagation", "nonparametric weight initialization", "new initialization", "nonparametric probability distribution", "proposal" ]
https://openreview.net/pdf?id=pAi8PkmKuJPvU
https://openreview.net/forum?id=pAi8PkmKuJPvU
fMpXedxjhIfFo
review
1,391,460,420,000
pAi8PkmKuJPvU
[ "everyone" ]
[ "anonymous reviewer 0a0f" ]
ICLR.cc/2014/conference
2014
title: review of Nonparametric Weight Initialization of Neural Networks via Integral Representation review: The paper proposes a new pre-training scheme for neural networks which are to be fine-tuned later with back propagation. This pre-training scheme is done in two steps: 1 - sampling the parameters of the first layer using Importance sampling or an accept-reject MCMC method (both methods are apparently confused by the authors) in a data dependent way. 2 - training the parameters of the output layer using linear regression. The experiments compare the test RMSE/error-rate obtained using traditional back propagation and that obtained using the proposed method, on three datasets: a 1D function, a Boolean function and the MNIST dataset. The proposed pre-training scheme is new but the scientific quality of the paper is questionable. First, the proposed method is given a misleading name since it proposes to do the initialization in a data dependent way (with a linear regression step). This may be understood as a 'pre-training scheme', not as an 'initialization'. Second, the paper is very misleading in its report of previous work, for instance stating that (Efficient Backprop, Le Cun 1998) proposes to initialize neural networks by sampling from a uniform distribution [-1/sqrt(fan-in);1/sqrt(fan-in)] when it suggests in fact to sample from a normal distribution of mean zero and standard deviation sigma=1/sqrt(fan-in). Additionally, the experiments are again very misleading. First, the main claim of the paper is that using the proposed pre-training scheme, BP will converge faster. However, the time to convergence is reported in terms of the number of BP iterations and does not take the pre-training time into account. This is especially worrisome since the pre-training scheme relies on MCMC sampling which is usually very computationally expensive compared to back propagation. Finally, the results reported on the MNIST dataset are inconsistent with previous work when they give a test error rate for back-propagation and 300 hidden units around 90% when it should be around 1.6% (cf. MNIST dataset website). pros: cons: - Misleading summary of previous work. - Misleading reference to an initialization strategy which is in fact a data-dependent pre-training step. - Experiments do not report the pre-training time and are therefore strongly biased in favor of the proposed method. - Results on MNIST are inconsistent with previous work.
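As a rough illustration of the two-step scheme summarized in this review, the sketch below samples hidden-layer parameters by acceptance-rejection from a placeholder density and then fits the output weights by ordinary least squares. The function `oracle_density` is a hypothetical stand-in for the paper's data-dependent distribution, so this is a schematic of the procedure only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle_density(a, b):
    # Placeholder unnormalized density over hidden parameters (a, b); the
    # paper derives a data-dependent one from its integral representation.
    return np.exp(-0.5 * (np.linalg.norm(a) ** 2 + b ** 2))

def sample_hidden_params(n_hidden, dim, proposal_scale=3.0, density_max=1.0):
    # Basic acceptance-rejection sampling with a uniform proposal.
    params = []
    while len(params) < n_hidden:
        a = rng.uniform(-proposal_scale, proposal_scale, size=dim)
        b = rng.uniform(-proposal_scale, proposal_scale)
        if rng.uniform(0.0, density_max) < oracle_density(a, b):
            params.append((a, b))
    return params

def pretrain(X, y, n_hidden=50):
    # Step 1: draw hidden parameters; Step 2: fit output weights by regression.
    params = sample_hidden_params(n_hidden, X.shape[1])
    H = np.column_stack([1.0 / (1.0 + np.exp(-(X @ a + b))) for a, b in params])
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return params, w
```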
pAi8PkmKuJPvU
Nonparametric Weight Initialization of Neural Networks via Integral Representation
[ "Sho Sonoda", "Noboru Murata" ]
A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with proposed initialization converges faster than uniformly random initialization. Also it is shown that the proposed method achieves enough accuracy by itself without backpropagation in some cases.
[ "hidden parameters", "neural networks", "integral representation", "neural network", "backpropagation", "nonparametric weight initialization", "new initialization", "nonparametric probability distribution", "proposal" ]
https://openreview.net/pdf?id=pAi8PkmKuJPvU
https://openreview.net/forum?id=pAi8PkmKuJPvU
W_KH_Oaqtx_iI
comment
1,392,718,380,000
lFiWFX2hzO3DM
[ "everyone" ]
[ "園田翔" ]
ICLR.cc/2014/conference
2014
reply: Dear Reviewer 2ac8 Thanks for your constructive comments. I am really grateful for your careful reading of our paper. We have rerun the MNIST experiment in a setting closer to LeCun et al. 1998, and improved both the error rates and the training speeds. The paper with these results will appear in a few days. In addition, we have supplemented a detailed explanation of the sampling procedures in section A. While LeCun et al. 1998 achieved a 4.7% error rate, our latest test error rates improved as follows: SR: 23.0% (right after initialization) -> 9.94% (after BP training) SBP: 90.0% (right after initialization) -> 8.30% (after BP training) BP: 90.0% (right after initialization) -> 8.77% (after BP training) And our further experiments showed that SR with 6,000 hidden units reached a 3.66% test error rate right after initialization. > It would be really helpful to have a notion of how expensive it is to compute the approximation of the parameter density and to sample from it. Judging from the formulas this does not seem cheap. As you expected, rigorous calculation of and sampling from the oracle distribution is difficult, especially in a high-dimensional input space. This sampling difficulty is discussed in the supplemental section A.1. In order to draw samples from a high-dimensional oracle distribution, therefore, we have developed and used an annealed sampling technique, which is described in the supplemental section A.2. Also, the empirical sampling time is listed in the rearranged MNIST experiment section. > The paper studies networks with sigmoid pairs. What can the authors say about sigmoid units? As our method is derived from an analysis of sigmoid pairs networks (SPNs), we have studied completely discrete sigmoid units networks (SUNs) less. A direct consequence is that an SPN with J sigmoid pairs might have a representation ability equivalent to an SUN with 2J sigmoid units. Our preliminary experiments empirically support this hypothesis; that is, an SR-initialized SPN with J sigmoid pairs sometimes scores an almost equivalent error rate to a BP-trained SUN with 2J sigmoid units. However, a precise comparison has not been conducted. Obviously an SPN is always an SUN; however, the converse, that a well-trained SUN forms an SPN, is doubtful. In relation to other integral representation studies, several authors (Carroll and Dickinson 1989; Barron 1993; Kurkova 2009) have published on the integral representation of SUNs, and their work may be helpful. In the paper we build on, Murata 1996, the SPN requirement comes from the integrability of the composing kernel, and the author suggests that the derivative of sigmoid units (which is bell-shaped and integrable) is also eligible, in which case an SUN is interpreted as approximating a derivative of the target function. > In Figure 1 left, the figure does not show that the support is non-convex, as claimed in the caption. > The axes labels in Figure 1 are too small. I am sorry for my lack of attention; I have replaced Figure 1 with the correct version. I am looking forward to your reply, thanks. Sonoda
pAi8PkmKuJPvU
Nonparametric Weight Initialization of Neural Networks via Integral Representation
[ "Sho Sonoda", "Noboru Murata" ]
A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with proposed initialization converges faster than uniformly random initialization. Also it is shown that the proposed method achieves enough accuracy by itself without backpropagation in some cases.
[ "hidden parameters", "neural networks", "integral representation", "neural network", "backpropagation", "nonparametric weight initialization", "new initialization", "nonparametric probability distribution", "proposal" ]
https://openreview.net/pdf?id=pAi8PkmKuJPvU
https://openreview.net/forum?id=pAi8PkmKuJPvU
VVPzN959oZ6un
comment
1,392,720,120,000
07ToXaIwBiYKp
[ "everyone" ]
[ "園田翔" ]
ICLR.cc/2014/conference
2014
reply: Dear Reviewer a97a Thanks for your constructive comments. I am really grateful for your careful reading of our paper. We have rearranged the MNIST experiment and replaced the results with new ones. In addition, we have supplemented a detailed explanation of the sampling procedures in section A. The revised version of the paper will appear online in a few days. > It would be useful to have more information on the order of magnitude by which the method is slower/faster compared to training a classically initialized neural network, and how the method scales with the number of data points and the dimensions of the input space. Theoretical considerations on the sampling cost are discussed in the supplemental section A, and the empirical measurement of computation time is listed in the rearranged MNIST experiment section. We have introduced a drastically annealed sampling technique in section A.2, which is as fast as sampling from a normal distribution. Here is a snippet of the time comparison: - sampling time of SR: 0.0115 [sec.] - regression time of SR: 2.60 [sec.] - 45,000 iterations of BP training of SR: 2000 [sec.] (0.05 [sec.] per one itr.) Theoretically, the annealed sampling scales linearly with both the number of required hidden parameters and the dimensionality of the input space. In particular, its cost is constant in the number of training examples because sampling is conducted with one particular example. > I am concerned about the validity of the MNIST experiment where a baseline error of >0.8 (80%?) is obtained with 1000 samples while other papers typically report 10% error for a similar amount of data. We have continued further investigations on the MNIST dataset in a setting closer to LeCun et al. 1998, and improved both the error rates and the training speeds. While LeCun et al. 1998 achieved a 4.7% error rate in a setting similar to ours, our latest test error rates improved as follows: SR: 23.0% (right after initialization) -> 9.94% (after BP training) SBP: 90.0% (right after initialization) -> 8.30% (after BP training) BP: 90.0% (right after initialization) -> 8.77% (after BP training) And our further experiments showed that SR with 6,000 hidden units reached a 3.66% test error rate right after initialization. I am looking forward to your reply, thanks. Sonoda
pAi8PkmKuJPvU
Nonparametric Weight Initialization of Neural Networks via Integral Representation
[ "Sho Sonoda", "Noboru Murata" ]
A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with proposed initialization converges faster than uniformly random initialization. Also it is shown that the proposed method achieves enough accuracy by itself without backpropagation in some cases.
[ "hidden parameters", "neural networks", "integral representation", "neural network", "backpropagation", "nonparametric weight initialization", "new initialization", "nonparametric probability distribution", "proposal" ]
https://openreview.net/pdf?id=pAi8PkmKuJPvU
https://openreview.net/forum?id=pAi8PkmKuJPvU
vf_Vs2hRaxUEP
comment
1,392,700,920,000
fMpXedxjhIfFo
[ "everyone" ]
[ "園田翔" ]
ICLR.cc/2014/conference
2014
reply: Dear Reviewer 0a0f Thanks for your detailed and helpful comments. I apologize for my ambiguous descriptions; I am really grateful for your careful reading of our paper. We have supplemented a detailed explanation of the sampling procedure, and replaced the MNIST experiment with a further investigated version. The sampling algorithm runs as quickly as sampling from ordinary distributions such as the normal distribution. The updated paper will be online in a few days. > 1 - sampling the parameters of the first layer using Importance sampling or an accept-reject MCMC method (both methods are apparently confused by the authors) in a data dependent way. Certainly I misused 'importance sampling' without a following explanation; this has been corrected in the modified paper. It is also explained in the supplementary section A that we used only the acceptance-rejection sampling method, and not MCMC. As the oracle distribution usually has an extremely multimodal shape, MCMC could not perform sufficiently well. One of our preliminary experiments showed that it often fails to find some of the modes. > This may be understood as a 'pre-training scheme', not as an 'initialization'. I take it you meant that: 1) both 'pre-training' and 'initialization' are processes preceding the real training, 2) 'pre-training' is associated with data and 'initialization' is not, 3) the proposed method contains data-dependent sampling and regression, which obviously use the data, 4) therefore, we should call it 'pre-training' instead of 'initialization'. I am sorry if I am misunderstanding. I completely agree with 1), whereas for 2), I think there is some indefiniteness. I recognize that 'pre-training' has a narrow meaning, which typically reminds people of 'unsupervised' methods such as RBMs and stacked AEs (and their variations), while our oracle distribution contains the information of both input and output vectors (see, for instance, Eq. 3 and 8). On the other hand, I think that 'initialization' has a broader meaning, independent of whether the given data are used or not. As I surveyed in Section 1, many types of 'initialization' have been proposed, and some of them contain data-dependent steps such as linear regression (Yam and Chow) and prototypes (Denoeux and Lengelle). Therefore I see less need to call it 'pre-training'. > for instance stating that (Efficient Backprop, Le Cun 1998) proposes to initialize neural networks by sampling from a uniform distribution [-1/sqrt(fan-in);1/sqrt(fan-in)] when it suggests in fact to sample from a normal distribution of mean zero and standard deviation sigma=1/sqrt(fan-in). Thanks again for your precise correction. I have corrected the description from the range to the standard deviation of the distribution. I am afraid, however, that in Efficient Backprop a 'normal' distribution is not necessarily required. > does not take the pre-training time into account. > This is especially worrisome since the pre-training scheme relies on MCMC sampling which is usually very computationally expensive compared to back propagation Sorry for the lack of detail: we did not use MCMC, and the sampling time was much quicker than the BP iterations. We have added a time comparison in the renewed MNIST experimental section. Here is a snippet of the time comparison: - sampling time of SR: 0.0115 [sec.] - regression time of SR: 2.60 [sec.] - 45,000 iterations of BP training of SR: 2000 [sec.] (0.05 [sec.] per one itr.)
> inconsistent with previous work when they give a test error rate for back-propagation and 300 hidden units around 90% when it should be around 1.6% (cf. MNIST dataset website). The inconsistency was caused by the differences between the experimental settings: 1) the number of hidden units: same (300 units) 2) the number of hidden layers: same (1 layer) 3) scaling of input vectors: NOT same - In LeCun et al. 1998 (the website setting) the input vectors were scaled, while ours were not. -> We scaled them in the renewed setting 4) preprocessing of input vectors: NOT same - In LeCun et al. 1998, the 1.6% entry '2-layer NN, 300 HU' used deskewed images, while we did not. Therefore, the 4.7% entry '2-layer NN, 300 hidden units, mean square error' should be the closest setting. It still differs in that they used mean square error, while we used the cross-entropy loss. -> We set our goal around 4.7% 5) the representation of output labels: NOT same - In our previous setting we used 'one-of-k' coding, while LeCun et al. 1998 did not. -> We rearranged the output vectors as 'random coding'; it still differs, but it is a more standard and efficient setting. 6) the number of training examples: NOT same - In LeCun et al. 1998, they used 15,000 and more, while we used just 1,000. -> In our renewed setting, we used 15,000 examples for training. Our latest test error rates improved as follows: SR: 23.0% (right after initialization) -> 9.94% (after BP training) SBP: 90.0% (right after initialization) -> 8.30% (after BP training) BP: 90.0% (right after initialization) -> 8.77% (after BP training) Also, SR with 6,000 hidden units reached a 3.66% test error rate right after initialization. I am looking forward to your reply, thanks. Sonoda
pAi8PkmKuJPvU
Nonparametric Weight Initialization of Neural Networks via Integral Representation
[ "Sho Sonoda", "Noboru Murata" ]
A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with proposed initialization converges faster than uniformly random initialization. Also it is shown that the proposed method achieves enough accuracy by itself without backpropagation in some cases.
[ "hidden parameters", "neural networks", "integral representation", "neural network", "backpropagation", "nonparametric weight initialization", "new initialization", "nonparametric probability distribution", "proposal" ]
https://openreview.net/pdf?id=pAi8PkmKuJPvU
https://openreview.net/forum?id=pAi8PkmKuJPvU
07ToXaIwBiYKp
review
1,391,926,320,000
pAi8PkmKuJPvU
[ "everyone" ]
[ "anonymous reviewer a97a" ]
ICLR.cc/2014/conference
2014
title: review of Nonparametric Weight Initialization of Neural Networks via Integral Representation review: This paper introduces a new method for initializing the weights of a neural network. The technique is based on integral transforms. The function to learn, f, is represented as an infinite combination of basis functions weighted by some distribution. Conversely, this distribution can be obtained by projecting the function f onto another (related) set of basis functions evaluated at every point x in the input space. The powerful analytic framework yields a probability distribution from which initial parameters of the neural network can be sampled. This is done using an acceptance-rejection sampling method. In order to overcome the computational inefficiency of the basic procedure, the authors propose a coordinate transform method that reduces the rejection rate. It would be useful to have more information on the order of magnitude by which the method is slower/faster compared to training a classically initialized neural network, and how the method scales with the number of data points and the dimensions of the input space. The experimental section consists of three experiments measuring the convergence of learning for various datasets (two low-dimensional toy examples and MNIST). On the low-dimensional toy examples, the proposed initialization is shown to be superior to uniform initialization. However, these two datasets are to a certain extent already well modeled by local methods, for which good initialization heuristics are readily available (e.g. RBF networks + k-means). I am concerned about the validity of the MNIST experiment where a baseline error of >0.8 (80%?) is obtained with 1000 samples while other papers typically report 10% error for a similar amount of data.
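For readers unfamiliar with the construction sketched in this review, a generic integral representation of this kind (written in a common textbook form; the paper's exact kernel and transform may differ) expresses the target function as a continuum of ridge units, with a finite network read as a Monte Carlo discretization of the integral:

```latex
% A generic integral representation of a single-hidden-layer network;
% T and \eta stand in for the paper's specific transform and basis
% (illustrative form only, not necessarily the exact kernel used).
f(x) \;=\; \int T(a,b)\,\eta(a \cdot x - b)\,\mathrm{d}a\,\mathrm{d}b
\;\approx\; \sum_{j=1}^{J} c_j\,\eta(a_j \cdot x - b_j),
\qquad (a_j,b_j) \ \text{drawn from a density proportional to } |T(a,b)|.
```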
pAi8PkmKuJPvU
Nonparametric Weight Initialization of Neural Networks via Integral Representation
[ "Sho Sonoda", "Noboru Murata" ]
A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with proposed initialization converges faster than uniformly random initialization. Also it is shown that the proposed method achieves enough accuracy by itself without backpropagation in some cases.
[ "hidden parameters", "neural networks", "integral representation", "neural network", "backpropagation", "nonparametric weight initialization", "new initialization", "nonparametric probability distribution", "proposal" ]
https://openreview.net/pdf?id=pAi8PkmKuJPvU
https://openreview.net/forum?id=pAi8PkmKuJPvU
FNE-gqxSuzFWo
review
1,392,700,980,000
pAi8PkmKuJPvU
[ "everyone" ]
[ "園田翔" ]
ICLR.cc/2014/conference
2014
review: In response to our reviewers' comments, we have supplemented a detailed explanation of the sampling procedure, and replaced the MNIST experiment with a further investigated version. The sampling algorithm runs as quickly as sampling from ordinary distributions such as the normal distribution. The updated paper will be online in a few days.
pAi8PkmKuJPvU
Nonparametric Weight Initialization of Neural Networks via Integral Representation
[ "Sho Sonoda", "Noboru Murata" ]
A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with proposed initialization converges faster than uniformly random initialization. Also it is shown that the proposed method achieves enough accuracy by itself without backpropagation in some cases.
[ "hidden parameters", "neural networks", "integral representation", "neural network", "backpropagation", "nonparametric weight initialization", "new initialization", "nonparametric probability distribution", "proposal" ]
https://openreview.net/pdf?id=pAi8PkmKuJPvU
https://openreview.net/forum?id=pAi8PkmKuJPvU
lFiWFX2hzO3DM
review
1,391,811,900,000
pAi8PkmKuJPvU
[ "everyone" ]
[ "anonymous reviewer 2ac8" ]
ICLR.cc/2014/conference
2014
title: review of Nonparametric Weight Initialization of Neural Networks via Integral Representation review: This paper presents a new method for initializing the parameters of a feedforward neural network with a single hidden layer. The idea is to sample the parameters from a data-dependent distribution computed as an approximation of a kernel transformation of the target distribution. * The parameter initialization problem is important and the main idea of the paper is interesting. Now, computing the transformation of the target distribution is the same as solving an equation for the parameters of the network, analytically, assuming an unlimited number of hidden units. As this is a difficult problem, the method relies on an approximation of the parameter density and sampling therefrom N times when the actual network is assumed to have N hidden units. * It would be really helpful to have a notion of how expensive it is to compute the approximation of the parameter density and to sample from it. Judging from the formulas this does not seem cheap. * The paper studies networks with sigmoid pairs. What can the authors say about sigmoid units? In Figure 1 left, the figure does not show that the support is non-convex, as claimed in the caption. The axes labels in Figure 1 are too small.
DETu4zMyQH4kV
Semistochastic Quadratic Bound Methods
[ "Aleksandr Y. Aravkin", "Anna Choromanska", "Tony Jebara", "Dimitri Kanevsky" ]
Partition functions arise in a variety of settings, including conditional random fields, logistic regression, and latent gaussian models. In this paper, we consider semistochastic quadratic bound (SQB) methods for maximum likelihood inference based on partition function optimization. Batch methods based on the quadratic bound were recently proposed for this class of problems, and performed favorably in comparison to state-of-the-art techniques. Semistochastic methods fall in between batch algorithms, which use all the data, and stochastic gradient type methods, which use small random selections at each iteration. We build semistochastic quadratic bound-based methods, and prove both global convergence (to a stationary point) under very weak assumptions, and linear convergence rate under stronger assumptions on the objective. To make the proposed methods faster and more stable, we consider inexact subproblem minimization and batch-size selection schemes. The efficacy of SQB methods is demonstrated via comparison with several state-of-the-art techniques on commonly used datasets.
[ "methods", "comparison", "techniques", "variety", "settings", "conditional random fields", "logistic regression", "latent gaussian models", "semistochastic quadratic bound" ]
https://openreview.net/pdf?id=DETu4zMyQH4kV
https://openreview.net/forum?id=DETu4zMyQH4kV
MJ29KaZP96KsB
review
1,392,710,280,000
DETu4zMyQH4kV
[ "everyone" ]
[ "Anna Choromanska" ]
ICLR.cc/2014/conference
2014
review: We would like to thank all the reviewers for their comments. The paper after revisions was posted on arxiv today and will soon be publicly visible. Below we enclose answers to the reviewers' comments: Reviewer 1: We thank the reviewer for the comments. We have highlighted representation learning as an important application in the introduction, and furthermore pointed out that our majorization methods have been applied to this area in previous work in the context of batch learning. Since the main contribution of this paper is to propose a semi-stochastic extension, the results are directly applicable to representation learning. Specific extensions for deep learning are noted as future work in the conclusions. Reviewer 2: We thank the reviewer for the comments. 1) Hessian-free optimization methods (also known as truncated Newton methods) solve Newton systems (and damped/inexact variants) inexactly to obtain descent directions. While classic methods use full gradients (and so are not competitive in the stochastic setting, w.r.t. passes through the data), recent work also uses subsampling for gradient and Hessian approximations. Depending on the accuracy to which second order terms are approximated, these methods may require more or less storage than the method we propose. We emphasize that our main contribution in this paper is to show that majorization methods can be used in a semi-stochastic setting to compete with state-of-the-art fully stochastic methods, like SGD, ASGD or SAG, with respect to passes through the data. 2) We presented the results on six datasets; three of them are sparse and the remaining ones are dense. The number of examples in these datasets is between 12K and 580K. Among these datasets we have some with 4932 dimensions and 46236 dimensions. These are not small datasets, and are in fact commonly used to test new algorithms in the community (see e.g. Le Roux, Schmidt, Bach 2012). With regard to Hessian-free optimization, we leave it to future work to compare majorization-based schemes with truncated Newton-based schemes. As far as we know, Hessian-free optimization methods have not been compared with state-of-the-art stochastic methods with respect to passes through the data. Also, in the batch setting, majorization methods have recently performed favorably in comparison to full Newton methods and quasi-Newton methods. 3) Theorem 1 part 1 is correct - it neither relies on E[Sigma_S^{-1}] = Sigma^{-1}, nor claims that the average direction is unbiased. The point is that the average direction (s^k in the paper), although biased, is still gradient related, which we show in part 2. Note that in the definition of s^k = E[(Sigma_S^k + eta I)^{-1}] g^k, the expectation is taken after the inverse. Using the framework of Bertsekas and Tsitsiklis, it is enough to have the average direction be gradient related 'enough', which is shown in (7). 4) Majorization methods have been shown to be competitive with second order optimization methods in prior work, in the batch setting. The main contribution of this paper is to make these methods competitive in the stochastic setting. While the number of samples used to approximate the gradient increases across iterations until the entire training data is eventually used (we control this rate of increase as explained in the experimental section), the number of samples used to obtain the second order bound-based approximation remains capped at a small number, as also explained in the experimental section.
Growing the mini-batch is essential to achieve a linear convergence rate as captured in Theorem 2; otherwise the rate of convergence would be sublinear, which is the typical convergence rate of state-of-the-art fully stochastic methods, e.g. SGD. It is not clear what is meant by 'if other methods grow their minibatch'. It is always possible that other methods can be improved, either using some ideas in this paper or other ideas. Our main contribution is to show that majorization-based methods are competitive with the state of the art in the stochastic setting. Reviewer 3: We thank the reviewer for the comments. 1) The summary of the paper provided by the reviewer is nice; however, `quadratic bound' is a better name, since the quadratic approximation is based on the bound for the partition function, rather than the Hessian or a standard Hessian approximation. 2) We leave comparisons with Byrd et al. to future work as their setting differs from ours. In particular, Byrd et al. do not compare to state-of-the-art stochastic methods in their 2012 paper; their experiments show that dynamically growing the batch size is faster than a full batch method of the same type, and more accurate than a Newton-type method that uses a fixed sample size; they also compare with OWL. In their 2011 paper, they consider a batch-gradient, sampled Hessian approach. 3) We refer the reviewer to Jebara & Choromanska 2012, where the bound method was shown to outperform BFGS and Newton methods in the batch setting (for convex problems), intuitively due to adapting not only locally, like most methods including gradient methods and Newton-like methods, but also globally, to the underlying optimization problem. The main motivation is that when a global tight bound is used, optimization methods based on majorization are competitive with standard optimization methods for problems involving the partition function. 4) We used the phrase 'maximum likelihood inference' in the context of maximum likelihood estimation. Thanks for pointing out this issue; the text has now been corrected. 5) Different batching strategies indeed have a long history. We found that the theoretical arguments in Friedlander & Schmidt (2011) strongly motivate a batch growing scheme, which we call semi stochastic, and which they also call 'hybrid' (on the other hand, by stochastic we mean having a fixed-size mini-batch). They show that the convergence rate is a function of the conditioning of the problem and the degree of error in the gradient approximation; therefore, as long as the latter is dominated by the former, we can recover a linear rate in the stochastic setting. 6) We indeed used the word 'iterations' to refer to passes through the data in the phrase pointed out by the reviewer. Thanks for pointing out this issue - the text has now been corrected. 7) We have corrected the introduction according to the reviewer's suggestions. We have added the comment that we focus on partition functions, which are of central interest in many learning problems, like training CRFs or log-linear models. We also indicated early in the introduction that we will be extending the work of Jebara & Choromanska (2012) to the semi stochastic setting. 8) The Bound Computation subroutine is directly taken from the previous work of Jebara & Choromanska (2012) (see Algorithm 1). The order indeed matters and slightly affects S, but not z or r. Jebara & Choromanska (2012) investigated various ordering schemes and noted no significant difference in performance (see page 2).
9) The reviewer is correct, n was indeed undefined. n is now defined when Omega is introduced. 10) Omega is indeed the set of values that y can take. We thank the reviewer; this issue has been fixed. 11) We agree with the reviewer - current results show that curvature information can be incorporated in a way that guarantees convergence to stationarity under weak assumptions (Theorem 1) and the recovery of a linear rate provided an aggressive batch growing scheme for logistic regression (Theorem 2); in both cases the use of curvature approximations from samples is not proven to help. We are also interested in finding a stronger result of the type that the reviewer is suggesting, perhaps under suitable assumptions on the problem class, but we leave this to future work. 12) Note that additional assumptions are required to ensure a uniform lower bound on the smallest eigenvalue in the absence of regularization. When eta is present, it provides such a lower bound; so we have removed the corollary and simply noted this. 13) The problem with the missing rho has been corrected. We thank the reviewer for pointing this out. 14) We agree with the point made by the reviewer - one recovers the linear rate by essentially controlling the error from sampling to be bounded by the geometric terms from the deterministic setting. The reviewer's point is well taken - the theorem does not show that the inverse of the curvature matrix is actually helping, as the empirical results suggest. 15) The fact that inexact solutions to subproblems can be interpreted as regularization is often used in the inverse problems community, and explained in some detail in Vogel's book. This is in fact the reference we wanted to cite, and we are very grateful that the reviewer caught this error! We have also added the reference that the reviewer suggested. 16) We thank the reviewer for the remark about Page 7, section 12.1, Lemma 4 - this has been noted prior to the presentation of the lemma.
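To make the batching scheme described in this response concrete, here is a minimal sketch of a semistochastic second-order step for l2-regularized logistic regression: the gradient mini-batch grows geometrically across iterations while the curvature mini-batch stays small and fixed, and the damped system is solved inexactly by conjugate gradients. The curvature used here is the sampled logistic-regression Hessian rather than the paper's partition-function bound matrix, and the fixed step size stands in for whatever step rule the authors use, so this illustrates only the batching and CG structure.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def semistochastic_fit(X, y, n_steps=50, grad_batch0=64, growth=1.2,
                       curv_batch=32, eta=1e-1, lam=1e-3, step=1.0):
    n, d = X.shape
    theta = np.zeros(d)
    grad_batch = float(grad_batch0)
    for _ in range(n_steps):
        # Gradient mini-batch: grows geometrically until it covers the data.
        gidx = rng.choice(n, size=min(int(grad_batch), n), replace=False)
        p = sigmoid(X[gidx] @ theta)
        g = X[gidx].T @ (p - y[gidx]) / len(gidx) + lam * theta

        # Curvature mini-batch: small and of fixed size throughout.
        cidx = rng.choice(n, size=min(curv_batch, n), replace=False)
        pc = sigmoid(X[cidx] @ theta)
        w = pc * (1 - pc)

        def matvec(v):
            # (Sigma_S + eta I) v with Sigma_S the sampled logistic curvature.
            return X[cidx].T @ (w * (X[cidx] @ v)) / len(cidx) + (lam + eta) * v

        A = LinearOperator((d, d), matvec=matvec)
        direction, _ = cg(A, g, maxiter=10)   # inexact subproblem solve
        theta -= step * direction
        grad_batch *= growth
    return theta
```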
DETu4zMyQH4kV
Semistochastic Quadratic Bound Methods
[ "Aleksandr Y. Aravkin", "Anna Choromanska", "Tony Jebara", "Dimitri Kanevsky" ]
Partition functions arise in a variety of settings, including conditional random fields, logistic regression, and latent gaussian models. In this paper, we consider semistochastic quadratic bound (SQB) methods for maximum likelihood inference based on partition function optimization. Batch methods based on the quadratic bound were recently proposed for this class of problems, and performed favorably in comparison to state-of-the-art techniques. Semistochastic methods fall in between batch algorithms, which use all the data, and stochastic gradient type methods, which use small random selections at each iteration. We build semistochastic quadratic bound-based methods, and prove both global convergence (to a stationary point) under very weak assumptions, and linear convergence rate under stronger assumptions on the objective. To make the proposed methods faster and more stable, we consider inexact subproblem minimization and batch-size selection schemes. The efficacy of SQB methods is demonstrated via comparison with several state-of-the-art techniques on commonly used datasets.
[ "methods", "comparison", "techniques", "variety", "settings", "conditional random fields", "logistic regression", "latent gaussian models", "semistochastic quadratic bound" ]
https://openreview.net/pdf?id=DETu4zMyQH4kV
https://openreview.net/forum?id=DETu4zMyQH4kV
f6UJcohNWS-vg
review
1,392,137,580,000
DETu4zMyQH4kV
[ "everyone" ]
[ "anonymous reviewer 7b7e" ]
ICLR.cc/2014/conference
2014
title: review of Semistochastic Quadratic Bound Methods review: This paper looks at applying a stochastic truncated Newton method to general linear models (GLMs), utilizing a bound on the partition function from Jebara & Choromanska (2012). Some basic theory is given, along with some experiments with logistic regression. Stochastic or semi-stochastic truncated Newton methods such as Hessian-free optimization, and the work of Byrd et al., have already been applied to learning neural networks whose objective functions correspond to the negative LL of neural network predictions under cross-entropy error, which is like taking a GLM and replacing theta in the defining expression with g(theta), where g is the neural network function. In the special case that g=I this corresponds exactly to logistic regression. One thing I'm very confused about is what the bounding scheme of Jebara & Choromanska (2012) that is applied in this paper actually does. Since it involves summing over all possible states, it can't be more efficient than just computing the partition function, and its various derivatives and second derivatives, directly. Why use it then? The Hessian of the negative LL of a general linear model will already be PSD, so that can't be the reason. Detailed comments: Abs: What do you mean by 'maximum likelihood inference'? Do you mean estimation? Learning? Page 1: There are versions of those batch methods that can use minibatches and they seem to work pretty well, despite the lack of strong theoretical results. I guess though that this would fall into the category of what you are calling 'semi-stochastic'? Actually, having read further it appears that you would only call these stochastic if the size of the minibatch grows during optimization. Page 1: When you say that stochastic methods converge in fewer iterations than batch ones, this makes no sense. Perhaps you meant to say passes over the training set, not iterations. Page 2: The abstract made prominent mention of partition functions. Yet, the introduction doesn't make any mention of them, and seems to be describing a new optimization method for standard tractable objective functions. Your intro should mention that you will be focusing on generalized linear models (what you are calling generalized linear model) and extending the work of Jebara & Choromanska (2012) to the stochastic case. This becomes clear only once the reader has read well past the intro. Page 3: I think you should have some discussion of this Bound Computation subroutine. It looks quite mysterious. Does the order in which the loop goes over the different y's matter? I can't see why it wouldn't, and that seems problematic. Page 3: You don't define n. Is this the size of Omega? Page 3: 'dataset Omega'? I thought Omega was the set of values that y can take. Page 4: The statement and proof of Theorem 1 seem similar to one of the theorems from the Byrd et al (2011) paper you cite. And like that result, it is extremely weak. Basically all it says is that as long as the curvature matrices are not so badly behaved that the size of their inverses grows arbitrarily, multiplying the gradient by their inverses won't prevent gradient descent from converging, provided the learning rate becomes small enough to combat the finite amount of blowing up that there is. It says nothing about why you might actually *want* to multiply by the inverse of this matrix.
But I guess in this general setting there is nothing stronger that can be shown, since in general, multiplying by the inverse curvature matrix computed on only a subset of the data may sometimes do a lot more harm than good. Page 6: I don't see why the first part of Cor 1 should be true. In particular, it isn't enough for the matrix to be positive definite everywhere. It has to be positive definite with a lower bound on the smallest eigenvalue that works for all x (i.e. a uniform bound). Page 6: Theorem 2 should start 'there exists mu, L>0, and rho>0'. You are missing the rho. Page 6: This theorem doesn't seem to show that multiplying by the inverse curvature matrix is actually helping. One could just as easily prove a similar bound for standard SGD. Like this bound, it certainly wouldn't give linear convergence for SGD (an absurdity!), due to the presence of the Ck term in the bound, which will eventually dominate in stochastic optimization. Page 6: Could you elaborate more on the point 'further regularizes the sub-problems'? There is a detailed discussion of this kind of effect in 'Training Deep and Recurrent Neural Networks with Hessian-Free Optimization', section 8.7. What in particular does [38] say about this? Page 7: In section 12.1 of the above-mentioned article, a similar result to Lemma 4 is proved. It is shown that the quadratic associated with the CG optimization is bounded, which implies the range result (since if the vector is not in the range, the optimization must be unbounded). They look at the Gauss-Newton matrix, but for general linear models, the Hessian has the same structure, or the matrix from the bound of Jebara & Choromanska (2012) has the same basic structure as this matrix.
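On the reviewer's remark that the negative log-likelihood Hessian of such a model is already PSD, a quick reminder of the logistic-regression case (stated here as background, not as a claim about the paper's bound):

```latex
% Negative log-likelihood Hessian for logistic regression with
% \sigma(z) = 1/(1+e^{-z}); each term is a PSD rank-one matrix.
\nabla^2_\theta \, \ell(\theta)
  \;=\; \sum_{i=1}^{N} \sigma(x_i^\top \theta)\,\bigl(1 - \sigma(x_i^\top \theta)\bigr)\, x_i x_i^\top
  \;\succeq\; 0 .
```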
DETu4zMyQH4kV
Semistochastic Quadratic Bound Methods
[ "Aleksandr Y. Aravkin", "Anna Choromanska", "Tony Jebara", "Dimitri Kanevsky" ]
Partition functions arise in a variety of settings, including conditional random fields, logistic regression, and latent gaussian models. In this paper, we consider semistochastic quadratic bound (SQB) methods for maximum likelihood inference based on partition function optimization. Batch methods based on the quadratic bound were recently proposed for this class of problems, and performed favorably in comparison to state-of-the-art techniques. Semistochastic methods fall in between batch algorithms, which use all the data, and stochastic gradient type methods, which use small random selections at each iteration. We build semistochastic quadratic bound-based methods, and prove both global convergence (to a stationary point) under very weak assumptions, and linear convergence rate under stronger assumptions on the objective. To make the proposed methods faster and more stable, we consider inexact subproblem minimization and batch-size selection schemes. The efficacy of SQB methods is demonstrated via comparison with several state-of-the-art techniques on commonly used datasets.
[ "methods", "comparison", "techniques", "variety", "settings", "conditional random fields", "logistic regression", "latent gaussian models", "semistochastic quadratic bound" ]
https://openreview.net/pdf?id=DETu4zMyQH4kV
https://openreview.net/forum?id=DETu4zMyQH4kV
WpCbeNsJXqeuj
review
1,390,940,460,000
DETu4zMyQH4kV
[ "everyone" ]
[ "Anna Choromanska" ]
ICLR.cc/2014/conference
2014
review: Dear readers and reviewers, we have just updated the paper on arxiv. In particular, we simplified Inequality (9) and Proof 6, clarified the statement of Theorem 2, and fixed accidental typos.
DETu4zMyQH4kV
Semistochastic Quadratic Bound Methods
[ "Aleksandr Y. Aravkin", "Anna Choromanska", "Tony Jebara", "Dimitri Kanevsky" ]
Partition functions arise in a variety of settings, including conditional random fields, logistic regression, and latent gaussian models. In this paper, we consider semistochastic quadratic bound (SQB) methods for maximum likelihood inference based on partition function optimization. Batch methods based on the quadratic bound were recently proposed for this class of problems, and performed favorably in comparison to state-of-the-art techniques. Semistochastic methods fall in between batch algorithms, which use all the data, and stochastic gradient type methods, which use small random selections at each iteration. We build semistochastic quadratic bound-based methods, and prove both global convergence (to a stationary point) under very weak assumptions, and linear convergence rate under stronger assumptions on the objective. To make the proposed methods faster and more stable, we consider inexact subproblem minimization and batch-size selection schemes. The efficacy of SQB methods is demonstrated via comparison with several state-of-the-art techniques on commonly used datasets.
[ "methods", "comparison", "techniques", "variety", "settings", "conditional random fields", "logistic regression", "latent gaussian models", "semistochastic quadratic bound" ]
https://openreview.net/pdf?id=DETu4zMyQH4kV
https://openreview.net/forum?id=DETu4zMyQH4kV
GXPfmkNE8um1e
review
1,391,723,820,000
DETu4zMyQH4kV
[ "everyone" ]
[ "anonymous reviewer 7e18" ]
ICLR.cc/2014/conference
2014
title: review of Semistochastic Quadratic Bound Methods review: The paper describes a second order stochastic optimization method where the gradient is computed on mini-batches of increasing size and the curvature is estimated using a bound computed on a possibly separate mini-batch of possibly constant size. This is clearly a state-of-the-art method. The authors derive a rather complete ensemble of theoretical guarantees, including the guarantee of converging with a nice linear rate. This is a strong paper about stochastic optimization. In the specific context of ICLR, I regret that the authors did not explain why this technique is useful for learning representations. The basic setup is that of maximum likelihood training of an exponential family model. In practice, there are many reasons to believe that such a technique would work on mixture models or models that induce representations (although the theory might not be as simple.) I believe that this paper should be accepted provided that the authors pay at least some lip service to 'representation learning'.
DETu4zMyQH4kV
Semistochastic Quadratic Bound Methods
[ "Aleksandr Y. Aravkin", "Anna Choromanska", "Tony Jebara", "Dimitri Kanevsky" ]
Partition functions arise in a variety of settings, including conditional random fields, logistic regression, and latent gaussian models. In this paper, we consider semistochastic quadratic bound (SQB) methods for maximum likelihood inference based on partition function optimization. Batch methods based on the quadratic bound were recently proposed for this class of problems, and performed favorably in comparison to state-of-the-art techniques. Semistochastic methods fall in between batch algorithms, which use all the data, and stochastic gradient type methods, which use small random selections at each iteration. We build semistochastic quadratic bound-based methods, and prove both global convergence (to a stationary point) under very weak assumptions, and linear convergence rate under stronger assumptions on the objective. To make the proposed methods faster and more stable, we consider inexact subproblem minimization and batch-size selection schemes. The efficacy of SQB methods is demonstrated via comparison with several state-of-the-art techniques on commonly used datasets.
[ "methods", "comparison", "techniques", "variety", "settings", "conditional random fields", "logistic regression", "latent gaussian models", "semistochastic quadratic bound" ]
https://openreview.net/pdf?id=DETu4zMyQH4kV
https://openreview.net/forum?id=DETu4zMyQH4kV
Jxjf_jmDfngE0
review
1,391,906,880,000
DETu4zMyQH4kV
[ "everyone" ]
[ "anonymous reviewer a474" ]
ICLR.cc/2014/conference
2014
title: review of Semistochastic Quadratic Bound Methods review: The paper introduces a certain second-order method that is based on quadratic upper bounds to convex functions and on slowly increasing the size of the batch. A few results show that the method is well-behaved and has reasonable convergence rates on logistic regression. This work is very similar to Hessian-free optimization, because it also uses CG to invert low-rank approximations to the curvature matrix, and it has comparable cost but greater memory complexity due to its need to store many parameter vectors. Likewise, it builds on previous work that finds quadratic upper bounds to convex functions, but a quadratic upper bound seems restrictive, and perhaps a quadratic approximation would be more appropriate. Pros: Method is somewhat novel. Cons: - Experiments very small and unrealistic (sometimes tens of dimensions), and there is no comparison with Hessian-free optimization - Theorem 1 part 1 is wrong: the method is biased, because while E[Sigma_S] = Sigma, E[Sigma_S^{-1}] != Sigma^{-1}. In general, second-order methods that use modest numbers of samples for the curvature matrix are necessarily biased. - The paper has two ideas: a certain second-order method, and a simple scheme for growing the minibatch. But which of these is essential? Would we get similar results if we didn't grow the minibatch? How large would the minibatch end up being? If the other methods grow their minibatch as well, do they become competitive?
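The bias point raised in this review (E[Sigma_S] = Sigma does not imply E[Sigma_S^{-1}] = Sigma^{-1}) is easy to verify numerically. The snippet below only illustrates that general fact with a generic sub-sampled curvature matrix; it is not the paper's specific estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m, trials = 5, 10000, 50, 2000

# Full-data curvature Sigma and mini-batch estimates Sigma_S (unbiased: E[Sigma_S] = Sigma).
X = rng.normal(size=(n, d))
Sigma = X.T @ X / n + 0.1 * np.eye(d)
inv_of_mean = np.linalg.inv(Sigma)

acc = np.zeros((d, d))
for _ in range(trials):
    idx = rng.choice(n, size=m, replace=False)
    Sigma_S = X[idx].T @ X[idx] / m + 0.1 * np.eye(d)
    acc += np.linalg.inv(Sigma_S)
mean_of_inv = acc / trials

# The gap is systematic (matrix Jensen inequality): E[Sigma_S^{-1}] dominates Sigma^{-1},
# so a Newton-type step built from a small curvature batch is a biased estimate of the full step.
print(np.linalg.norm(mean_of_inv - inv_of_mean) / np.linalg.norm(inv_of_mean))
```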
DETu4zMyQH4kV
Semistochastic Quadratic Bound Methods
[ "Aleksandr Y. Aravkin", "Anna Choromanska", "Tony Jebara", "Dimitri Kanevsky" ]
Partition functions arise in a variety of settings, including conditional random fields, logistic regression, and latent gaussian models. In this paper, we consider semistochastic quadratic bound (SQB) methods for maximum likelihood inference based on partition function optimization. Batch methods based on the quadratic bound were recently proposed for this class of problems, and performed favorably in comparison to state-of-the-art techniques. Semistochastic methods fall in between batch algorithms, which use all the data, and stochastic gradient type methods, which use small random selections at each iteration. We build semistochastic quadratic bound-based methods, and prove both global convergence (to a stationary point) under very weak assumptions, and linear convergence rate under stronger assumptions on the objective. To make the proposed methods faster and more stable, we consider inexact subproblem minimization and batch-size selection schemes. The efficacy of SQB methods is demonstrated via comparison with several state-of-the-art techniques on commonly used datasets.
[ "methods", "comparison", "techniques", "variety", "settings", "conditional random fields", "logistic regression", "latent gaussian models", "semistochastic quadratic bound" ]
https://openreview.net/pdf?id=DETu4zMyQH4kV
https://openreview.net/forum?id=DETu4zMyQH4kV
dd1X31kbEJ3zP
review
1,390,940,460,000
DETu4zMyQH4kV
[ "everyone" ]
[ "Anna Choromanska" ]
ICLR.cc/2014/conference
2014
review: Dear readers and reviewers, we have just updated the paper on arxiv. In particular, we simplified Inequality (9) and Proof 6, clarified the statement of Theorem 2, and fixed accidental typos.
O_cyOSWv8TrlS
Neuronal Synchrony in Complex-Valued Deep Networks
[ "David Reichert", "Thomas Serre" ]
Deep learning has recently led to great successes in tasks such as image recognition (e.g. Krizhevsky et al., 2012). However, deep networks are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to real cortical circuits. The challenge is to identify which neural mechanisms are relevant, and to find suitable abstractions to model them. Here, we show how aspects of spike timing, long hypothesized to play a crucial role in cortical information processing, could be incorporated into deep networks to build richer, versatile deep representations. We introduce a neural network formulation based on complex-valued neuronal units that is not only biologically meaningful but also amenable to a variety of deep learning frameworks. Here, units are attributed both a firing rate and a phase, the latter indicating properties of spike timing. We show how this formulation qualitatively captures several aspects thought to be related to neuronal synchrony, including gating of information processing and dynamic binding of distributed object representations. Focusing on the latter aspect, we demonstrate the potential of the approach in several simple experiments. Thus, synchrony could implement a flexible mechanism that fulfills multiple functional roles in deep networks.
[ "deep networks", "neuronal synchrony", "spike timing", "great successes", "tasks", "image recognition", "krizhevsky et", "power" ]
https://openreview.net/pdf?id=O_cyOSWv8TrlS
https://openreview.net/forum?id=O_cyOSWv8TrlS
HkA9HPn1mY7Ol
review
1,392,059,880,000
O_cyOSWv8TrlS
[ "everyone" ]
[ "anonymous reviewer 0ae5" ]
ICLR.cc/2014/conference
2014
title: review of Neuronal Synchrony in Complex-Valued Deep Networks review: SUMMARY The paper 'Neuronal Synchrony in Complex-Valued Deep Networks' tackles the important question of how it is that neural representations encode information about multiple objects independently and simultaneously. The idea that the relative phase of periodic neural responses could encode such groupings has been in circulation for some time (and this paper provides a good overview of relevant literature), but to date the principle has not gained traction on the pattern recognition side of neural modeling. This paper aims to change that by showing that a complex-valued Deep Boltzmann Machine can naturally segment images (in some simple synthetic cases) according to the various visible objects, through the phase of latent responses. The basis for the technical contribution of this paper is a novel response function. The [complex-valued] response function described by equations 1 and 2 is a function of a complex-valued weight vector applied to a complex-valued feature vector. Output z_j = r_j e^{i theta_j} of each model neuron is determined by arg(z_j) = arg(w . x) and r_j = f( alpha |sum_j w_j x_j| + eta sum_j w_j |x_j| ) Comment: The text leading up to equation 1 is confusing regarding z. Is it an output or an input? It doesn't seem like we're dealing with a dynamical system, and the input was called x in the paragraph above. Comment: the use of |x| in equation 2 is confusing because presumably |w . x| is a vector norm whereas in w . |x| it denotes elementwise magnitude of the complex elements of x. Right? Comment: the authors mention that the two terms in f() can be weighted, but don't include those weights in Eq. 2, as I have done above (alpha, eta). Why use this function? The authors make an intuitive argument in the text that these two terms capture salient aspects of a more detailed spiking network based on Hodgkin-Huxley neurons, and Figure 1 illustrates the effect quantitatively for a specific, simple 2-to-1 feedforward network of rhythmic neurons. The value of this transfer function as a surrogate for detailed compartmental models is interesting, but is not the focus of the remainder of the paper. The paper's section 3 'Experiments: the case of binding by synchrony' was somewhat difficult for me to understand. A [conventional, real-valued] DBM was trained on small pictures with horizontal and vertical bars, and then 'converted' to a complex-valued network (and was the activation function changed to the one from Eq. 2? What does that mean in terms of inference in the DBM?) It was found that when clamping the visible-unit magnitudes to a particular picture, and 'sampling' (is this actually sampling from a probability distribution?) their phase and the hidden units' magnitude and phase, there were groups of hidden units with phases that lined up with particular bars. This is good because it suggests a means of teasing apart the DBM's latent representation into groups that are 'working together' to represent something independently from other groups. (I wanted to see some sort of control trial, showing that a plain old real-valued DBM could not achieve the same thing, but I can't really think of the right thing to try.) The demonstration in Figure 3 shows that already with this set of bars there is an issue of phase resolution: it appears that four different bars are all coded the same shade of green. Is this a problem?
A readout mechanism might be confused and judge all these bars to be one object, even though bars occur independently in the training data. Figure 3 illustrates what happens after 100 iterations of sampling, what happens after more iterations? Do the co-incidentally green bars change color independently of one another? Overall, this research is highly relevant to the aims of the ICLR conference. It is at an early stage of development, in that no learning algorithm has been adapted to work with these complex-valued neurons (although the authors might consider adapting the ssRBM), the images used in the experiments are simple and synthetic, and the authors themselves lament that 'conversion' of a DBM is unreliable. Still, the idea of phase-based coding has a lot of potential, and it is worth exploring. This paper would be an important step in that process. I would strongly suggest that the authors upload their Pylearn2 code so that others can reproduce the effects presented in this paper, especially if training and conversion of DBMs is as unreliable as they suggest. NOVELTY AND QUALITY - the use of complex-valued phase to perform segmentation in a DBM is novel - quality of presentation is very good PRO & CON pro: phase-based segmentation is an intriguing idea from theoretical neuroscience, it's great to see it put to the test in engineering terms con: the stimuli are quite simple compared with other deep learning and vision applications con: the model is at an early proof-of-concept stage con: no natural learning algorithm yet for the model
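To make the response function discussed in this review concrete, the following sketch implements one feedforward pass of complex-valued units whose phase follows the summed complex input and whose magnitude passes a mix of the 'synchrony' term |w . z| and the 'classic' term w . |z| through a squashing nonlinearity. The equal weighting of the two terms and the logistic nonlinearity are assumptions made for illustration, not necessarily the exact choices in the paper.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def complex_layer(z, W, b, alpha=0.5, beta=0.5):
    """One feedforward pass of complex-valued units, following the review's reading of Eqs. 1-2.

    z : (d,) complex inputs (magnitude ~ firing rate, angle ~ phase)
    W : (h, d) real weights, b : (h,) real biases
    Returns the (h,) complex outputs r_j * exp(i * theta_j).
    """
    synchrony = np.abs(W @ z)      # |sum_j w_j z_j|: large only when the drive is phase-aligned
    classic = W @ np.abs(z)        # sum_j w_j |z_j|: ignores phase entirely
    r = sigmoid(alpha * synchrony + beta * classic + b)
    theta = np.angle(W @ z)        # output phase follows the summed complex input
    return r * np.exp(1j * theta)

# Same firing rates, different relative phase of the two inputs.
W, b = np.array([[1.0, 1.0]]), np.array([-1.0])
print(np.abs(complex_layer(np.array([1.0 + 0j, 1.0 + 0j]), W, b)))   # in phase -> stronger response
print(np.abs(complex_layer(np.array([1.0 + 0j, -1.0 + 0j]), W, b)))  # anti-phase -> weaker response
```

With equal firing rates, the in-phase input drives the unit noticeably harder than the anti-phase input, which is the kind of phase-dependent gating the paper attributes to the synchrony term.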
O_cyOSWv8TrlS
Neuronal Synchrony in Complex-Valued Deep Networks
[ "David Reichert", "Thomas Serre" ]
Deep learning has recently led to great successes in tasks such as image recognition (e.g. Krizhevsky et al., 2012). However, deep networks are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to real cortical circuits. The challenge is to identify which neural mechanisms are relevant, and to find suitable abstractions to model them. Here, we show how aspects of spike timing, long hypothesized to play a crucial role in cortical information processing, could be incorporated into deep networks to build richer, versatile deep representations. We introduce a neural network formulation based on complex-valued neuronal units that is not only biologically meaningful but also amenable to a variety of deep learning frameworks. Here, units are attributed both a firing rate and a phase, the latter indicating properties of spike timing. We show how this formulation qualitatively captures several aspects thought to be related to neuronal synchrony, including gating of information processing and dynamic binding of distributed object representations. Focusing on the latter aspect, we demonstrate the potential of the approach in several simple experiments. Thus, synchrony could implement a flexible mechanism that fulfills multiple functional roles in deep networks.
[ "deep networks", "neuronal synchrony", "spike timing", "great successes", "tasks", "image recognition", "krizhevsky et", "power" ]
https://openreview.net/pdf?id=O_cyOSWv8TrlS
https://openreview.net/forum?id=O_cyOSWv8TrlS
ykGzyhr0mas8E
review
1,392,694,980,000
O_cyOSWv8TrlS
[ "everyone" ]
[ "David Reichert" ]
ICLR.cc/2014/conference
2014
review: We thank the reviewers for the fair reviews. For simplicity, below we refer to 'Anonymous 4c84', 'Anonymous ec9e', and 'Anonymous 0ae5' as reviewer 1-3, respectively. We agree with the overall assessment of the reviewers. This is early work that intends to communicate a potentially powerful idea, backed up by simple experiments. There are several avenues for extending it towards principled theoretical frameworks and to address e.g. the need for learning, and we were hoping for feedback from the community to that end. Perhaps the main issue, as raised by reviewer 1, is whether this work would be better suited to a short workshop paper, given its early stage. This is a fair point. At the same time, given the amount of material we had, and given the solid 'story' that we wanted to communicate, it really didn't make sense to us not to write a full-length paper. For a conference track submission, we think the strengths of the work, such as its originality (to this audience) and quality of presentation, could overcome its shortcomings (but of course that's ultimately up to the reviewers and the chair to decide). We can also address several of the concrete concerns raised by the reviewers, and will do so in the following. ****** Changes to paper ****** We are in the process of uploading an updated paper (v4, ETA Feb 18 7pm EST) to the arXiv, which takes the reviewers' comments into account. We clarified notation in the main text (now slightly breaking the 9 pages limit, which should be acceptable at this point) and wording in appendix B. We also added some of the main issues raised by the reviewers and our discussion thereof as appendix C, whenever it made sense to expand on points that were only briefly touched on in the main text. ****** Reviewer 1 & Reviewer 2 ****** 'It is not clear that the segmentation is working for the bars experiment because multiple bars are colored by the same phase. What is the goal here? that each bar has a different, unique phase value? Does the underlying phase distribution effectively partition the bars? ' 'there is an issue of phase resolution: it appears that four different bars are all coded the same shade of green. Is this a problem? A readout mechanism might be confused and judge all these bars to be one object, even though bars occur independently in the training data.' This is a very important issue that we are still considering. It is perhaps an issue more generally with the underlying biological theories rather than just our specific approach. As we noted in the paper, some theories pose that a limit on how many discrete objects can be represented in an oscillation cycle, without interference, explains certain capacity limits in cognition. The references we cited (Jensen & Lisman, 2005; Fell & Axmacher, 2011) refer to working memory as an example (often 4-7 items; note the number of peaks in Figure 3c -- obviously this needs more quantitative analysis). We would posit that, more generally, analysis of visual scenes requiring the concurrent separation of multiple objects is limited accordingly (one might call this a prediction -- or a `postdiction'? -- of our model). The question is then, how does the brain cope with this limitation? As usual in the face of perceptual capacity limits, the solution likely would involve attentional mechanisms. 
Such mechanisms might dynamically change the grouping of sensory inputs depending on task and context, such as whether questions are asked about individual parts and fine detail, or object groups and larger patterns. In the bars example, one might perceive the bars as a single group or texture, or focus on individual bars as capacity allows, perhaps relegating the rest of the image to a general background. Dynamically changing phase assignments according to context, through top-down attentional input, should, in principle, be possible within the proposed framework: this is similar to grouping according to parts or wholes with top-down input, as in the experiment of Section 3.2. ****** Reviewer 1 ****** Regarding the similarity to Rao et al: as we've acknowledged in the paper, the work is similar in several points (we arrived at our framework and results independently and were not aware of Rao et al.'s work initially -- we do not think the latter is particularly well known in this community). However, we would want to counter the impression that our work does not provide additional contributions. First of all, to clarify the issue of training on multiple objects: in Rao et al.'s work, the training data consists of a small number of fixed 8x8 images (N <= 16 images *in total* for a dataset), containing simple patterns (one example has 4 small images with two faces instead). To demonstrate binding by synchrony, two of these patterns are superimposed during test time. We believe that going beyond this extremely constrained task, in particular showing that the binding can work when trained and tested on multiple objects, on multiple datasets including MNIST containing thousands of (if simple) images, is a valid contribution from our side, which is not diminished by the fact that this result relies on the capability of the DBM (indeed, showing that this works with the DBM is itself a contribution as it might tell us something interesting about the kind of representations learned by a DBM that is not usually made explicit). Similarly, as far as we can see, Rao et al. do not discuss the gating aspect at all (as we mentioned in our paper), nor the specific issues with excitation and inhibition (Section 2.1) that we pointed out as motivation for using both classic and synchrony terms. Lastly, the following issues are addressed in our experiments only: network behavior on more than two objects; synchronization for objects that are not contiguous in the input images, as well as part vs. whole effects (Section 3.2); decoding distributed hidden representations according to phase (Section 3.3; in particular, it seems to be the case that Rao et al.s networks had a localist (single object<->single unit) representation in the top hidden layer in the majority of cases). 'the introduction of phase is done in an ad-hoc way, without real justification from probabilistic goals [...]' We agree that framing our approach as a proper probabilistic model would be helpful and perhaps more convincing to this audience (e.g. using an extension of the DUBM of Zemel et al., 1995, as discussed in the paper). At the same time, we think there is value to presenting the heuristic as is, based on a specific neuronal activation function, to emphasize that this idea could find application in neural networks more generally, not only those with a probabilistic interpretation/Boltzmann machines (that our approach is divorced from any one particular model is another difference when compared to Rao et al.'s work). 
In particular, we have performed exploratory experiments with networks trained (pretrained as real-valued nets or trained as complex-valued nets) with backprop, including (convolutional) feed-forward neural networks, autoencoders, or recurrent networks, as well as a biological model of lateral interactions in V1. We agree with the reviewer that a more rigorous mathematical and quantitative analysis is needed in any case. 'the results in the appendix appear to indicate that the approach is not working very well in general, and the best results are the ones shown in the main text.' We are not sure what exactly the reviewer is referring to here. If it is our statement that our approach of using pretrained real-valued networks does not always work, then yes, that is an issue that needs to be addressed. However, we should perhaps clarify that what we meant is: for some datasets and training parameters, models did not perform well; in cases where they did perform reasonably well however, that performance was relatively consistent across images, and the results show representative examples from those models. If, on the other hand, the reviewer is referring to results in the supplementary figure supposedly looking different from the figures in the main text, then no, other than perhaps with the schematic overview in Figure 6, we did not purposefully cherry-pick nicer looking results to display (most of the figures were actually cropped from the same larger figures simply for space reasons). 'How is the phase distribution segmented? Phase is a continuous variable, and the segmentation/partitioning seems to be done by hand for the examples. This needs to be addressed.' Partitioning was done with k-means, not by hand...? Other options are possible (also depending on whether the aim is a principled machine learning framework or addressing questions about the brain with a biological model). It is true that phase is a continuous variable, however, our results indicate that there is a tendency to form discrete phase clusters, in line with biological models (e.g. the one of Miconi & VanRullen). 'Also, what about the overlaps of the bars? these areas seem to be mis- or ambiguously labeled. is this a bug or a feature?' This is more of a problem with the task itself being ill-defined on binary images, where an overlapping pixel cannot really be meaningfully said to belong to either object alone (as there is no occlusion as such). We plan to use (representations of) real-valued images in the future. ****** Reviewer 2 ****** 'Comment: The text leading up to equation 1 is confusing regarding z. Is it an output or an input? It doesn't seem like we're dealing with a dynamical system, and the input was called x in the paragraph above. ' With x and z we just refer to, respectively, real-valued and complex-valued states in general. There are several notions of input here: the input (image) to the overall network, the units/states providing input to a specific unit, the total 'post-synaptic' input w . z, and the term that is ultimately used as input to the activation function with a real-valued domain (e.g. |w . z|). We have attempted to clarify this in this revision of the paper by introducing some additional variables (perhaps the reviewer could check whether Section 2.1 is clearer now). 'Comment: the use of |x| in equation 2 is confusing because presumably |w . x| is a vector norm whereas in w . |x| it denotes elementwise magnitude of the complex elements of x. Right?' 
Assuming the reviewer meant to write z not x: No, |w . z| is also the magnitude, and w . z happens to be a complex scalar; this is the input to a single unit, thus both w and z are vectors and this is a dot product. This should be clearer with the new notation. 'Comment: the authors mention that the two terms in f() can be weighted, but don't include those weights in Eq. 2, as I have done above (alpha, eta).' We simply left this out for simplicity, because we do not actually explore unbalanced weightings in this paper and didn't want to introduce unnecessary notation. 'A [conventional, real-valued] DBM was trained on small pictures with horizontal and vertical bars, and then 'converted' to a complex-valued network (and was the activation function changed to the one from Eq. 2? What does that mean in terms of inference in the DBM?)' Yes the activation function changed; we essentially use the normal DBM training as a form of pretraining for the final, complex-valued architecture. The resulting neural network is likely not exactly to be interpreted as a probabilistic model. However, if such an interpretation is desired, our understanding is that running the network could be seen as an approximation of inference in a suitably extended DUBM (by adding an off state and a classic term; refer to Zemel et al., 1995, for comparison). For our experiments, we used two procedures (with similar outcomes) in analogy to inference in a DBM: either sampling a binary output magnitude from f(), or letting f() determine the output magnitude deterministically; the output phase was always set to the phase of the total input. The first procedure is similar to inference in such an extended DUBM, but, rather than sampling from a circular normal distribution on the unit circle when the unit is on, we simply take the mode of that distribution. The second procedure should qualitatively correspond to mean-field inference in an extended DUBM (see Eqs. 9 and 10 in the DUBM paper), using a slightly different output function. By the way: perhaps we could have framed our work in such terms to begin with, but in a way that obscures what our original line of thinking was. '[...] 'sampling' (is this actually sampling from a probability distribution?)' No, not exactly. We actually only used the term 'sampling' in the standard, real-valued case, other than in the caption of Figure 3. We will fix the latter. 'Figure 3 illustrates what happens after 100 iterations of sampling, what happens after more iterations? Do the co-incidentally green bars change color independently of one another?' Phase assignments appear to be stable (see the supplementary movies), though we did not analyze this in detail. It should also be noted that the overall network is invariant to absolute phase, so only the relative phases matter. 'I would strongly suggest that the authors upload their Pylearn2 code so that others can reproduce the effects presented in this paper [...]' We are happy to publish the code either way, but it would unfortunately take some extra work to put it into a form that is accessible to others. We will do so if the paper gets accepted as a proper conference paper. 
****** Reviewer 3 ****** Unfortunately, we certainly can't lay claim to being the first to explore this idea in a computational framework (see the references cited), though we are perhaps the first to make a connection to the types of deep networks that have recently been employed in the deep learning community (DBMs in this case; also, as stated, the framework could in principle be applied to other deep nets, such as ConvNets). Apart from that, we are of course happy to agree that this a fascinating idea and that it seems worthwhile to bring it to the attention of the ICLR community.
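As a rough illustration of the two run-time procedures the authors describe in this response (a deterministic magnitude analogous to mean-field updates, or a sampled binary magnitude, with the output phase always set to the phase of the total input), one layer update might look as follows. The logistic squashing function and the equal weighting of the two terms are assumptions made for the sketch; this is not the authors' released code.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def complex_unit_update(W, b, z_in, stochastic=False, rng=None):
    """One update of a layer in the 'converted' network, per the description above.

    The drive mixes the synchrony term |W z| and the classic term W |z| (equal weights assumed);
    the output phase is always set to the phase of the total complex input.
    """
    total = W @ z_in
    drive = 0.5 * (np.abs(total) + W @ np.abs(z_in)) + b
    p_on = sigmoid(drive)
    if stochastic:
        # First procedure: sample a binary output magnitude from f().
        rng = rng or np.random.default_rng()
        mag = (rng.uniform(size=p_on.shape) < p_on).astype(float)
    else:
        # Second procedure: let f() give the output magnitude deterministically (mean-field-like).
        mag = p_on
    return mag * np.exp(1j * np.angle(total))
```

Alternating such updates across the layers of a pretrained DBM, with visible magnitudes clamped to the image and phases left free, would give the kind of run the experiments describe.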
O_cyOSWv8TrlS
Neuronal Synchrony in Complex-Valued Deep Networks
[ "David Reichert", "Thomas Serre" ]
Deep learning has recently led to great successes in tasks such as image recognition (e.g. Krizhevsky et al., 2012). However, deep networks are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to real cortical circuits. The challenge is to identify which neural mechanisms are relevant, and to find suitable abstractions to model them. Here, we show how aspects of spike timing, long hypothesized to play a crucial role in cortical information processing, could be incorporated into deep networks to build richer, versatile deep representations. We introduce a neural network formulation based on complex-valued neuronal units that is not only biologically meaningful but also amenable to a variety of deep learning frameworks. Here, units are attributed both a firing rate and a phase, the latter indicating properties of spike timing. We show how this formulation qualitatively captures several aspects thought to be related to neuronal synchrony, including gating of information processing and dynamic binding of distributed object representations. Focusing on the latter aspect, we demonstrate the potential of the approach in several simple experiments. Thus, synchrony could implement a flexible mechanism that fulfills multiple functional roles in deep networks.
[ "deep networks", "neuronal synchrony", "spike timing", "great successes", "tasks", "image recognition", "krizhevsky et", "power" ]
https://openreview.net/pdf?id=O_cyOSWv8TrlS
https://openreview.net/forum?id=O_cyOSWv8TrlS
au6kl4HEAJuS5
review
1,390,026,840,000
O_cyOSWv8TrlS
[ "everyone" ]
[ "Sainbayar Sukhbaatar" ]
ICLR.cc/2014/conference
2014
review: I found this paper very interesting and inspiring.
O_cyOSWv8TrlS
Neuronal Synchrony in Complex-Valued Deep Networks
[ "David Reichert", "Thomas Serre" ]
Deep learning has recently led to great successes in tasks such as image recognition (e.g. Krizhevsky et al., 2012). However, deep networks are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to real cortical circuits. The challenge is to identify which neural mechanisms are relevant, and to find suitable abstractions to model them. Here, we show how aspects of spike timing, long hypothesized to play a crucial role in cortical information processing, could be incorporated into deep networks to build richer, versatile deep representations. We introduce a neural network formulation based on complex-valued neuronal units that is not only biologically meaningful but also amenable to a variety of deep learning frameworks. Here, units are attributed both a firing rate and a phase, the latter indicating properties of spike timing. We show how this formulation qualitatively captures several aspects thought to be related to neuronal synchrony, including gating of information processing and dynamic binding of distributed object representations. Focusing on the latter aspect, we demonstrate the potential of the approach in several simple experiments. Thus, synchrony could implement a flexible mechanism that fulfills multiple functional roles in deep networks.
[ "deep networks", "neuronal synchrony", "spike timing", "great successes", "tasks", "image recognition", "krizhevsky et", "power" ]
https://openreview.net/pdf?id=O_cyOSWv8TrlS
https://openreview.net/forum?id=O_cyOSWv8TrlS
QuNxu31HJPlct
review
1,391,115,900,000
O_cyOSWv8TrlS
[ "everyone" ]
[ "anonymous reviewer 4c84" ]
ICLR.cc/2014/conference
2014
title: review of Neuronal Synchrony in Complex-Valued Deep Networks review: The paper describes a method to augment pre-trained DBMs with phase variables and shows some demonstrations of binding, segmentation, and partitioning (based on latent variables). Pros: The paper is very well written and introduces a number of concepts clearly. Phase is a curious neurophysiological phenomenon and is deserving of modeling that addresses the representational consequences/implications. I could see how this ad-hoc approach could be used to understand DNNs (like an extension of Zeiler and Fergus' visualization work). The paper may educate the ICLR community on the binding problem and proposals from the neuroscience community that argue for phase as a solution to this problem. Cons: A major issue with the described work is its similarity to the work of Rao and colleagues. The authors provide some comments about how their work is distinguished. However, these are merely rhetorical (your approach is general and theirs is not) or actually contributions that are not made by this paper but by previous work (DBMs can be trained to learn representations of multiple, simultaneously presented objects/patterns). The two major limitations of the paper in its current form: 1. the introduction of phase is done in an ad-hoc way, without real justification from probabilistic goals. The authors appear surprised that their hack worked at all. It seems the more rigorous approach would be to either introduce phase as a proper latent variable and train the network to optimize the distribution to match the data distribution (the usual approach to modeling), or to explain more rigorously why this ad hoc extension does not interfere with the network (however it is not even clear from the experiments that the ad-hoc model preserves the properties of the original network). A more rigorous mathematical approach might reveal that the basins of attraction are preserved with the introduction of phase, or that the phase variables are independent of the amplitude variables (I believe they are not). 2. The results of the experiments are mostly just pictures and lack quantitative assessment or any controls. The resulting work provides a demonstration of the phase idea for binding/grouping/segmentation, which are not new ideas (although they are probably new ideas to the ICLR community). Furthermore, the results in the appendix appear to indicate that the approach is not working very well in general, and the best results are the ones shown in the main text. The procedures avoid some obvious issues: How is the phase distribution segmented? Phase is a continuous variable, and the segmentation/partitioning seems to be done by hand for the examples. This needs to be addressed. It is not clear that the segmentation is working for the bars experiment because multiple bars are colored by the same phase. What is the goal here? that each bar has a different, unique phase value? Does the underlying phase distribution effectively partition the bars? Also, what about the overlaps of the bars? these areas seem to be mis- or ambiguously labeled. is this a bug or a feature? Overall, I think this is an interesting direction of research and the exposition is top notch; however, the underlying work falls short of some obvious extensions and methodological rigor.
I think this would make for a nice workshop paper, so that it could receive some feedback from the community and educate the community about the phase-binding idea, but it lacks some ingredients for a conference paper. Some other relevant references you might want to include: S. Jankowski et al. (1996). Complex-valued multistate neural associative memory. T. Nitta (2009). Complex-Valued Neural Networks: Utilizing High-Dimensional Parameters. C. Cadieu & K. Koepsell (2010). Modeling Image Structure with Factorized Phase-Coupled Boltzmann Machines.
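On this reviewer's question of how a continuous phase variable is segmented, the authors' response included earlier in this thread states that partitioning was done with k-means. One simple way to do that while respecting the circular topology of phase is to cluster the points (cos phi, sin phi) of sufficiently active units. The sketch below is a hypothetical implementation of that idea, not the authors' code; the threshold and cluster count are illustrative parameters.

```python
import numpy as np

def partition_by_phase(z_hidden, n_groups=3, mag_threshold=0.1, iters=50, seed=0):
    """Group active hidden units by phase via k-means on the unit circle.

    Clustering (cos phi, sin phi) rather than raw angles avoids the wrap-around
    problem at +/- pi; units with small magnitude are treated as 'off' (label -1).
    """
    rng = np.random.default_rng(seed)
    active = np.abs(z_hidden) > mag_threshold
    phi = np.angle(z_hidden[active])
    pts = np.stack([np.cos(phi), np.sin(phi)], axis=1)

    centers = pts[rng.choice(len(pts), size=n_groups, replace=False)]
    for _ in range(iters):
        d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        assign = d2.argmin(axis=1)
        for k in range(n_groups):
            if np.any(assign == k):
                c = pts[assign == k].mean(axis=0)
                centers[k] = c / (np.linalg.norm(c) + 1e-12)  # project the mean back onto the circle

    labels = np.full(z_hidden.shape, -1)
    labels[active] = assign
    return labels
```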
O_cyOSWv8TrlS
Neuronal Synchrony in Complex-Valued Deep Networks
[ "David Reichert", "Thomas Serre" ]
Deep learning has recently led to great successes in tasks such as image recognition (e.g. Krizhevsky et al., 2012). However, deep networks are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to real cortical circuits. The challenge is to identify which neural mechanisms are relevant, and to find suitable abstractions to model them. Here, we show how aspects of spike timing, long hypothesized to play a crucial role in cortical information processing, could be incorporated into deep networks to build richer, versatile deep representations. We introduce a neural network formulation based on complex-valued neuronal units that is not only biologically meaningful but also amenable to a variety of deep learning frameworks. Here, units are attributed both a firing rate and a phase, the latter indicating properties of spike timing. We show how this formulation qualitatively captures several aspects thought to be related to neuronal synchrony, including gating of information processing and dynamic binding of distributed object representations. Focusing on the latter aspect, we demonstrate the potential of the approach in several simple experiments. Thus, synchrony could implement a flexible mechanism that fulfills multiple functional roles in deep networks.
[ "deep networks", "neuronal synchrony", "spike timing", "great successes", "tasks", "image recognition", "krizhevsky et", "power" ]
https://openreview.net/pdf?id=O_cyOSWv8TrlS
https://openreview.net/forum?id=O_cyOSWv8TrlS
0oL8oJVsYd0Xl
review
1,389,649,440,000
O_cyOSWv8TrlS
[ "everyone" ]
[ "David Reichert" ]
ICLR.cc/2014/conference
2014
review: Thanks for the comment. Just to clarify, in the broader context there is plenty of relevant work that we did not discuss, due to limited space (we only discussed closely related work based on complex-valued nets). This includes models using coupled oscillators for segmentation. In particular, see also (and references therein): Yu, G., & Slotine, J.-J. (2009). Visual Grouping by Neural Oscillator Networks. IEEE Transactions on Neural Networks, 20(12), 1871–1884. doi:10.1109/TNN.2009.2031678